00:00:00.002 Started by upstream project "autotest-spdk-v24.01-LTS-vs-dpdk-v23.11" build number 627
00:00:00.002 originally caused by:
00:00:00.003 Started by upstream project "nightly-trigger" build number 3293
00:00:00.003 originally caused by:
00:00:00.003 Started by timer
00:00:00.003 Started by timer
00:00:00.082 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.083 The recommended git tool is: git
00:00:00.083 using credential 00000000-0000-0000-0000-000000000002
00:00:00.085 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.133 Fetching changes from the remote Git repository
00:00:00.134 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.194 Using shallow fetch with depth 1
00:00:00.194 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.194 > git --version # timeout=10
00:00:00.238 > git --version # 'git version 2.39.2'
00:00:00.238 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.261 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.261 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:05.439 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:05.451 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:05.463 Checking out Revision f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 (FETCH_HEAD)
00:00:05.463 > git config core.sparsecheckout # timeout=10
00:00:05.473 > git read-tree -mu HEAD # timeout=10
00:00:05.491 > git checkout -f f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 # timeout=5
00:00:05.521 Commit message: "spdk-abi-per-patch: fix check-so-deps-docker-autotest parameters"
00:00:05.521 > git rev-list --no-walk f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 # timeout=10
00:00:05.595 [Pipeline] Start of Pipeline
00:00:05.610 [Pipeline] library
00:00:05.612 Loading library shm_lib@master
00:00:05.612 Library shm_lib@master is cached. Copying from home.
00:00:05.632 [Pipeline] node
00:00:05.641 Running on WFP22 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:05.644 [Pipeline] {
00:00:05.657 [Pipeline] catchError
00:00:05.659 [Pipeline] {
00:00:05.673 [Pipeline] wrap
00:00:05.682 [Pipeline] {
00:00:05.692 [Pipeline] stage
00:00:05.694 [Pipeline] { (Prologue)
00:00:05.891 [Pipeline] sh
00:00:06.176 + logger -p user.info -t JENKINS-CI
00:00:06.195 [Pipeline] echo
00:00:06.197 Node: WFP22
00:00:06.205 [Pipeline] sh
00:00:06.504 [Pipeline] setCustomBuildProperty
00:00:06.514 [Pipeline] echo
00:00:06.516 Cleanup processes
00:00:06.520 [Pipeline] sh
00:00:06.798 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.798 2914598 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.811 [Pipeline] sh
00:00:07.094 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.094 ++ grep -v 'sudo pgrep'
00:00:07.094 ++ awk '{print $1}'
00:00:07.094 + sudo kill -9
00:00:07.094 + true
00:00:07.110 [Pipeline] cleanWs
00:00:07.120 [WS-CLEANUP] Deleting project workspace...
00:00:07.120 [WS-CLEANUP] Deferred wipeout is used...
00:00:07.128 [WS-CLEANUP] done
00:00:07.131 [Pipeline] setCustomBuildProperty
00:00:07.146 [Pipeline] sh
00:00:07.429 + sudo git config --global --replace-all safe.directory '*'
00:00:07.512 [Pipeline] httpRequest
00:00:07.561 [Pipeline] echo
00:00:07.563 Sorcerer 10.211.164.101 is alive
00:00:07.573 [Pipeline] httpRequest
00:00:07.579 HttpMethod: GET
00:00:07.579 URL: http://10.211.164.101/packages/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz
00:00:07.580 Sending request to url: http://10.211.164.101/packages/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz
00:00:07.590 Response Code: HTTP/1.1 200 OK
00:00:07.590 Success: Status code 200 is in the accepted range: 200,404
00:00:07.591 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz
00:00:10.583 [Pipeline] sh
00:00:10.863 + tar --no-same-owner -xf jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz
00:00:10.879 [Pipeline] httpRequest
00:00:10.901 [Pipeline] echo
00:00:10.903 Sorcerer 10.211.164.101 is alive
00:00:10.913 [Pipeline] httpRequest
00:00:10.918 HttpMethod: GET
00:00:10.919 URL: http://10.211.164.101/packages/spdk_dbef7efacb6f3438cd0fe1344a67946669fb1419.tar.gz
00:00:10.919 Sending request to url: http://10.211.164.101/packages/spdk_dbef7efacb6f3438cd0fe1344a67946669fb1419.tar.gz
00:00:10.932 Response Code: HTTP/1.1 200 OK
00:00:10.932 Success: Status code 200 is in the accepted range: 200,404
00:00:10.933 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_dbef7efacb6f3438cd0fe1344a67946669fb1419.tar.gz
00:00:29.575 [Pipeline] sh
00:00:29.860 + tar --no-same-owner -xf spdk_dbef7efacb6f3438cd0fe1344a67946669fb1419.tar.gz
00:00:32.409 [Pipeline] sh
00:00:32.693 + git -C spdk log --oneline -n5
00:00:32.693 dbef7efac test: fix dpdk builds on ubuntu24
00:00:32.693 4b94202c6 lib/event: Bug fix for framework_set_scheduler
00:00:32.693 507e9ba07 nvme: add lock_depth for ctrlr_lock
00:00:32.693 62fda7b5f nvme: check pthread_mutex_destroy() return value
00:00:32.693 e03c164a1 nvme: add nvme_ctrlr_lock
00:00:32.709 [Pipeline] withCredentials
00:00:32.719 > git --version # timeout=10
00:00:32.730 > git --version # 'git version 2.39.2'
00:00:32.748 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS
00:00:32.750 [Pipeline] {
00:00:32.756 [Pipeline] retry
00:00:32.757 [Pipeline] {
00:00:32.769 [Pipeline] sh
00:00:33.048 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11
00:00:33.629 [Pipeline] }
00:00:33.657 [Pipeline] // retry
00:00:33.663 [Pipeline] }
00:00:33.686 [Pipeline] // withCredentials
00:00:33.696 [Pipeline] httpRequest
00:00:33.717 [Pipeline] echo
00:00:33.719 Sorcerer 10.211.164.101 is alive
00:00:33.728 [Pipeline] httpRequest
00:00:33.734 HttpMethod: GET
00:00:33.734 URL: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:00:33.735 Sending request to url: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:00:33.742 Response Code: HTTP/1.1 200 OK
00:00:33.742 Success: Status code 200 is in the accepted range: 200,404
00:00:33.743 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:00:39.508 [Pipeline] sh
00:00:39.790 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:00:41.202 [Pipeline] sh
00:00:41.481 + git -C dpdk log --oneline -n5
00:00:41.481 eeb0605f11 version: 23.11.0
00:00:41.481 238778122a doc: update release notes for 23.11
00:00:41.481 46aa6b3cfc doc: fix description of RSS features
00:00:41.481 dd88f51a57 devtools: forbid DPDK API in cnxk base driver
00:00:41.481 7e421ae345 devtools: support skipping forbid rule check
00:00:41.493 [Pipeline] }
00:00:41.513 [Pipeline] // stage
00:00:41.524 [Pipeline] stage
00:00:41.526 [Pipeline] { (Prepare)
00:00:41.548 [Pipeline] writeFile
00:00:41.566 [Pipeline] sh
00:00:41.850 + logger -p user.info -t JENKINS-CI
00:00:41.865 [Pipeline] sh
00:00:42.149 + logger -p user.info -t JENKINS-CI
00:00:42.163 [Pipeline] sh
00:00:42.448 + cat autorun-spdk.conf
00:00:42.448 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:42.448 SPDK_TEST_NVMF=1
00:00:42.448 SPDK_TEST_NVME_CLI=1
00:00:42.448 SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:42.448 SPDK_TEST_NVMF_NICS=e810
00:00:42.448 SPDK_TEST_VFIOUSER=1
00:00:42.448 SPDK_RUN_UBSAN=1
00:00:42.448 NET_TYPE=phy
00:00:42.448 SPDK_TEST_NATIVE_DPDK=v23.11
00:00:42.448 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:00:42.455 RUN_NIGHTLY=1
00:00:42.461 [Pipeline] readFile
00:00:42.488 [Pipeline] withEnv
00:00:42.491 [Pipeline] {
00:00:42.505 [Pipeline] sh
00:00:42.790 + set -ex
00:00:42.790 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:00:42.790 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:00:42.790 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:42.790 ++ SPDK_TEST_NVMF=1
00:00:42.790 ++ SPDK_TEST_NVME_CLI=1
00:00:42.790 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:42.790 ++ SPDK_TEST_NVMF_NICS=e810
00:00:42.790 ++ SPDK_TEST_VFIOUSER=1
00:00:42.791 ++ SPDK_RUN_UBSAN=1
00:00:42.791 ++ NET_TYPE=phy
00:00:42.791 ++ SPDK_TEST_NATIVE_DPDK=v23.11
00:00:42.791 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:00:42.791 ++ RUN_NIGHTLY=1
00:00:42.791 + case $SPDK_TEST_NVMF_NICS in
00:00:42.791 + DRIVERS=ice
00:00:42.791 + [[ tcp == \r\d\m\a ]]
00:00:42.791 + [[ -n ice ]]
00:00:42.791 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:00:42.791 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:00:42.791 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:00:42.791 rmmod: ERROR: Module irdma is not currently loaded
00:00:42.791 rmmod: ERROR: Module i40iw is not currently loaded
00:00:42.791 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:00:42.791 + true
00:00:42.791 + for D in $DRIVERS
00:00:42.791 + sudo modprobe ice
00:00:42.791 + exit 0
00:00:42.801 [Pipeline] }
00:00:42.822 [Pipeline] // withEnv
00:00:42.828 [Pipeline] }
00:00:42.844 [Pipeline] // stage
00:00:42.854 [Pipeline] catchError
00:00:42.856 [Pipeline] {
00:00:42.872 [Pipeline] timeout
00:00:42.872 Timeout set to expire in 50 min
00:00:42.874 [Pipeline] {
00:00:42.889 [Pipeline] stage
00:00:42.890 [Pipeline] { (Tests)
00:00:42.902 [Pipeline] sh
00:00:43.184 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:43.184 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:43.184 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:43.184 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:00:43.184 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:43.184 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:00:43.184 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:00:43.184 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:00:43.184 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:00:43.184 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:00:43.184 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:00:43.184 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:43.184 + source /etc/os-release
00:00:43.184 ++ NAME='Fedora Linux'
00:00:43.184 ++ VERSION='38 (Cloud Edition)'
00:00:43.184 ++ ID=fedora
00:00:43.184 ++ VERSION_ID=38
00:00:43.184 ++ VERSION_CODENAME=
00:00:43.184 ++ PLATFORM_ID=platform:f38
00:00:43.184 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:00:43.184 ++ ANSI_COLOR='0;38;2;60;110;180'
00:00:43.184 ++ LOGO=fedora-logo-icon
00:00:43.184 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:00:43.184 ++ HOME_URL=https://fedoraproject.org/
00:00:43.184 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:00:43.184 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:00:43.184 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:00:43.184 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:00:43.184 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:00:43.184 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:00:43.184 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:00:43.184 ++ SUPPORT_END=2024-05-14
00:00:43.184 ++ VARIANT='Cloud Edition'
00:00:43.184 ++ VARIANT_ID=cloud
00:00:43.184 + uname -a
00:00:43.184 Linux spdk-wfp-22 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:00:43.184 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:00:45.746 Hugepages
00:00:45.746 node hugesize free / total
00:00:45.746 node0 1048576kB 0 / 0
00:00:45.746 node0 2048kB 0 / 0
00:00:45.746 node1 1048576kB 0 / 0
00:00:45.746 node1 2048kB 0 / 0
00:00:45.746
00:00:45.746 Type BDF Vendor Device NUMA Driver Device Block devices
00:00:45.746 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:00:45.746 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:00:45.746 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:00:45.746 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:00:45.746 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:00:45.746 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:00:45.746 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:00:45.746 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:00:45.746 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:00:45.746 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:00:45.746 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:00:45.746 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:00:45.746 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:00:45.746 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:00:45.746 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:00:45.746 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:00:46.006 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:00:46.006 + rm -f /tmp/spdk-ld-path
00:00:46.006 + source autorun-spdk.conf
00:00:46.006 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:46.006 ++ SPDK_TEST_NVMF=1
00:00:46.006 ++ SPDK_TEST_NVME_CLI=1
00:00:46.006 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:46.006 ++ SPDK_TEST_NVMF_NICS=e810
00:00:46.006 ++ SPDK_TEST_VFIOUSER=1
00:00:46.006 ++ SPDK_RUN_UBSAN=1
00:00:46.006 ++ NET_TYPE=phy
00:00:46.006 ++ SPDK_TEST_NATIVE_DPDK=v23.11
00:00:46.006 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:00:46.006 ++ RUN_NIGHTLY=1
00:00:46.006 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:00:46.006 + [[ -n '' ]]
00:00:46.006 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:46.006 + for M in /var/spdk/build-*-manifest.txt
00:00:46.006 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:00:46.006 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:00:46.006 + for M in /var/spdk/build-*-manifest.txt
00:00:46.006 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:00:46.006 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:00:46.006 ++ uname
00:00:46.006 + [[ Linux == \L\i\n\u\x ]]
00:00:46.006 + sudo dmesg -T
00:00:46.006 + sudo dmesg --clear
00:00:46.006 + dmesg_pid=2915539
00:00:46.006 + [[ Fedora Linux == FreeBSD ]]
00:00:46.006 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:46.006 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:46.006 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:00:46.006 + [[ -x /usr/src/fio-static/fio ]]
00:00:46.006 + export FIO_BIN=/usr/src/fio-static/fio
00:00:46.006 + FIO_BIN=/usr/src/fio-static/fio
00:00:46.006 + sudo dmesg -Tw
00:00:46.006 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:00:46.006 + [[ ! -v VFIO_QEMU_BIN ]]
00:00:46.006 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:00:46.006 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:46.006 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:46.006 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:00:46.006 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:46.006 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:46.006 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:00:46.006 Test configuration:
00:00:46.006 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:46.006 SPDK_TEST_NVMF=1
00:00:46.006 SPDK_TEST_NVME_CLI=1
00:00:46.006 SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:46.006 SPDK_TEST_NVMF_NICS=e810
00:00:46.006 SPDK_TEST_VFIOUSER=1
00:00:46.006 SPDK_RUN_UBSAN=1
00:00:46.006 NET_TYPE=phy
00:00:46.006 SPDK_TEST_NATIVE_DPDK=v23.11
00:00:46.006 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:00:46.267 RUN_NIGHTLY=1
22:46:18 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:00:46.267 22:46:18 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]]
00:00:46.267 22:46:18 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:00:46.267 22:46:18 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:00:46.267 22:46:18 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:46.267 22:46:18 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:46.267 22:46:18 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:46.267 22:46:18 -- paths/export.sh@5 -- $ export PATH
00:00:46.267 22:46:18 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:46.267 22:46:18 -- common/autobuild_common.sh@437 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:00:46.267 22:46:18 -- common/autobuild_common.sh@438 -- $ date +%s
00:00:46.267 22:46:18 -- common/autobuild_common.sh@438 -- $ mktemp -dt spdk_1721853978.XXXXXX
00:00:46.267 22:46:18 -- common/autobuild_common.sh@438 -- $ SPDK_WORKSPACE=/tmp/spdk_1721853978.v8wppu
00:00:46.267 22:46:18 -- common/autobuild_common.sh@440 -- $ [[ -n '' ]]
00:00:46.267 22:46:18 -- common/autobuild_common.sh@444 -- $ '[' -n v23.11 ']'
00:00:46.267 22:46:18 -- common/autobuild_common.sh@445 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:00:46.267 22:46:18 -- common/autobuild_common.sh@445 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk'
00:00:46.267 22:46:18 -- common/autobuild_common.sh@451 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:00:46.267 22:46:18 -- common/autobuild_common.sh@453 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:00:46.267 22:46:18 -- common/autobuild_common.sh@454 -- $ get_config_params
00:00:46.267 22:46:18 -- common/autotest_common.sh@387 -- $ xtrace_disable
00:00:46.267 22:46:18 -- common/autotest_common.sh@10 -- $ set +x
00:00:46.267 22:46:18 -- common/autobuild_common.sh@454 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build'
00:00:46.267 22:46:18 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:00:46.267 22:46:18 -- spdk/autobuild.sh@12 -- $ umask 022
00:00:46.267 22:46:18 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:46.267 22:46:18 -- spdk/autobuild.sh@16 -- $ date -u
00:00:46.267 Wed Jul 24 08:46:18 PM UTC 2024
00:00:46.267 22:46:18 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:00:46.267 LTS-60-gdbef7efac
00:00:46.267 22:46:18 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:00:46.267 22:46:18 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:00:46.267 22:46:18 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:00:46.267 22:46:18 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']'
00:00:46.267 22:46:18 -- common/autotest_common.sh@1083 -- $ xtrace_disable
00:00:46.267 22:46:18 -- common/autotest_common.sh@10 -- $ set +x
00:00:46.267 ************************************
00:00:46.267 START TEST ubsan
00:00:46.267 ************************************
00:00:46.267 22:46:18 -- common/autotest_common.sh@1104 -- $ echo 'using ubsan'
00:00:46.267 using ubsan
00:00:46.267
00:00:46.267 real 0m0.000s
00:00:46.267 user 0m0.000s
00:00:46.267 sys 0m0.000s
00:00:46.267 22:46:18 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:00:46.267 22:46:18 -- common/autotest_common.sh@10 -- $ set +x
00:00:46.267 ************************************
00:00:46.267 END TEST ubsan
00:00:46.267 ************************************
00:00:46.267 22:46:18 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']'
00:00:46.267 22:46:18 -- spdk/autobuild.sh@28 -- $ build_native_dpdk
00:00:46.267 22:46:18 -- common/autobuild_common.sh@430 -- $ run_test build_native_dpdk _build_native_dpdk
00:00:46.267 22:46:18 -- common/autotest_common.sh@1077 -- $ '[' 2 -le 1 ']'
00:00:46.267 22:46:18 -- common/autotest_common.sh@1083 -- $ xtrace_disable
00:00:46.267 22:46:18 -- common/autotest_common.sh@10 -- $ set +x
00:00:46.267 ************************************
00:00:46.267 START TEST build_native_dpdk
00:00:46.267 ************************************
00:00:46.267 22:46:18 -- common/autotest_common.sh@1104 -- $ _build_native_dpdk
00:00:46.267 22:46:18 -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir
00:00:46.267 22:46:18 -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir
00:00:46.267 22:46:18 -- common/autobuild_common.sh@50 -- $ local compiler_version
00:00:46.267 22:46:18 -- common/autobuild_common.sh@51 -- $ local compiler
00:00:46.267 22:46:18 -- common/autobuild_common.sh@52 -- $ local dpdk_kmods
00:00:46.267 22:46:18 -- common/autobuild_common.sh@53 -- $ local repo=dpdk
00:00:46.267 22:46:18 -- common/autobuild_common.sh@55 -- $ compiler=gcc
00:00:46.267 22:46:18 -- common/autobuild_common.sh@61 -- $ export CC=gcc
00:00:46.267 22:46:18 -- common/autobuild_common.sh@61 -- $ CC=gcc
00:00:46.267 22:46:18 -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]]
00:00:46.267 22:46:18 -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]]
00:00:46.267 22:46:18 -- common/autobuild_common.sh@68 -- $ gcc -dumpversion
00:00:46.267 22:46:18 -- common/autobuild_common.sh@68 -- $ compiler_version=13
00:00:46.267 22:46:18 -- common/autobuild_common.sh@69 -- $ compiler_version=13
00:00:46.267 22:46:18 -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:00:46.267 22:46:18 -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:00:46.267 22:46:18 -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
00:00:46.267 22:46:18 -- common/autobuild_common.sh@73 -- $ [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]]
00:00:46.267 22:46:18 -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:46.267 22:46:18 -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5
00:00:46.267 eeb0605f11 version: 23.11.0
00:00:46.267 238778122a doc: update release notes for 23.11
00:00:46.267 46aa6b3cfc doc: fix description of RSS features
00:00:46.267 dd88f51a57 devtools: forbid DPDK API in cnxk base driver
00:00:46.267 7e421ae345 devtools: support skipping forbid rule check
00:00:46.267 22:46:18 -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon'
00:00:46.267 22:46:18 -- common/autobuild_common.sh@86 -- $ dpdk_ldflags=
00:00:46.267 22:46:18 -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0
00:00:46.267 22:46:18 -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]]
00:00:46.267 22:46:18 -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]]
00:00:46.267 22:46:18 -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror'
00:00:46.267 22:46:18 -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]]
00:00:46.267 22:46:18 -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]]
00:00:46.267 22:46:18 -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow'
00:00:46.267 22:46:18 -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base")
00:00:46.267 22:46:18 -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n
00:00:46.267 22:46:18 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]]
00:00:46.267 22:46:18 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]]
00:00:46.267 22:46:18 -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]]
00:00:46.267 22:46:18 -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
00:00:46.267 22:46:18 -- common/autobuild_common.sh@168 -- $ uname -s
00:00:46.267 22:46:18 -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']'
00:00:46.267 22:46:18 -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0
00:00:46.268 22:46:18 -- scripts/common.sh@372 -- $ cmp_versions 23.11.0 '<' 21.11.0
00:00:46.268 22:46:18 -- scripts/common.sh@332 -- $ local ver1 ver1_l
00:00:46.268 22:46:18 -- scripts/common.sh@333 -- $ local ver2 ver2_l
00:00:46.268 22:46:18 -- scripts/common.sh@335 -- $ IFS=.-:
00:00:46.268 22:46:18 -- scripts/common.sh@335 -- $ read -ra ver1
00:00:46.268 22:46:18 -- scripts/common.sh@336 -- $ IFS=.-:
00:00:46.268 22:46:18 -- scripts/common.sh@336 -- $ read -ra ver2
00:00:46.268 22:46:18 -- scripts/common.sh@337 -- $ local 'op=<'
00:00:46.268 22:46:18 -- scripts/common.sh@339 -- $ ver1_l=3
00:00:46.268 22:46:18 -- scripts/common.sh@340 -- $ ver2_l=3
00:00:46.268 22:46:18 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v
00:00:46.268 22:46:18 -- scripts/common.sh@343 -- $ case "$op" in
00:00:46.268 22:46:18 -- scripts/common.sh@344 -- $ : 1
00:00:46.268 22:46:18 -- scripts/common.sh@363 -- $ (( v = 0 ))
00:00:46.268 22:46:18 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:00:46.268 22:46:18 -- scripts/common.sh@364 -- $ decimal 23
00:00:46.268 22:46:18 -- scripts/common.sh@352 -- $ local d=23
00:00:46.268 22:46:18 -- scripts/common.sh@353 -- $ [[ 23 =~ ^[0-9]+$ ]]
00:00:46.268 22:46:18 -- scripts/common.sh@354 -- $ echo 23
00:00:46.268 22:46:18 -- scripts/common.sh@364 -- $ ver1[v]=23
00:00:46.268 22:46:18 -- scripts/common.sh@365 -- $ decimal 21
00:00:46.268 22:46:18 -- scripts/common.sh@352 -- $ local d=21
00:00:46.268 22:46:18 -- scripts/common.sh@353 -- $ [[ 21 =~ ^[0-9]+$ ]]
00:00:46.268 22:46:18 -- scripts/common.sh@354 -- $ echo 21
00:00:46.268 22:46:18 -- scripts/common.sh@365 -- $ ver2[v]=21
00:00:46.268 22:46:18 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] ))
00:00:46.268 22:46:18 -- scripts/common.sh@366 -- $ return 1
00:00:46.268 22:46:18 -- common/autobuild_common.sh@173 -- $ patch -p1
00:00:46.268 patching file config/rte_config.h
00:00:46.268 Hunk #1 succeeded at 60 (offset 1 line).
00:00:46.268 22:46:18 -- common/autobuild_common.sh@176 -- $ lt 23.11.0 24.07.0
00:00:46.268 22:46:18 -- scripts/common.sh@372 -- $ cmp_versions 23.11.0 '<' 24.07.0
00:00:46.268 22:46:18 -- scripts/common.sh@332 -- $ local ver1 ver1_l
00:00:46.268 22:46:18 -- scripts/common.sh@333 -- $ local ver2 ver2_l
00:00:46.268 22:46:18 -- scripts/common.sh@335 -- $ IFS=.-:
00:00:46.268 22:46:18 -- scripts/common.sh@335 -- $ read -ra ver1
00:00:46.268 22:46:18 -- scripts/common.sh@336 -- $ IFS=.-:
00:00:46.268 22:46:18 -- scripts/common.sh@336 -- $ read -ra ver2
00:00:46.268 22:46:18 -- scripts/common.sh@337 -- $ local 'op=<'
00:00:46.268 22:46:18 -- scripts/common.sh@339 -- $ ver1_l=3
00:00:46.268 22:46:18 -- scripts/common.sh@340 -- $ ver2_l=3
00:00:46.268 22:46:18 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v
00:00:46.268 22:46:18 -- scripts/common.sh@343 -- $ case "$op" in
00:00:46.268 22:46:18 -- scripts/common.sh@344 -- $ : 1
00:00:46.268 22:46:18 -- scripts/common.sh@363 -- $ (( v = 0 ))
00:00:46.268 22:46:18 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:00:46.268 22:46:18 -- scripts/common.sh@364 -- $ decimal 23
00:00:46.268 22:46:18 -- scripts/common.sh@352 -- $ local d=23
00:00:46.268 22:46:18 -- scripts/common.sh@353 -- $ [[ 23 =~ ^[0-9]+$ ]]
00:00:46.268 22:46:18 -- scripts/common.sh@354 -- $ echo 23
00:00:46.268 22:46:18 -- scripts/common.sh@364 -- $ ver1[v]=23
00:00:46.268 22:46:18 -- scripts/common.sh@365 -- $ decimal 24
00:00:46.268 22:46:18 -- scripts/common.sh@352 -- $ local d=24
00:00:46.268 22:46:18 -- scripts/common.sh@353 -- $ [[ 24 =~ ^[0-9]+$ ]]
00:00:46.268 22:46:18 -- scripts/common.sh@354 -- $ echo 24
00:00:46.268 22:46:18 -- scripts/common.sh@365 -- $ ver2[v]=24
00:00:46.268 22:46:18 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] ))
00:00:46.268 22:46:18 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] ))
00:00:46.268 22:46:18 -- scripts/common.sh@367 -- $ return 0
00:00:46.268 22:46:18 -- common/autobuild_common.sh@177 -- $ patch -p1
00:00:46.268 patching file lib/pcapng/rte_pcapng.c
00:00:46.268 22:46:18 -- common/autobuild_common.sh@180 -- $ dpdk_kmods=false
00:00:46.268 22:46:18 -- common/autobuild_common.sh@181 -- $ uname -s
00:00:46.268 22:46:18 -- common/autobuild_common.sh@181 -- $ '[' Linux = FreeBSD ']'
00:00:46.268 22:46:18 -- common/autobuild_common.sh@185 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base
00:00:46.268 22:46:18 -- common/autobuild_common.sh@185 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:00:51.540 The Meson build system
00:00:51.540 Version: 1.3.1
00:00:51.540 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
00:00:51.540 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp
00:00:51.540 Build type: native build
00:00:51.540 Program cat found: YES (/usr/bin/cat)
00:00:51.540 Project name: DPDK
00:00:51.540 Project version: 23.11.0
00:00:51.540 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:00:51.540 C linker for the host machine: gcc ld.bfd 2.39-16
00:00:51.540 Host machine cpu family: x86_64
00:00:51.540 Host machine cpu: x86_64
00:00:51.540 Message: ## Building in Developer Mode ##
00:00:51.540 Program pkg-config found: YES (/usr/bin/pkg-config)
00:00:51.540 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh)
00:00:51.540 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh)
00:00:51.540 Program python3 found: YES (/usr/bin/python3)
00:00:51.540 Program cat found: YES (/usr/bin/cat)
00:00:51.540 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead.
00:00:51.540 Compiler for C supports arguments -march=native: YES
00:00:51.540 Checking for size of "void *" : 8
00:00:51.540 Checking for size of "void *" : 8 (cached)
00:00:51.540 Library m found: YES
00:00:51.540 Library numa found: YES
00:00:51.540 Has header "numaif.h" : YES
00:00:51.540 Library fdt found: NO
00:00:51.540 Library execinfo found: NO
00:00:51.540 Has header "execinfo.h" : YES
00:00:51.540 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:00:51.540 Run-time dependency libarchive found: NO (tried pkgconfig)
00:00:51.540 Run-time dependency libbsd found: NO (tried pkgconfig)
00:00:51.540 Run-time dependency jansson found: NO (tried pkgconfig)
00:00:51.540 Run-time dependency openssl found: YES 3.0.9
00:00:51.540 Run-time dependency libpcap found: YES 1.10.4
00:00:51.540 Has header "pcap.h" with dependency libpcap: YES
00:00:51.540 Compiler for C supports arguments -Wcast-qual: YES
00:00:51.540 Compiler for C supports arguments -Wdeprecated: YES
00:00:51.540 Compiler for C supports arguments -Wformat: YES
00:00:51.540 Compiler for C supports arguments -Wformat-nonliteral: NO
00:00:51.540 Compiler for C supports arguments -Wformat-security: NO
00:00:51.540 Compiler for C supports arguments -Wmissing-declarations: YES
00:00:51.540 Compiler for C supports arguments -Wmissing-prototypes: YES
00:00:51.540 Compiler for C supports arguments -Wnested-externs: YES
00:00:51.540 Compiler for C supports arguments -Wold-style-definition: YES
00:00:51.540 Compiler for C supports arguments -Wpointer-arith: YES
00:00:51.540 Compiler for C supports arguments -Wsign-compare: YES
00:00:51.540 Compiler for C supports arguments -Wstrict-prototypes: YES
00:00:51.540 Compiler for C supports arguments -Wundef: YES
00:00:51.540 Compiler for C supports arguments -Wwrite-strings: YES
00:00:51.540 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:00:51.540 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:00:51.540 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:00:51.541 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:00:51.541 Program objdump found: YES (/usr/bin/objdump)
00:00:51.541 Compiler for C supports arguments -mavx512f: YES
00:00:51.541 Checking if "AVX512 checking" compiles: YES
00:00:51.541 Fetching value of define "__SSE4_2__" : 1
00:00:51.541 Fetching value of define "__AES__" : 1
00:00:51.541 Fetching value of define "__AVX__" : 1
00:00:51.541 Fetching value of define "__AVX2__" : 1
00:00:51.541 Fetching value of define "__AVX512BW__" : 1
00:00:51.541 Fetching value of define "__AVX512CD__" : 1
00:00:51.541 Fetching value of define "__AVX512DQ__" : 1
00:00:51.541 Fetching value of define "__AVX512F__" : 1
00:00:51.541 Fetching value of define "__AVX512VL__" : 1
00:00:51.541 Fetching value of define "__PCLMUL__" : 1
00:00:51.541 Fetching value of define "__RDRND__" : 1
00:00:51.541 Fetching value of define "__RDSEED__" : 1
00:00:51.541 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:00:51.541 Fetching value of define "__znver1__" : (undefined)
00:00:51.541 Fetching value of define "__znver2__" : (undefined)
00:00:51.541 Fetching value of define "__znver3__" : (undefined)
00:00:51.541 Fetching value of define "__znver4__" : (undefined)
00:00:51.541 Compiler for C supports arguments -Wno-format-truncation: YES
00:00:51.541 Message: lib/log: Defining dependency "log"
00:00:51.541 Message: lib/kvargs: Defining dependency "kvargs"
00:00:51.541 Message: lib/telemetry: Defining dependency "telemetry"
00:00:51.541 Checking for function "getentropy" : NO
00:00:51.541 Message: lib/eal: Defining dependency "eal"
00:00:51.541 Message: lib/ring: Defining dependency "ring"
00:00:51.541 Message: lib/rcu: Defining dependency "rcu"
00:00:51.541 Message: lib/mempool: Defining dependency "mempool"
00:00:51.541 Message: lib/mbuf: Defining dependency "mbuf"
00:00:51.541 Fetching value of define "__PCLMUL__" : 1 (cached)
00:00:51.541 Fetching
value of define "__AVX512F__" : 1 (cached) 00:00:51.541 Fetching value of define "__AVX512BW__" : 1 (cached) 00:00:51.541 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:00:51.541 Fetching value of define "__AVX512VL__" : 1 (cached) 00:00:51.541 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:00:51.541 Compiler for C supports arguments -mpclmul: YES 00:00:51.541 Compiler for C supports arguments -maes: YES 00:00:51.541 Compiler for C supports arguments -mavx512f: YES (cached) 00:00:51.541 Compiler for C supports arguments -mavx512bw: YES 00:00:51.541 Compiler for C supports arguments -mavx512dq: YES 00:00:51.541 Compiler for C supports arguments -mavx512vl: YES 00:00:51.541 Compiler for C supports arguments -mvpclmulqdq: YES 00:00:51.541 Compiler for C supports arguments -mavx2: YES 00:00:51.541 Compiler for C supports arguments -mavx: YES 00:00:51.541 Message: lib/net: Defining dependency "net" 00:00:51.541 Message: lib/meter: Defining dependency "meter" 00:00:51.541 Message: lib/ethdev: Defining dependency "ethdev" 00:00:51.541 Message: lib/pci: Defining dependency "pci" 00:00:51.541 Message: lib/cmdline: Defining dependency "cmdline" 00:00:51.541 Message: lib/metrics: Defining dependency "metrics" 00:00:51.541 Message: lib/hash: Defining dependency "hash" 00:00:51.541 Message: lib/timer: Defining dependency "timer" 00:00:51.541 Fetching value of define "__AVX512F__" : 1 (cached) 00:00:51.541 Fetching value of define "__AVX512VL__" : 1 (cached) 00:00:51.541 Fetching value of define "__AVX512CD__" : 1 (cached) 00:00:51.541 Fetching value of define "__AVX512BW__" : 1 (cached) 00:00:51.541 Message: lib/acl: Defining dependency "acl" 00:00:51.541 Message: lib/bbdev: Defining dependency "bbdev" 00:00:51.541 Message: lib/bitratestats: Defining dependency "bitratestats" 00:00:51.541 Run-time dependency libelf found: YES 0.190 00:00:51.541 Message: lib/bpf: Defining dependency "bpf" 00:00:51.541 Message: lib/cfgfile: Defining dependency 
"cfgfile" 00:00:51.541 Message: lib/compressdev: Defining dependency "compressdev" 00:00:51.541 Message: lib/cryptodev: Defining dependency "cryptodev" 00:00:51.541 Message: lib/distributor: Defining dependency "distributor" 00:00:51.541 Message: lib/dmadev: Defining dependency "dmadev" 00:00:51.541 Message: lib/efd: Defining dependency "efd" 00:00:51.541 Message: lib/eventdev: Defining dependency "eventdev" 00:00:51.541 Message: lib/dispatcher: Defining dependency "dispatcher" 00:00:51.541 Message: lib/gpudev: Defining dependency "gpudev" 00:00:51.541 Message: lib/gro: Defining dependency "gro" 00:00:51.541 Message: lib/gso: Defining dependency "gso" 00:00:51.541 Message: lib/ip_frag: Defining dependency "ip_frag" 00:00:51.541 Message: lib/jobstats: Defining dependency "jobstats" 00:00:51.541 Message: lib/latencystats: Defining dependency "latencystats" 00:00:51.541 Message: lib/lpm: Defining dependency "lpm" 00:00:51.541 Fetching value of define "__AVX512F__" : 1 (cached) 00:00:51.541 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:00:51.541 Fetching value of define "__AVX512IFMA__" : (undefined) 00:00:51.541 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:00:51.541 Message: lib/member: Defining dependency "member" 00:00:51.541 Message: lib/pcapng: Defining dependency "pcapng" 00:00:51.541 Compiler for C supports arguments -Wno-cast-qual: YES 00:00:51.541 Message: lib/power: Defining dependency "power" 00:00:51.541 Message: lib/rawdev: Defining dependency "rawdev" 00:00:51.541 Message: lib/regexdev: Defining dependency "regexdev" 00:00:51.541 Message: lib/mldev: Defining dependency "mldev" 00:00:51.541 Message: lib/rib: Defining dependency "rib" 00:00:51.541 Message: lib/reorder: Defining dependency "reorder" 00:00:51.541 Message: lib/sched: Defining dependency "sched" 00:00:51.541 Message: lib/security: Defining dependency "security" 00:00:51.541 Message: lib/stack: Defining dependency "stack" 00:00:51.541 Has header 
"linux/userfaultfd.h" : YES 00:00:51.541 Has header "linux/vduse.h" : YES 00:00:51.541 Message: lib/vhost: Defining dependency "vhost" 00:00:51.541 Message: lib/ipsec: Defining dependency "ipsec" 00:00:51.541 Message: lib/pdcp: Defining dependency "pdcp" 00:00:51.541 Fetching value of define "__AVX512F__" : 1 (cached) 00:00:51.541 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:00:51.541 Fetching value of define "__AVX512BW__" : 1 (cached) 00:00:51.541 Message: lib/fib: Defining dependency "fib" 00:00:51.541 Message: lib/port: Defining dependency "port" 00:00:51.541 Message: lib/pdump: Defining dependency "pdump" 00:00:51.541 Message: lib/table: Defining dependency "table" 00:00:51.541 Message: lib/pipeline: Defining dependency "pipeline" 00:00:51.541 Message: lib/graph: Defining dependency "graph" 00:00:51.541 Message: lib/node: Defining dependency "node" 00:00:51.542 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:00:52.122 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:00:52.122 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:00:52.122 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:00:52.122 Compiler for C supports arguments -Wno-sign-compare: YES 00:00:52.122 Compiler for C supports arguments -Wno-unused-value: YES 00:00:52.122 Compiler for C supports arguments -Wno-format: YES 00:00:52.122 Compiler for C supports arguments -Wno-format-security: YES 00:00:52.122 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:00:52.122 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:00:52.122 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:00:52.122 Compiler for C supports arguments -Wno-unused-parameter: YES 00:00:52.122 Fetching value of define "__AVX512F__" : 1 (cached) 00:00:52.122 Fetching value of define "__AVX512BW__" : 1 (cached) 00:00:52.122 Compiler for C supports arguments -mavx512f: YES (cached) 00:00:52.122 Compiler for C supports 
arguments -mavx512bw: YES (cached) 00:00:52.122 Compiler for C supports arguments -march=skylake-avx512: YES 00:00:52.122 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:00:52.122 Has header "sys/epoll.h" : YES 00:00:52.122 Program doxygen found: YES (/usr/bin/doxygen) 00:00:52.122 Configuring doxy-api-html.conf using configuration 00:00:52.122 Configuring doxy-api-man.conf using configuration 00:00:52.122 Program mandb found: YES (/usr/bin/mandb) 00:00:52.122 Program sphinx-build found: NO 00:00:52.122 Configuring rte_build_config.h using configuration 00:00:52.122 Message: 00:00:52.122 ================= 00:00:52.122 Applications Enabled 00:00:52.122 ================= 00:00:52.122 00:00:52.122 apps: 00:00:52.122 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:00:52.122 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:00:52.122 test-pmd, test-regex, test-sad, test-security-perf, 00:00:52.122 00:00:52.122 Message: 00:00:52.122 ================= 00:00:52.122 Libraries Enabled 00:00:52.122 ================= 00:00:52.122 00:00:52.122 libs: 00:00:52.122 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:00:52.122 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:00:52.122 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:00:52.122 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:00:52.122 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:00:52.122 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:00:52.122 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:00:52.122 00:00:52.122 00:00:52.122 Message: 00:00:52.122 =============== 00:00:52.122 Drivers Enabled 00:00:52.122 =============== 00:00:52.122 00:00:52.122 common: 00:00:52.122 00:00:52.122 bus: 00:00:52.122 pci, vdev, 00:00:52.122 mempool: 00:00:52.122 ring, 00:00:52.122 dma: 00:00:52.122 
00:00:52.122 net: 00:00:52.122 i40e, 00:00:52.122 raw: 00:00:52.122 00:00:52.122 crypto: 00:00:52.122 00:00:52.122 compress: 00:00:52.122 00:00:52.122 regex: 00:00:52.122 00:00:52.122 ml: 00:00:52.122 00:00:52.122 vdpa: 00:00:52.122 00:00:52.122 event: 00:00:52.122 00:00:52.122 baseband: 00:00:52.122 00:00:52.122 gpu: 00:00:52.122 00:00:52.122 00:00:52.122 Message: 00:00:52.122 ================= 00:00:52.123 Content Skipped 00:00:52.123 ================= 00:00:52.123 00:00:52.123 apps: 00:00:52.123 00:00:52.123 libs: 00:00:52.123 00:00:52.123 drivers: 00:00:52.123 common/cpt: not in enabled drivers build config 00:00:52.123 common/dpaax: not in enabled drivers build config 00:00:52.123 common/iavf: not in enabled drivers build config 00:00:52.123 common/idpf: not in enabled drivers build config 00:00:52.123 common/mvep: not in enabled drivers build config 00:00:52.123 common/octeontx: not in enabled drivers build config 00:00:52.123 bus/auxiliary: not in enabled drivers build config 00:00:52.123 bus/cdx: not in enabled drivers build config 00:00:52.123 bus/dpaa: not in enabled drivers build config 00:00:52.123 bus/fslmc: not in enabled drivers build config 00:00:52.123 bus/ifpga: not in enabled drivers build config 00:00:52.123 bus/platform: not in enabled drivers build config 00:00:52.123 bus/vmbus: not in enabled drivers build config 00:00:52.123 common/cnxk: not in enabled drivers build config 00:00:52.123 common/mlx5: not in enabled drivers build config 00:00:52.123 common/nfp: not in enabled drivers build config 00:00:52.123 common/qat: not in enabled drivers build config 00:00:52.123 common/sfc_efx: not in enabled drivers build config 00:00:52.123 mempool/bucket: not in enabled drivers build config 00:00:52.123 mempool/cnxk: not in enabled drivers build config 00:00:52.123 mempool/dpaa: not in enabled drivers build config 00:00:52.123 mempool/dpaa2: not in enabled drivers build config 00:00:52.123 mempool/octeontx: not in enabled drivers build config 
00:00:52.123 mempool/stack: not in enabled drivers build config 00:00:52.123 dma/cnxk: not in enabled drivers build config 00:00:52.123 dma/dpaa: not in enabled drivers build config 00:00:52.123 dma/dpaa2: not in enabled drivers build config 00:00:52.123 dma/hisilicon: not in enabled drivers build config 00:00:52.123 dma/idxd: not in enabled drivers build config 00:00:52.123 dma/ioat: not in enabled drivers build config 00:00:52.123 dma/skeleton: not in enabled drivers build config 00:00:52.123 net/af_packet: not in enabled drivers build config 00:00:52.123 net/af_xdp: not in enabled drivers build config 00:00:52.123 net/ark: not in enabled drivers build config 00:00:52.123 net/atlantic: not in enabled drivers build config 00:00:52.123 net/avp: not in enabled drivers build config 00:00:52.123 net/axgbe: not in enabled drivers build config 00:00:52.123 net/bnx2x: not in enabled drivers build config 00:00:52.123 net/bnxt: not in enabled drivers build config 00:00:52.123 net/bonding: not in enabled drivers build config 00:00:52.123 net/cnxk: not in enabled drivers build config 00:00:52.123 net/cpfl: not in enabled drivers build config 00:00:52.123 net/cxgbe: not in enabled drivers build config 00:00:52.123 net/dpaa: not in enabled drivers build config 00:00:52.123 net/dpaa2: not in enabled drivers build config 00:00:52.123 net/e1000: not in enabled drivers build config 00:00:52.123 net/ena: not in enabled drivers build config 00:00:52.123 net/enetc: not in enabled drivers build config 00:00:52.123 net/enetfec: not in enabled drivers build config 00:00:52.123 net/enic: not in enabled drivers build config 00:00:52.123 net/failsafe: not in enabled drivers build config 00:00:52.123 net/fm10k: not in enabled drivers build config 00:00:52.123 net/gve: not in enabled drivers build config 00:00:52.123 net/hinic: not in enabled drivers build config 00:00:52.123 net/hns3: not in enabled drivers build config 00:00:52.123 net/iavf: not in enabled drivers build config 00:00:52.123 
net/ice: not in enabled drivers build config 00:00:52.123 net/idpf: not in enabled drivers build config 00:00:52.123 net/igc: not in enabled drivers build config 00:00:52.123 net/ionic: not in enabled drivers build config 00:00:52.123 net/ipn3ke: not in enabled drivers build config 00:00:52.123 net/ixgbe: not in enabled drivers build config 00:00:52.123 net/mana: not in enabled drivers build config 00:00:52.123 net/memif: not in enabled drivers build config 00:00:52.123 net/mlx4: not in enabled drivers build config 00:00:52.123 net/mlx5: not in enabled drivers build config 00:00:52.123 net/mvneta: not in enabled drivers build config 00:00:52.123 net/mvpp2: not in enabled drivers build config 00:00:52.123 net/netvsc: not in enabled drivers build config 00:00:52.123 net/nfb: not in enabled drivers build config 00:00:52.123 net/nfp: not in enabled drivers build config 00:00:52.123 net/ngbe: not in enabled drivers build config 00:00:52.123 net/null: not in enabled drivers build config 00:00:52.123 net/octeontx: not in enabled drivers build config 00:00:52.123 net/octeon_ep: not in enabled drivers build config 00:00:52.123 net/pcap: not in enabled drivers build config 00:00:52.123 net/pfe: not in enabled drivers build config 00:00:52.123 net/qede: not in enabled drivers build config 00:00:52.123 net/ring: not in enabled drivers build config 00:00:52.123 net/sfc: not in enabled drivers build config 00:00:52.123 net/softnic: not in enabled drivers build config 00:00:52.123 net/tap: not in enabled drivers build config 00:00:52.123 net/thunderx: not in enabled drivers build config 00:00:52.123 net/txgbe: not in enabled drivers build config 00:00:52.123 net/vdev_netvsc: not in enabled drivers build config 00:00:52.123 net/vhost: not in enabled drivers build config 00:00:52.123 net/virtio: not in enabled drivers build config 00:00:52.123 net/vmxnet3: not in enabled drivers build config 00:00:52.123 raw/cnxk_bphy: not in enabled drivers build config 00:00:52.123 raw/cnxk_gpio: 
not in enabled drivers build config 00:00:52.123 raw/dpaa2_cmdif: not in enabled drivers build config 00:00:52.123 raw/ifpga: not in enabled drivers build config 00:00:52.123 raw/ntb: not in enabled drivers build config 00:00:52.123 raw/skeleton: not in enabled drivers build config 00:00:52.123 crypto/armv8: not in enabled drivers build config 00:00:52.123 crypto/bcmfs: not in enabled drivers build config 00:00:52.123 crypto/caam_jr: not in enabled drivers build config 00:00:52.123 crypto/ccp: not in enabled drivers build config 00:00:52.123 crypto/cnxk: not in enabled drivers build config 00:00:52.123 crypto/dpaa_sec: not in enabled drivers build config 00:00:52.123 crypto/dpaa2_sec: not in enabled drivers build config 00:00:52.123 crypto/ipsec_mb: not in enabled drivers build config 00:00:52.123 crypto/mlx5: not in enabled drivers build config 00:00:52.123 crypto/mvsam: not in enabled drivers build config 00:00:52.123 crypto/nitrox: not in enabled drivers build config 00:00:52.123 crypto/null: not in enabled drivers build config 00:00:52.123 crypto/octeontx: not in enabled drivers build config 00:00:52.123 crypto/openssl: not in enabled drivers build config 00:00:52.123 crypto/scheduler: not in enabled drivers build config 00:00:52.123 crypto/uadk: not in enabled drivers build config 00:00:52.123 crypto/virtio: not in enabled drivers build config 00:00:52.123 compress/isal: not in enabled drivers build config 00:00:52.123 compress/mlx5: not in enabled drivers build config 00:00:52.123 compress/octeontx: not in enabled drivers build config 00:00:52.123 compress/zlib: not in enabled drivers build config 00:00:52.123 regex/mlx5: not in enabled drivers build config 00:00:52.123 regex/cn9k: not in enabled drivers build config 00:00:52.123 ml/cnxk: not in enabled drivers build config 00:00:52.123 vdpa/ifc: not in enabled drivers build config 00:00:52.123 vdpa/mlx5: not in enabled drivers build config 00:00:52.123 vdpa/nfp: not in enabled drivers build config 
00:00:52.123 vdpa/sfc: not in enabled drivers build config 00:00:52.123 event/cnxk: not in enabled drivers build config 00:00:52.123 event/dlb2: not in enabled drivers build config 00:00:52.123 event/dpaa: not in enabled drivers build config 00:00:52.123 event/dpaa2: not in enabled drivers build config 00:00:52.123 event/dsw: not in enabled drivers build config 00:00:52.123 event/opdl: not in enabled drivers build config 00:00:52.123 event/skeleton: not in enabled drivers build config 00:00:52.123 event/sw: not in enabled drivers build config 00:00:52.123 event/octeontx: not in enabled drivers build config 00:00:52.123 baseband/acc: not in enabled drivers build config 00:00:52.123 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:00:52.123 baseband/fpga_lte_fec: not in enabled drivers build config 00:00:52.123 baseband/la12xx: not in enabled drivers build config 00:00:52.123 baseband/null: not in enabled drivers build config 00:00:52.123 baseband/turbo_sw: not in enabled drivers build config 00:00:52.123 gpu/cuda: not in enabled drivers build config 00:00:52.123 00:00:52.123 00:00:52.123 Build targets in project: 217 00:00:52.123 00:00:52.123 DPDK 23.11.0 00:00:52.123 00:00:52.123 User defined options 00:00:52.123 libdir : lib 00:00:52.123 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:00:52.123 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:00:52.123 c_link_args : 00:00:52.123 enable_docs : false 00:00:52.123 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:00:52.123 enable_kmods : false 00:00:52.123 machine : native 00:00:52.123 tests : false 00:00:52.123 00:00:52.123 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:00:52.123 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
00:00:52.123 22:46:24 -- common/autobuild_common.sh@189 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j112 00:00:52.123 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:00:52.123 [1/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:00:52.123 [2/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:00:52.123 [3/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:00:52.392 [4/707] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:00:52.392 [5/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:00:52.392 [6/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:00:52.392 [7/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:00:52.392 [8/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:00:52.392 [9/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:00:52.392 [10/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:00:52.392 [11/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:00:52.392 [12/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:00:52.392 [13/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:00:52.392 [14/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:00:52.392 [15/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:00:52.392 [16/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:00:52.392 [17/707] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:00:52.392 [18/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:00:52.392 [19/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:00:52.392 [20/707] Linking static target lib/librte_kvargs.a 
00:00:52.392 [21/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:00:52.392 [22/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:00:52.392 [23/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:00:52.392 [24/707] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:00:52.392 [25/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:00:52.392 [26/707] Linking static target lib/librte_pci.a 00:00:52.392 [27/707] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:00:52.392 [28/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:00:52.392 [29/707] Compiling C object lib/librte_log.a.p/log_log.c.o 00:00:52.392 [30/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:00:52.392 [31/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:00:52.677 [32/707] Linking static target lib/librte_log.a 00:00:52.677 [33/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:00:52.677 [34/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:00:52.677 [35/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:00:52.677 [36/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:00:52.677 [37/707] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:00:52.677 [38/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:00:52.677 [39/707] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:00:52.938 [40/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:00:52.938 [41/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:00:52.938 [42/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:00:52.938 [43/707] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:00:52.938 [44/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:00:52.938 [45/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:00:52.938 [46/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:00:52.938 [47/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:00:52.938 [48/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:00:52.938 [49/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:00:52.938 [50/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:00:52.938 [51/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:00:52.938 [52/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:00:52.938 [53/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:00:52.938 [54/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:00:52.938 [55/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:00:52.938 [56/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:00:52.938 [57/707] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:00:52.938 [58/707] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:00:52.938 [59/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:00:52.938 [60/707] Linking static target lib/librte_meter.a 00:00:52.938 [61/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:00:52.938 [62/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:00:52.938 [63/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:00:52.939 [64/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:00:52.939 [65/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 
00:00:52.939 [66/707] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:00:52.939 [67/707] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:00:52.939 [68/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:00:52.939 [69/707] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:00:52.939 [70/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:00:52.939 [71/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:00:52.939 [72/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:00:52.939 [73/707] Linking static target lib/librte_ring.a 00:00:52.939 [74/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:00:52.939 [75/707] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:00:52.939 [76/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:00:52.939 [77/707] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:00:52.939 [78/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:00:52.939 [79/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:00:52.939 [80/707] Linking static target lib/librte_cmdline.a 00:00:52.939 [81/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:00:52.939 [82/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:00:52.939 [83/707] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:00:52.939 [84/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:00:53.200 [85/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:00:53.200 [86/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:00:53.200 [87/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:00:53.200 [88/707] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 
00:00:53.200 [89/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:00:53.200 [90/707] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:00:53.200 [91/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:00:53.200 [92/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:00:53.200 [93/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:00:53.200 [94/707] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:00:53.200 [95/707] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o
00:00:53.200 [96/707] Linking static target lib/net/libnet_crc_avx512_lib.a
00:00:53.200 [97/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:00:53.200 [98/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o
00:00:53.200 [99/707] Linking static target lib/librte_metrics.a
00:00:53.200 [100/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:00:53.200 [101/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o
00:00:53.200 [102/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:00:53.200 [103/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o
00:00:53.200 [104/707] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o
00:00:53.200 [105/707] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:00:53.200 [106/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:00:53.200 [107/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o
00:00:53.200 [108/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:00:53.200 [109/707] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:00:53.200 [110/707] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o
00:00:53.200 [111/707] Linking static target lib/librte_net.a
00:00:53.200 [112/707] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o
00:00:53.200 [113/707] Linking static target lib/librte_bitratestats.a
00:00:53.200 [114/707] Linking static target lib/librte_cfgfile.a
00:00:53.200 [115/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:00:53.200 [116/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:00:53.200 [117/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:00:53.200 [118/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:00:53.200 [119/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:00:53.200 [120/707] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:00:53.200 [121/707] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:00:53.200 [122/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:00:53.200 [123/707] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:00:53.200 [124/707] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:00:53.466 [125/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:00:53.466 [126/707] Linking target lib/librte_log.so.24.0
00:00:53.466 [127/707] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:00:53.466 [128/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:00:53.466 [129/707] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o
00:00:53.466 [130/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:00:53.466 [131/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:00:53.466 [132/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:00:53.466 [133/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:00:53.466 [134/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:00:53.466 [135/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o
00:00:53.466 [136/707] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:00:53.466 [137/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o
00:00:53.466 [138/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:00:53.466 [139/707] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:00:53.466 [140/707] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:00:53.466 [141/707] Linking static target lib/librte_timer.a
00:00:53.466 [142/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:00:53.466 [143/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o
00:00:53.466 [144/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o
00:00:53.466 [145/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:00:53.466 [146/707] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output)
00:00:53.466 [147/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:00:53.466 [148/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:00:53.466 [149/707] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols
00:00:53.466 [150/707] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o
00:00:53.466 [151/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:00:53.466 [152/707] Linking static target lib/librte_mempool.a
00:00:53.466 [153/707] Linking static target lib/librte_bbdev.a
00:00:53.466 [154/707] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o
00:00:53.466 [155/707] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o
00:00:53.733 [156/707] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:00:53.733 [157/707] Linking target lib/librte_kvargs.so.24.0
00:00:53.733 [158/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o
00:00:53.733 [159/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:00:53.733 [160/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:00:53.733 [161/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o
00:00:53.733 [162/707] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:00:53.733 [163/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:00:53.733 [164/707] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o
00:00:53.733 [165/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:00:53.733 [166/707] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:00:53.733 [167/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:00:53.733 [168/707] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output)
00:00:53.733 [169/707] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o
00:00:53.733 [170/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o
00:00:53.733 [171/707] Linking static target lib/librte_jobstats.a
00:00:53.733 [172/707] Linking static target lib/librte_compressdev.a
00:00:53.733 [173/707] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o
00:00:53.733 [174/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o
00:00:53.733 [175/707] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:00:53.733 [176/707] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output)
00:00:53.733 [177/707] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o
00:00:53.733 [178/707] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o
00:00:53.733 [179/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:00:53.733 [180/707] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:00:53.733 [181/707] Compiling C object lib/librte_member.a.p/member_rte_member.c.o
00:00:53.733 [182/707] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols
00:00:53.733 [183/707] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o
00:00:53.733 [184/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o
00:00:53.992 [185/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o
00:00:53.992 [186/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o
00:00:53.992 [187/707] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o
00:00:53.992 [188/707] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o
00:00:53.992 [189/707] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o
00:00:53.992 [190/707] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o
00:00:53.992 [191/707] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:00:53.992 [192/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:00:53.992 [193/707] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o
00:00:53.992 [194/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:00:53.992 [195/707] Linking static target lib/librte_telemetry.a
00:00:53.992 [196/707] Linking static target lib/librte_latencystats.a
00:00:53.992 [197/707] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:00:53.992 [198/707] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o
00:00:53.992 [199/707] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:00:53.992 [200/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o
00:00:53.992 [201/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:00:53.992 [202/707] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:00:53.992 [203/707] Linking static target lib/librte_rcu.a
00:00:53.992 [204/707] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o
00:00:53.992 [205/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o
00:00:53.992 [206/707] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o
00:00:53.992 [207/707] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o
00:00:53.992 [208/707] Linking static target lib/member/libsketch_avx512_tmp.a
00:00:53.992 [209/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o
00:00:53.992 [210/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o
00:00:53.992 [211/707] Linking static target lib/librte_dispatcher.a
00:00:53.992 [212/707] Linking static target lib/librte_eal.a
00:00:53.992 [213/707] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:00:53.992 [214/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o
00:00:53.992 [215/707] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:00:53.992 [216/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o
00:00:53.992 [217/707] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:00:53.992 [218/707] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o
00:00:53.992 [219/707] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o
00:00:53.992 [220/707] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o
00:00:53.992 [221/707] Linking static target lib/librte_dmadev.a
00:00:53.992 [222/707] Linking static target lib/librte_stack.a
00:00:53.992 [223/707] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o
00:00:53.992 [224/707] Linking static target lib/librte_gpudev.a
00:00:53.992 [225/707] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o
00:00:53.992 [226/707] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o
00:00:53.992 [227/707] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o
00:00:53.992 [228/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o
00:00:53.992 [229/707] Linking static target lib/librte_gro.a
00:00:53.992 [230/707] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o
00:00:53.992 [231/707] Linking static target lib/librte_gso.a
00:00:53.992 [232/707] Linking static target lib/librte_regexdev.a
00:00:54.253 [233/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o
00:00:54.253 [234/707] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:00:54.253 [235/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o
00:00:54.253 [236/707] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o
00:00:54.253 [237/707] Linking static target lib/librte_distributor.a
00:00:54.253 [238/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:00:54.253 [239/707] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o
00:00:54.253 [240/707] Linking static target lib/librte_mbuf.a
00:00:54.253 [241/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o
00:00:54.253 [242/707] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:00:54.253 [243/707] Linking static target lib/librte_rawdev.a
00:00:54.253 [244/707] Linking static target lib/librte_mldev.a
00:00:54.253 [245/707] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o
00:00:54.253 [246/707] Linking static target lib/librte_power.a
00:00:54.253 [247/707] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o
00:00:54.253 [248/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o
00:00:54.253 [249/707] Linking static target lib/librte_ip_frag.a
00:00:54.253 [250/707] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output)
00:00:54.253 [251/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o
00:00:54.253 [252/707] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:00:54.253 [253/707] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o
00:00:54.253 [254/707] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output)
00:00:54.253 [255/707] Linking static target lib/librte_reorder.a
00:00:54.253 [256/707] Linking static target lib/librte_pcapng.a
00:00:54.253 [257/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o
00:00:54.253 [258/707] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:00:54.253 [259/707] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:00:54.518 [260/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o
00:00:54.518 [261/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o
00:00:54.518 [262/707] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o
00:00:54.518 [263/707] Linking static target lib/librte_security.a
00:00:54.518 [264/707] Linking static target lib/librte_bpf.a
00:00:54.518 [265/707] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output)
00:00:54.518 [266/707] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output)
00:00:54.518 [267/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o
00:00:54.518 [268/707] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o
00:00:54.518 [269/707] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:00:54.518 [270/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o
00:00:54.518 [271/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o
00:00:54.518 [272/707] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:00:54.518 [273/707] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:00:54.518 [274/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:00:54.518 [275/707] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o
00:00:54.518 [276/707] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output)
00:00:54.518 [277/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o
00:00:54.518 [278/707] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o
00:00:54.518 [279/707] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o
00:00:54.518 [280/707] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o
00:00:54.518 [281/707] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output)
00:00:54.518 [282/707] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o
00:00:54.518 [283/707] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output)
00:00:54.518 [284/707] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o
00:00:54.518 [285/707] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:00:54.518 [286/707] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o
00:00:54.782 [287/707] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:00:54.782 [288/707] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o
00:00:54.782 [289/707] Linking static target lib/librte_lpm.a
00:00:54.782 [290/707] Compiling C object lib/librte_node.a.p/node_null.c.o
00:00:54.782 [291/707] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:00:54.782 [292/707] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:00:54.782 [293/707] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o
00:00:54.782 [294/707] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o
00:00:54.782 [295/707] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:00:54.782 [296/707] Linking static target lib/librte_rib.a
00:00:54.782 [297/707] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output)
00:00:54.782 [298/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:00:54.782 [299/707] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o
00:00:54.782 [300/707] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o
00:00:54.782 [301/707] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output)
00:00:54.782 [302/707] Linking target lib/librte_telemetry.so.24.0
00:00:54.782 [303/707] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o
00:00:54.782 [304/707] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:00:54.782 [305/707] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output)
00:00:54.782 [306/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o
00:00:54.782 [307/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o
00:00:54.782 [308/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o
00:00:54.782 [309/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o
00:00:54.782 [310/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o
00:00:54.782 [311/707] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:00:54.782 [312/707] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o
00:00:55.047 [313/707] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output)
00:00:55.047 [314/707] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o
00:00:55.047 [315/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o
00:00:55.047 [316/707] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o
00:00:55.047 [317/707] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o
00:00:55.047 [318/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o
00:00:55.047 [319/707] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o
00:00:55.047 [320/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o
00:00:55.047 [321/707] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols
00:00:55.047 [322/707] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o
00:00:55.047 [323/707] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o
00:00:55.047 [324/707] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o
00:00:55.047 [325/707] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output)
00:00:55.048 [326/707] Linking static target lib/librte_efd.a
00:00:55.048 [327/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o
00:00:55.048 [328/707] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o
00:00:55.048 [329/707] Compiling C object lib/librte_graph.a.p/graph_node.c.o
00:00:55.048 [330/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o
00:00:55.048 [331/707] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o
00:00:55.048 [332/707] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o
00:00:55.048 [333/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o
00:00:55.048 [334/707] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o
00:00:55.048 [335/707] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o
00:00:55.048 [336/707] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:00:55.048 [337/707] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:00:55.048 [338/707] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o
00:00:55.308 [339/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o
00:00:55.308 [340/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:00:55.308 [341/707] Compiling C object lib/librte_fib.a.p/fib_trie.c.o
00:00:55.308 [342/707] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o
00:00:55.308 [343/707] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o
00:00:55.308 [344/707] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o
00:00:55.308 [345/707] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o
00:00:55.308 [346/707] Compiling C object lib/librte_node.a.p/node_log.c.o
00:00:55.308 [347/707] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output)
00:00:55.308 [348/707] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o
00:00:55.308 [349/707] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o
00:00:55.308 [350/707] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o
00:00:55.308 [351/707] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o
00:00:55.308 [352/707] Compiling C object lib/librte_graph.a.p/graph_graph.c.o
00:00:55.308 [353/707] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:00:55.308 [354/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:00:55.308 [355/707] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o
00:00:55.308 [356/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o
00:00:55.308 [357/707] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output)
00:00:55.308 [358/707] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o
00:00:55.308 [359/707] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o
00:00:55.308 [360/707] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output)
00:00:55.308 [361/707] Linking static target lib/librte_fib.a
00:00:55.308 [362/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o
00:00:55.576 [363/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o
00:00:55.576 [364/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o
00:00:55.576 [365/707] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o
00:00:55.576 [366/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:00:55.576 [367/707] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o
00:00:55.576 [368/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:00:55.576 [369/707] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output)
00:00:55.576 [370/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o
00:00:55.576 [371/707] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output)
00:00:55.576 [372/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o
00:00:55.576 [373/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o
00:00:55.576 [374/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o
00:00:55.576 [375/707] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:00:55.576 [376/707] Linking static target drivers/libtmp_rte_bus_vdev.a
00:00:55.576 [377/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o
00:00:55.576 [378/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:00:55.576 [379/707] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o
00:00:55.576 [380/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:00:55.576 [381/707] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o
00:00:55.576 [382/707] Linking static target lib/librte_pdump.a
00:00:55.576 [383/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o
00:00:55.576 [384/707] Compiling C object app/dpdk-graph.p/graph_cli.c.o
00:00:55.839 [385/707] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o
00:00:55.839 [386/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:00:55.839 [387/707] Linking static target lib/librte_graph.a
00:00:55.839 [388/707] Compiling C object app/dpdk-graph.p/graph_mempool.c.o
00:00:55.839 [389/707] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o
00:00:55.839 [390/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o
00:00:55.839 [391/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o
00:00:55.839 [392/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:00:55.839 [393/707] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o
00:00:55.839 [394/707] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o
00:00:55.839 [395/707] Linking static target drivers/libtmp_rte_bus_pci.a
00:00:55.839 [396/707] Compiling C object app/dpdk-graph.p/graph_conn.c.o
00:00:55.839 [397/707] Compiling C object app/dpdk-graph.p/graph_main.c.o
00:00:55.839 [398/707] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o
00:00:55.839 [399/707] Compiling C object app/dpdk-graph.p/graph_utils.c.o
00:00:55.839 [400/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o
00:00:55.839 [401/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o
00:00:55.839 [402/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o
00:00:55.839 [403/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o
00:00:55.839 [404/707] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o
00:00:55.839 [405/707] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o
00:00:55.839 [406/707] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:00:55.839 [407/707] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:00:55.839 [408/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o
00:00:55.839 [409/707] Linking static target drivers/librte_bus_vdev.a
00:00:55.839 [410/707] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:00:56.104 [411/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o
00:00:56.104 [412/707] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o
00:00:56.104 [413/707] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output)
00:00:56.104 [414/707] Compiling C object app/dpdk-graph.p/graph_neigh.c.o
00:00:56.104 [415/707] Linking static target lib/librte_sched.a
00:00:56.104 [416/707] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o
00:00:56.104 [417/707] Compiling C object app/dpdk-graph.p/graph_graph.c.o
00:00:56.104 [418/707] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o
00:00:56.104 [419/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o
00:00:56.104 [420/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o
00:00:56.105 [421/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o
00:00:56.105 [422/707] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o
00:00:56.105 [423/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o
00:00:56.105 [424/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o
00:00:56.105 [425/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o
00:00:56.105 [426/707] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output)
00:00:56.105 [427/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o
00:00:56.105 [428/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o
00:00:56.105 [429/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o
00:00:56.105 [430/707] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o
00:00:56.105 [431/707] Linking static target lib/librte_table.a
00:00:56.105 [432/707] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:00:56.105 [433/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:00:56.105 [434/707] Linking static target lib/librte_cryptodev.a
00:00:56.105 [435/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o
00:00:56.367 [436/707] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:00:56.367 [437/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o
00:00:56.367 [438/707] Linking static target drivers/librte_bus_pci.a
00:00:56.367 [439/707] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:00:56.367 [440/707] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o
00:00:56.367 [441/707] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o
00:00:56.367 [442/707] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o
00:00:56.367 [443/707] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o
00:00:56.367 [444/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o
00:00:56.367 [445/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o
00:00:56.367 [446/707] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o
00:00:56.367 [447/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o
00:00:56.367 [448/707] Linking static target lib/librte_ipsec.a
00:00:56.367 [449/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o
00:00:56.367 [450/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o
00:00:56.367 [451/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o
00:00:56.367 [452/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o
00:00:56.367 [453/707] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:00:56.367 [454/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o
00:00:56.367 [455/707] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output)
00:00:56.367 [456/707] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:00:56.367 [457/707] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o
00:00:56.367 [458/707] Linking static target drivers/libtmp_rte_mempool_ring.a
00:00:56.367 [459/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o
00:00:56.367 [460/707] Linking static target lib/librte_member.a
00:00:56.367 [461/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o
00:00:56.367 [462/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o
00:00:56.367 [463/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o
00:00:56.367 [464/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o
00:00:56.367 [465/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o
00:00:56.367 [466/707] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o
00:00:56.367 [467/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o
00:00:56.627 [468/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o
00:00:56.627 [469/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o
00:00:56.627 [470/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o
00:00:56.627 [471/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o
00:00:56.627 [472/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o
00:00:56.627 [473/707] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o
00:00:56.627 [474/707] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:00:56.627 [475/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o
00:00:56.627 [476/707] Linking static target lib/librte_hash.a
00:00:56.627 [477/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o
00:00:56.627 [478/707] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output)
00:00:56.627 [479/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o
00:00:56.627 [480/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o
00:00:56.627 [481/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o
00:00:56.627 [482/707] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o
00:00:56.627 [483/707] Linking static target lib/librte_pdcp.a
00:00:56.627 [484/707] Linking static target lib/librte_node.a
00:00:56.627 [485/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o
00:00:56.627 [486/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o
00:00:56.627 [487/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o
00:00:56.627 [488/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o
00:00:56.627 [489/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o
00:00:56.627 [490/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o
00:00:56.627 [491/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o
00:00:56.627 [492/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o
00:00:56.627 [493/707] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:00:56.627 [494/707] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output)
00:00:56.627 [495/707] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o
00:00:56.627 [496/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o
00:00:56.627 [497/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o
00:00:56.627 [498/707] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:00:56.627 [499/707] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:00:56.887 [500/707] Linking static target drivers/librte_mempool_ring.a
00:00:56.887 [501/707] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output)
00:00:56.887 [502/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o
00:00:56.887 [503/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o
00:00:56.887 [504/707] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o
00:00:56.887 [505/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o
00:00:56.887 [506/707] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o
00:00:56.887 [507/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o
00:00:56.887 [508/707] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output)
00:00:56.887 [509/707] Linking static target lib/librte_port.a
00:00:56.887 [510/707] Linking static target lib/acl/libavx2_tmp.a
00:00:56.887 [511/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o
00:00:56.887 [512/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o
00:00:56.887 [513/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o
00:00:56.887 [514/707] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o
00:00:56.887 [515/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o
00:00:56.887 [516/707] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output)
00:00:56.887 [517/707] Compiling C object app/dpdk-pdump.p/pdump_main.c.o
00:00:56.887 [518/707] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o
00:00:56.887 [519/707] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o
00:00:56.887 [520/707] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o
00:00:56.887 [521/707] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o
00:00:56.887 [522/707] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o
00:00:56.887 [523/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o
00:00:56.887 [524/707] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o
00:00:56.887 [525/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o
00:00:56.887 [526/707] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o
00:00:56.887 [527/707] Linking static target lib/librte_eventdev.a
00:00:56.887 [528/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o
00:00:56.887 [529/707] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output)
00:00:56.887 [530/707] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output)
00:00:56.887 [531/707] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o
00:00:57.146 [532/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o
00:00:57.146 [533/707] Compiling C
object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:00:57.146 [534/707] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:00:57.146 [535/707] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:00:57.146 [536/707] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:00:57.146 [537/707] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:00:57.146 [538/707] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:00:57.146 [539/707] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:00:57.146 [540/707] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:00:57.146 [541/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:00:57.146 [542/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:00:57.146 [543/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:00:57.146 [544/707] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:00:57.146 [545/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:00:57.146 [546/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:00:57.146 [547/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:00:57.146 [548/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:00:57.146 [549/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:00:57.146 [550/707] Linking static target lib/librte_acl.a 00:00:57.405 [551/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:00:57.405 [552/707] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:00:57.405 [553/707] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:00:57.405 [554/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:00:57.405 
[555/707] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:00:57.405 [556/707] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:00:57.405 [557/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:00:57.405 [558/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:00:57.405 [559/707] Linking static target drivers/net/i40e/base/libi40e_base.a 00:00:57.405 [560/707] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:00:57.405 [561/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:00:57.405 [562/707] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:00:57.405 [563/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:00:57.405 [564/707] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:00:57.405 [565/707] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:00:57.663 [566/707] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:00:57.663 [567/707] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:00:57.663 [568/707] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:00:57.663 [569/707] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:00:57.922 [570/707] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:00:57.922 [571/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:00:57.922 [572/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:00:57.922 [573/707] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:00:57.922 [574/707] Linking static target lib/librte_ethdev.a 00:00:58.180 [575/707] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:00:58.180 [576/707] Compiling C object 
app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:00:58.439 [577/707] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:00:58.696 [578/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:00:58.696 [579/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:00:58.954 [580/707] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:00:59.521 [581/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:00:59.521 [582/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:00:59.521 [583/707] Linking static target drivers/libtmp_rte_net_i40e.a 00:00:59.780 [584/707] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:00:59.780 [585/707] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:00:59.780 [586/707] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:00:59.780 [587/707] Linking static target drivers/librte_net_i40e.a 00:00:59.780 [588/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:00.717 [589/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:01:00.717 [590/707] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:00.977 [591/707] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:01:01.235 [592/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:01:06.512 [593/707] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:06.512 [594/707] Linking target lib/librte_eal.so.24.0 00:01:06.512 [595/707] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:01:06.512 [596/707] Linking target lib/librte_ring.so.24.0 00:01:06.512 [597/707] Linking target lib/librte_pci.so.24.0 00:01:06.512 [598/707] Linking target 
lib/librte_meter.so.24.0 00:01:06.512 [599/707] Linking target lib/librte_timer.so.24.0 00:01:06.512 [600/707] Linking target lib/librte_jobstats.so.24.0 00:01:06.512 [601/707] Linking target lib/librte_cfgfile.so.24.0 00:01:06.512 [602/707] Linking target drivers/librte_bus_vdev.so.24.0 00:01:06.512 [603/707] Linking target lib/librte_stack.so.24.0 00:01:06.512 [604/707] Linking target lib/librte_dmadev.so.24.0 00:01:06.512 [605/707] Linking target lib/librte_rawdev.so.24.0 00:01:06.512 [606/707] Linking target lib/librte_acl.so.24.0 00:01:06.512 [607/707] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:01:06.512 [608/707] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:01:06.512 [609/707] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:01:06.512 [610/707] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:01:06.512 [611/707] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:01:06.512 [612/707] Linking target lib/librte_mempool.so.24.0 00:01:06.512 [613/707] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:01:06.512 [614/707] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:01:06.512 [615/707] Linking target lib/librte_rcu.so.24.0 00:01:06.512 [616/707] Linking target drivers/librte_bus_pci.so.24.0 00:01:06.512 [617/707] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:01:06.512 [618/707] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:01:06.512 [619/707] Linking target lib/librte_mbuf.so.24.0 00:01:06.512 [620/707] Linking target drivers/librte_mempool_ring.so.24.0 00:01:06.512 [621/707] Linking target lib/librte_rib.so.24.0 00:01:06.512 [622/707] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:01:06.512 [623/707] Generating 
symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:01:06.512 [624/707] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:01:06.513 [625/707] Linking target lib/librte_fib.so.24.0 00:01:06.513 [626/707] Linking target lib/librte_compressdev.so.24.0 00:01:06.513 [627/707] Linking target lib/librte_reorder.so.24.0 00:01:06.513 [628/707] Linking target lib/librte_gpudev.so.24.0 00:01:06.513 [629/707] Linking target lib/librte_regexdev.so.24.0 00:01:06.513 [630/707] Linking target lib/librte_distributor.so.24.0 00:01:06.513 [631/707] Linking target lib/librte_bbdev.so.24.0 00:01:06.513 [632/707] Linking target lib/librte_net.so.24.0 00:01:06.513 [633/707] Linking target lib/librte_cryptodev.so.24.0 00:01:06.513 [634/707] Linking target lib/librte_mldev.so.24.0 00:01:06.513 [635/707] Linking target lib/librte_sched.so.24.0 00:01:06.772 [636/707] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:01:06.772 [637/707] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:01:06.772 [638/707] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:01:06.772 [639/707] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:01:06.772 [640/707] Linking target lib/librte_security.so.24.0 00:01:06.772 [641/707] Linking target lib/librte_hash.so.24.0 00:01:06.772 [642/707] Linking target lib/librte_cmdline.so.24.0 00:01:06.772 [643/707] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:07.032 [644/707] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:01:07.032 [645/707] Linking target lib/librte_ethdev.so.24.0 00:01:07.032 [646/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:01:07.032 [647/707] Linking target lib/librte_pdcp.so.24.0 00:01:07.032 [648/707] Generating symbol file 
lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:01:07.032 [649/707] Linking static target lib/librte_pipeline.a 00:01:07.032 [650/707] Linking target lib/librte_lpm.so.24.0 00:01:07.032 [651/707] Linking target lib/librte_ipsec.so.24.0 00:01:07.032 [652/707] Linking target lib/librte_member.so.24.0 00:01:07.032 [653/707] Linking target lib/librte_efd.so.24.0 00:01:07.032 [654/707] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:01:07.032 [655/707] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:01:07.032 [656/707] Linking target lib/librte_pcapng.so.24.0 00:01:07.291 [657/707] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:01:07.291 [658/707] Linking target lib/librte_gso.so.24.0 00:01:07.291 [659/707] Linking target lib/librte_metrics.so.24.0 00:01:07.291 [660/707] Linking target lib/librte_gro.so.24.0 00:01:07.291 [661/707] Linking target lib/librte_ip_frag.so.24.0 00:01:07.291 [662/707] Linking target lib/librte_bpf.so.24.0 00:01:07.291 [663/707] Linking target lib/librte_power.so.24.0 00:01:07.291 [664/707] Linking target lib/librte_eventdev.so.24.0 00:01:07.291 [665/707] Linking target drivers/librte_net_i40e.so.24.0 00:01:07.291 [666/707] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:01:07.291 [667/707] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:01:07.291 [668/707] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:01:07.291 [669/707] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:01:07.291 [670/707] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:01:07.291 [671/707] Linking target lib/librte_dispatcher.so.24.0 00:01:07.291 [672/707] Linking target lib/librte_pdump.so.24.0 00:01:07.291 [673/707] Linking target lib/librte_latencystats.so.24.0 00:01:07.291 [674/707] 
Linking target lib/librte_bitratestats.so.24.0 00:01:07.291 [675/707] Linking target lib/librte_graph.so.24.0 00:01:07.291 [676/707] Linking target lib/librte_port.so.24.0 00:01:07.551 [677/707] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:01:07.551 [678/707] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:01:07.551 [679/707] Linking target lib/librte_table.so.24.0 00:01:07.551 [680/707] Linking target lib/librte_node.so.24.0 00:01:07.551 [681/707] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:07.811 [682/707] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:01:07.811 [683/707] Linking static target lib/librte_vhost.a 00:01:08.070 [684/707] Linking target app/dpdk-test-acl 00:01:08.070 [685/707] Linking target app/dpdk-test-cmdline 00:01:08.070 [686/707] Linking target app/dpdk-dumpcap 00:01:08.070 [687/707] Linking target app/dpdk-pdump 00:01:08.070 [688/707] Linking target app/dpdk-test-crypto-perf 00:01:08.070 [689/707] Linking target app/dpdk-proc-info 00:01:08.070 [690/707] Linking target app/dpdk-test-sad 00:01:08.070 [691/707] Linking target app/dpdk-test-regex 00:01:08.070 [692/707] Linking target app/dpdk-test-dma-perf 00:01:08.070 [693/707] Linking target app/dpdk-graph 00:01:08.070 [694/707] Linking target app/dpdk-test-flow-perf 00:01:08.070 [695/707] Linking target app/dpdk-test-bbdev 00:01:08.070 [696/707] Linking target app/dpdk-test-fib 00:01:08.070 [697/707] Linking target app/dpdk-test-gpudev 00:01:08.329 [698/707] Linking target app/dpdk-test-compress-perf 00:01:08.329 [699/707] Linking target app/dpdk-test-pipeline 00:01:08.329 [700/707] Linking target app/dpdk-test-mldev 00:01:08.329 [701/707] Linking target app/dpdk-test-security-perf 00:01:08.329 [702/707] Linking target app/dpdk-test-eventdev 00:01:08.329 [703/707] Linking target app/dpdk-testpmd 00:01:09.743 [704/707] Generating lib/vhost.sym_chk with a custom command 
(wrapped by meson to capture output) 00:01:10.041 [705/707] Linking target lib/librte_vhost.so.24.0 00:01:12.571 [706/707] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.571 [707/707] Linking target lib/librte_pipeline.so.24.0 00:01:12.571 22:46:44 -- common/autobuild_common.sh@190 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j112 install 00:01:12.571 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:01:12.571 [0/1] Installing files. 00:01:12.834 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:01:12.834 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:12.834 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:12.834 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:12.834 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:12.834 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:12.834 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:12.834 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:12.834 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:12.834 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:12.834 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:12.834 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:12.834 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:12.834 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:12.834 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:12.834 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:12.834 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:12.834 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:12.834 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:12.835 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:12.835 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:12.835 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:12.835 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:12.835 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:12.835 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:12.835 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:12.835 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:12.835 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:12.835 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:12.835 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:12.835 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:12.835 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:12.835 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:12.835 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:12.835 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:01:12.835 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:01:12.835 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:01:12.835 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:01:12.835 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:01:12.835 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:12.835 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:12.835 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:12.835 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:12.835 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:12.835 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:12.835 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:12.835 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:12.835 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:12.835 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:12.835 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:12.835 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:12.835 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:12.835 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:12.835 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:12.835 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:12.835 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:01:12.835 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:01:12.835 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:01:12.835 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:01:12.835 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:01:12.835 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:01:12.835 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:01:12.835 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:01:12.835 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:01:12.835 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:01:12.835 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:01:12.835 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:01:12.835 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:12.835 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:12.835 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:12.835 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:12.835 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:12.835 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:12.835 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:12.835 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:12.835 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:12.835 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:12.835 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:12.836 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:12.836 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:12.836 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:12.836 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:12.836 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:12.836 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:12.836 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:12.836 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:12.836 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:12.836 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:12.836 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:12.836 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:12.836 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:12.836 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:12.836 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:12.836 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:12.836 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:12.836 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:12.836 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:12.836 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:12.836 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:12.836 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:12.836 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:12.836 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:12.836 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:01:12.836 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:01:12.836 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:01:12.836 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:01:12.836 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:12.836 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:12.836 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:12.836 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:12.836 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:12.836 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:01:12.836 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:01:12.836 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:01:12.836 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:01:12.836 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:01:12.836 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:01:12.836 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:01:12.836 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:01:12.836 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:12.836 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:12.836 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:12.836 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:12.836 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:12.836 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:12.836 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:12.836 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:12.836 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:12.836 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:12.836 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:12.836 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:12.836 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:12.836 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:12.836 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:12.836 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:12.836 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:12.836 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:12.836 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:12.836 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:12.836 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:12.836 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:12.836 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:01:12.836 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:01:12.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:01:12.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:01:12.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:01:12.837 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:01:12.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:01:12.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:01:12.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:01:12.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:01:12.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:12.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:12.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:12.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:12.837 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:12.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:01:12.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:01:12.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:01:12.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:12.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:12.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:12.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:12.837 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:12.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:12.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:01:12.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:01:12.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:01:12.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:01:12.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:01:12.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:01:12.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:01:12.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:01:12.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:01:12.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:01:12.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:01:12.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:01:12.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:01:12.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:01:12.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:01:12.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:01:12.837 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:01:12.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:01:12.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:01:12.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:01:12.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:01:12.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:01:12.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:01:12.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:01:12.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:01:12.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:01:12.837 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:01:12.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:12.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:12.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:12.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:12.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:12.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:12.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:12.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:12.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:12.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:12.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:12.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:12.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:12.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:12.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:12.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:12.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:12.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:12.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:12.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:12.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:12.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:12.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:12.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:12.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:12.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:12.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:12.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:12.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:12.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:12.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:12.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:12.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:12.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:12.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:12.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:12.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:12.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 
00:01:12.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:01:12.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:01:12.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:01:12.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:01:12.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:01:12.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:01:12.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:01:12.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:01:12.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:01:12.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:01:12.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:01:12.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:01:12.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:01:12.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:01:12.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:01:12.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:01:12.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:01:12.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:01:12.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:01:12.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:01:12.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:01:12.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast
00:01:12.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast
00:01:12.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd
00:01:12.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node
00:01:12.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node
00:01:12.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:01:12.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:01:12.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:01:12.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:01:12.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:01:12.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:01:12.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared
00:01:12.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:01:12.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:01:12.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:01:12.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:01:12.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:01:12.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:01:12.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:01:12.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:01:12.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:01:12.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:01:12.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:01:12.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:01:12.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:01:12.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:01:12.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:01:12.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:01:12.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:01:12.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:01:12.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:01:12.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:01:12.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:01:12.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:01:12.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:01:12.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:01:12.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:01:12.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:01:12.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:01:12.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:01:12.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:01:12.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:01:12.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:01:12.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:01:12.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:01:12.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:01:12.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:01:12.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:01:12.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:01:12.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:01:12.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:01:12.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:01:12.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq
00:01:12.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq
00:01:12.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt
00:01:12.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt
00:01:12.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly
00:01:12.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly
00:01:12.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app
00:01:12.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app
00:01:12.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:01:12.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:01:12.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:01:12.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:01:12.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:01:12.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:01:12.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:01:12.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:01:12.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:01:12.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:01:12.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:01:12.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:01:12.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:01:12.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:01:12.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:01:12.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:01:12.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:01:12.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:01:12.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:01:12.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:01:12.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:01:12.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:01:12.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:01:12.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:01:12.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:01:12.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:01:12.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:01:12.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:01:12.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:01:12.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:01:12.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:01:12.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:01:12.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:01:12.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:01:12.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:01:12.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:01:12.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:01:12.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:01:12.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:01:12.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:01:12.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:01:12.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:01:12.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:01:12.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:01:12.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:01:12.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:01:12.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec_sa.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:01:12.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:01:12.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:01:12.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:01:12.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:01:12.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:01:12.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:01:12.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:01:12.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:01:12.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:01:12.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:01:12.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool
00:01:12.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:01:12.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:01:12.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:01:12.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:01:12.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib
00:01:12.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib
00:01:12.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib
00:01:12.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb
00:01:12.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb
00:01:12.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb
00:01:12.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto
00:01:12.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto
00:01:12.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer
00:01:12.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer
00:01:12.840 Installing lib/librte_log.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:12.840 Installing lib/librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:12.840 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:12.840 Installing lib/librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:12.841 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:12.841 Installing lib/librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:12.841 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:12.841 Installing lib/librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:12.841 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:12.841 Installing lib/librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:12.841 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:12.841 Installing lib/librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:12.841 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:12.841 Installing lib/librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:12.841 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:12.841 Installing lib/librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:12.841 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:12.841 Installing lib/librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:12.841 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:12.841 Installing lib/librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:12.841 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:12.841 Installing lib/librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:12.841 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:12.841 Installing lib/librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:12.841 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:12.841 Installing lib/librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:12.841 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:12.841 Installing lib/librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:12.841 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:12.841 Installing lib/librte_hash.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:12.841 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:12.841 Installing lib/librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:12.841 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:12.841 Installing lib/librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:12.841 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:12.841 Installing lib/librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:12.841 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:12.841 Installing lib/librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:12.841 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:12.841 Installing lib/librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:12.841 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:12.841 Installing lib/librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:12.841 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:12.841 Installing lib/librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:12.841 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:12.841 Installing lib/librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:12.841 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:12.841 Installing lib/librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:12.841 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:12.841 Installing lib/librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:12.841 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:12.841 Installing lib/librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:12.841 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:12.841 Installing lib/librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:12.841 Installing lib/librte_dispatcher.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:12.841 Installing lib/librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:12.841 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:12.841 Installing lib/librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:12.841 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:12.841 Installing lib/librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:12.841 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:12.841 Installing lib/librte_gso.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:12.841 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:12.841 Installing lib/librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:12.841 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:12.841 Installing lib/librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:12.841 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:12.841 Installing lib/librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:12.841 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:12.841 Installing lib/librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:12.841 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:12.841 Installing lib/librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:12.841 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:12.841 Installing lib/librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:12.841 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:12.841 Installing lib/librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:12.841 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:12.841 Installing lib/librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:12.841 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:12.841 Installing lib/librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:12.841 Installing lib/librte_mldev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:12.841 Installing lib/librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:12.841 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:12.841 Installing lib/librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:12.841 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:13.103 Installing lib/librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:13.103 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:13.103 Installing lib/librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:13.103 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:13.103 Installing lib/librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:13.103 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:13.103 Installing lib/librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:13.103 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:13.103 Installing 
lib/librte_vhost.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:13.103 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:13.103 Installing lib/librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:13.103 Installing lib/librte_pdcp.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:13.103 Installing lib/librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:13.103 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:13.103 Installing lib/librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:13.103 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:13.103 Installing lib/librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:13.103 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:13.103 Installing lib/librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:13.103 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:13.103 Installing lib/librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:13.103 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:13.103 Installing lib/librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:13.103 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:13.103 Installing lib/librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:13.103 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:13.103 Installing lib/librte_node.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:13.103 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:13.103 Installing drivers/librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:01:13.103 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:13.103 Installing drivers/librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:01:13.103 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:13.104 Installing drivers/librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:01:13.104 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:13.104 Installing drivers/librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:01:13.104 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:13.104 Installing app/dpdk-graph to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:13.104 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:13.104 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:13.104 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:13.104 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:13.104 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:13.104 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:13.104 Installing app/dpdk-test-crypto-perf to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:13.104 Installing app/dpdk-test-dma-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:13.104 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:13.104 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:13.104 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:13.104 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:13.104 Installing app/dpdk-test-mldev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:13.104 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:13.104 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:13.104 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:13.104 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:13.104 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:13.104 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.104 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/log/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.104 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.104 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.104 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:13.104 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:13.104 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:13.104 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:13.104 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:13.104 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:13.104 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:13.104 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:13.104 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:13.104 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:13.104 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:13.104 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:13.104 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.104 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.104 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.104 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.104 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.104 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.104 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.104 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.104 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.104 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.104 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.104 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.104 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.104 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.104 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.104 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.104 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.104 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.104 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.104 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.104 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:01:13.104 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.104 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.104 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.104 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.104 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.104 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.104 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.104 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.104 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.104 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.104 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.104 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.104 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.104 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.104 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.104 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.104 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.104 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.104 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.104 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lock_annotations.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.104 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.105 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.105 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.105 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.105 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.105 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.105 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.105 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.105 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.105 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.105 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.105 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.105 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.105 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.105 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_stdatomic.h 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.105 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.105 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.105 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.105 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.105 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.105 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.105 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.105 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.105 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.105 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.105 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.105 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.105 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.105 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.105 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.105 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.105 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.105 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.105 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.105 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.105 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.105 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.105 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.105 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.105 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.105 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.105 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.105 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.105 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.105 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.105 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.105 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.105 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.105 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.105 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.105 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.105 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.105 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_dtls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.105 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.105 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.105 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.105 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.105 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.105 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.105 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.105 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.105 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.105 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.105 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.105 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.105 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.105 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.105 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_pdcp_hdr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.105 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.105 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.105 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.105 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.105 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.105 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.105 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.105 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.105 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.105 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.106 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.106 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.106 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.106 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.106 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.106 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.106 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.106 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.106 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.106 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.106 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.106 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.106 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.106 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.106 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.106 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.106 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.106 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.106 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.106 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.106 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.106 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.106 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.106 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.106 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.106 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.106 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.106 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.106 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.106 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.106 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.106 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.106 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.106 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.106 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.106 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.106 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.106 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.106 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.106 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.106 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.106 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.106 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.106 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.106 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.106 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.106 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.106 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.106 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.106 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.106 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.106 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.106 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.106 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.106 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.106 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.106 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_dma_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.106 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.106 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.106 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.106 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.106 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.106 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.106 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.106 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dispatcher/rte_dispatcher.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.106 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.107 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.107 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.107 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.107 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.107 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.107 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.107 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.107 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.107 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.107 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.107 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.107 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.107 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.107 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.107 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.107 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.107 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.107 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.107 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.107 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.107 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.107 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.107 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.107 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.107 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.107 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.107 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.107 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.107 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.107 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.107 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.107 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.107 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.107 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.107 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.107 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.107 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.107 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.107 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.107 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.107 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.107 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.107 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.107 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.107 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.107 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.107 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.107 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.107 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.107 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.107 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.107 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.107 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.107 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.107 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.107 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.107 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.107 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.107 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.107 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.108 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.108 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.108 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.108 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.108 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.108 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.108 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.108 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.108 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.108 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.108 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.108 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.108 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.108 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.108 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.108 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.108 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.108 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.108 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.108 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.108 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.108 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.108 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.108 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.108 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.108 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.108 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.108 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.108 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.108 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.108 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.108 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.108 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.108 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.108 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.108 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.108 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.108 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.108 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_rtc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.108 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.108 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.108 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.108 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip6_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.108 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_udp4_input_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.108 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.108 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.108 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.108 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/dpdk-cmdline-gen.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:01:13.108 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:01:13.108 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:01:13.108 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:01:13.108 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:01:13.108 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-rss-flows.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:01:13.108 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.108 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig
00:01:13.108 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig
00:01:13.108 Installing symlink pointing to librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so.24
00:01:13.108 Installing symlink pointing to librte_log.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so
00:01:13.108 Installing symlink pointing to librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.24
00:01:13.108 Installing symlink pointing to librte_kvargs.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so
00:01:13.109 Installing symlink pointing to librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.24
00:01:13.109 Installing symlink pointing to librte_telemetry.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so
00:01:13.109 Installing symlink pointing to librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.24
00:01:13.109 Installing symlink pointing to librte_eal.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so
00:01:13.109 Installing symlink pointing to librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.24
00:01:13.109 Installing symlink pointing to librte_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so
00:01:13.109 Installing symlink pointing to librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.24
00:01:13.109 Installing symlink pointing to librte_rcu.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so
00:01:13.109 Installing symlink pointing to librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.24
00:01:13.109 Installing symlink pointing to librte_mempool.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so
00:01:13.109 Installing symlink pointing to librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.24
00:01:13.109 Installing symlink pointing to librte_mbuf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so
00:01:13.109 Installing symlink pointing to librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.24
00:01:13.109 Installing symlink pointing to librte_net.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so
00:01:13.109 Installing symlink pointing to librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.24
00:01:13.109 Installing symlink pointing to librte_meter.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so
00:01:13.109 Installing symlink pointing to librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.24
00:01:13.109 Installing symlink pointing to librte_ethdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so
00:01:13.109 Installing symlink pointing to librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.24
00:01:13.109 Installing symlink pointing to librte_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so
00:01:13.109 Installing symlink pointing to librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.24
00:01:13.109 Installing symlink pointing to librte_cmdline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so
00:01:13.109 Installing symlink pointing to librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.24
00:01:13.109 Installing symlink pointing to librte_metrics.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so
00:01:13.109 Installing symlink pointing to librte_hash.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.24
00:01:13.109 Installing symlink pointing to librte_hash.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so
00:01:13.109 Installing symlink pointing to librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.24
00:01:13.109 Installing symlink pointing to librte_timer.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so
00:01:13.109 Installing symlink pointing to librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.24
00:01:13.109 Installing symlink pointing to librte_acl.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so
00:01:13.109 Installing symlink pointing to librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.24
00:01:13.109 Installing symlink pointing to librte_bbdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so
00:01:13.109 Installing symlink pointing to librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.24
00:01:13.109 Installing symlink pointing to librte_bitratestats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so
00:01:13.109 Installing symlink pointing to librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.24
00:01:13.109 Installing symlink pointing to librte_bpf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so
00:01:13.109 Installing symlink pointing to librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.24
00:01:13.109 Installing symlink pointing to librte_cfgfile.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so
00:01:13.109 Installing symlink pointing to librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.24
00:01:13.109 Installing symlink pointing to librte_compressdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so
00:01:13.109 Installing symlink pointing to librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.24
00:01:13.109 Installing symlink pointing to librte_cryptodev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so
00:01:13.109 Installing symlink pointing to librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.24
00:01:13.109 Installing symlink pointing to librte_distributor.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so
00:01:13.109 Installing symlink pointing to librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.24
00:01:13.109 Installing symlink pointing to librte_dmadev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so
00:01:13.109 Installing symlink pointing to librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.24
00:01:13.109 Installing symlink pointing to librte_efd.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so
00:01:13.109 Installing symlink pointing to librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.24
00:01:13.109 Installing symlink pointing to librte_eventdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so
00:01:13.109 Installing symlink pointing to librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so.24
00:01:13.109 Installing symlink pointing to librte_dispatcher.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so
00:01:13.110 Installing symlink pointing to
librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.24 00:01:13.110 Installing symlink pointing to librte_gpudev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:01:13.110 Installing symlink pointing to librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.24 00:01:13.110 Installing symlink pointing to librte_gro.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:01:13.110 Installing symlink pointing to librte_gso.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.24 00:01:13.110 Installing symlink pointing to librte_gso.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:01:13.110 Installing symlink pointing to librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.24 00:01:13.110 Installing symlink pointing to librte_ip_frag.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:01:13.110 Installing symlink pointing to librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.24 00:01:13.110 Installing symlink pointing to librte_jobstats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:01:13.110 Installing symlink pointing to librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.24 00:01:13.110 Installing symlink pointing to librte_latencystats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:01:13.110 Installing symlink pointing to librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.24 00:01:13.110 Installing symlink pointing to librte_lpm.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:01:13.110 Installing symlink pointing to librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.24 00:01:13.110 Installing symlink pointing to librte_member.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:01:13.110 Installing symlink pointing to librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.24 00:01:13.110 Installing symlink pointing to librte_pcapng.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:01:13.110 Installing symlink pointing to librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.24 00:01:13.110 Installing symlink pointing to librte_power.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:01:13.110 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:01:13.110 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:01:13.110 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:01:13.110 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:01:13.110 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:01:13.110 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:01:13.110 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:01:13.110 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:01:13.110 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:01:13.110 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:01:13.110 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:01:13.110 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:01:13.110 Installing symlink pointing to librte_rawdev.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.24 00:01:13.110 Installing symlink pointing to librte_rawdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:01:13.110 Installing symlink pointing to librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.24 00:01:13.110 Installing symlink pointing to librte_regexdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:01:13.110 Installing symlink pointing to librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so.24 00:01:13.110 Installing symlink pointing to librte_mldev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so 00:01:13.110 Installing symlink pointing to librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.24 00:01:13.110 Installing symlink pointing to librte_rib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:01:13.110 Installing symlink pointing to librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.24 00:01:13.110 Installing symlink pointing to librte_reorder.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:01:13.110 Installing symlink pointing to librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.24 00:01:13.110 Installing symlink pointing to librte_sched.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:01:13.110 Installing symlink pointing to librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.24 00:01:13.110 Installing symlink pointing to librte_security.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:01:13.110 
Installing symlink pointing to librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.24 00:01:13.110 Installing symlink pointing to librte_stack.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:01:13.110 Installing symlink pointing to librte_vhost.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.24 00:01:13.110 Installing symlink pointing to librte_vhost.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:01:13.110 Installing symlink pointing to librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.24 00:01:13.110 Installing symlink pointing to librte_ipsec.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:01:13.110 Installing symlink pointing to librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so.24 00:01:13.110 Installing symlink pointing to librte_pdcp.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so 00:01:13.110 Installing symlink pointing to librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.24 00:01:13.110 Installing symlink pointing to librte_fib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:01:13.110 Installing symlink pointing to librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.24 00:01:13.110 Installing symlink pointing to librte_port.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:01:13.110 Installing symlink pointing to librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.24 00:01:13.110 Installing symlink pointing to librte_pdump.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 
00:01:13.110 Installing symlink pointing to librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.24 00:01:13.110 Installing symlink pointing to librte_table.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:01:13.110 Installing symlink pointing to librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.24 00:01:13.110 Installing symlink pointing to librte_pipeline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:01:13.110 Installing symlink pointing to librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.24 00:01:13.110 Installing symlink pointing to librte_graph.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:01:13.110 Installing symlink pointing to librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.24 00:01:13.110 Installing symlink pointing to librte_node.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:01:13.110 Installing symlink pointing to librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:01:13.110 Installing symlink pointing to librte_bus_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:01:13.110 Installing symlink pointing to librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:01:13.110 Installing symlink pointing to librte_bus_vdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:01:13.110 Installing symlink pointing to librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 
00:01:13.110 Installing symlink pointing to librte_mempool_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:01:13.110 Installing symlink pointing to librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 00:01:13.110 Installing symlink pointing to librte_net_i40e.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:01:13.110 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:01:13.110 22:46:45 -- common/autobuild_common.sh@192 -- $ uname -s 00:01:13.110 22:46:45 -- common/autobuild_common.sh@192 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:01:13.110 22:46:45 -- common/autobuild_common.sh@203 -- $ cat 00:01:13.111 22:46:45 -- common/autobuild_common.sh@208 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:13.111 00:01:13.111 real 0m26.904s 00:01:13.111 user 8m0.045s 00:01:13.111 sys 2m36.421s 00:01:13.111 22:46:45 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:13.111 22:46:45 -- common/autotest_common.sh@10 -- $ set +x 00:01:13.111 ************************************ 00:01:13.111 END TEST build_native_dpdk 00:01:13.111 ************************************ 00:01:13.369 22:46:45 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:13.369 22:46:45 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:13.369 22:46:45 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:13.370 22:46:45 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:13.370 22:46:45 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:13.370 22:46:45 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:13.370 22:46:45 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:13.370 22:46:45 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror 
--with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:01:13.370 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 00:01:13.628 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:13.628 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:13.628 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:13.886 Using 'verbs' RDMA provider 00:01:29.341 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/isa-l/spdk-isal.log)...done. 00:01:41.555 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:01:41.555 Creating mk/config.mk...done. 00:01:41.555 Creating mk/cc.flags.mk...done. 00:01:41.555 Type 'make' to build. 00:01:41.555 22:47:13 -- spdk/autobuild.sh@69 -- $ run_test make make -j112 00:01:41.555 22:47:13 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:01:41.555 22:47:13 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:01:41.555 22:47:13 -- common/autotest_common.sh@10 -- $ set +x 00:01:41.555 ************************************ 00:01:41.555 START TEST make 00:01:41.555 ************************************ 00:01:41.555 22:47:13 -- common/autotest_common.sh@1104 -- $ make -j112 00:01:41.555 make[1]: Nothing to be done for 'all'. 
00:01:42.546 The Meson build system 00:01:42.546 Version: 1.3.1 00:01:42.546 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:42.546 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:42.546 Build type: native build 00:01:42.546 Project name: libvfio-user 00:01:42.546 Project version: 0.0.1 00:01:42.546 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:42.546 C linker for the host machine: gcc ld.bfd 2.39-16 00:01:42.546 Host machine cpu family: x86_64 00:01:42.546 Host machine cpu: x86_64 00:01:42.546 Run-time dependency threads found: YES 00:01:42.546 Library dl found: YES 00:01:42.546 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:42.546 Run-time dependency json-c found: YES 0.17 00:01:42.546 Run-time dependency cmocka found: YES 1.1.7 00:01:42.546 Program pytest-3 found: NO 00:01:42.546 Program flake8 found: NO 00:01:42.546 Program misspell-fixer found: NO 00:01:42.546 Program restructuredtext-lint found: NO 00:01:42.546 Program valgrind found: YES (/usr/bin/valgrind) 00:01:42.546 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:42.546 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:42.546 Compiler for C supports arguments -Wwrite-strings: YES 00:01:42.546 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:42.546 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:42.546 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:42.546 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:42.546 Build targets in project: 8 00:01:42.546 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:42.546 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:42.546 00:01:42.546 libvfio-user 0.0.1 00:01:42.546 00:01:42.546 User defined options 00:01:42.546 buildtype : debug 00:01:42.546 default_library: shared 00:01:42.546 libdir : /usr/local/lib 00:01:42.546 00:01:42.546 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:43.114 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:43.114 [1/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:43.114 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:43.114 [3/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:43.114 [4/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:43.114 [5/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:43.114 [6/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:43.114 [7/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:43.114 [8/37] Compiling C object samples/null.p/null.c.o 00:01:43.114 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:43.114 [10/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:43.114 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:43.114 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:43.114 [13/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:43.114 [14/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:43.114 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:43.114 [16/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:43.114 [17/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:43.114 [18/37] Compiling C object 
test/unit_tests.p/.._lib_pci.c.o 00:01:43.114 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:43.114 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:43.114 [21/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:43.114 [22/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:43.114 [23/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:43.114 [24/37] Compiling C object samples/server.p/server.c.o 00:01:43.114 [25/37] Compiling C object samples/client.p/client.c.o 00:01:43.114 [26/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:43.114 [27/37] Linking target samples/client 00:01:43.372 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:43.372 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:43.372 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:01:43.372 [31/37] Linking target test/unit_tests 00:01:43.372 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:43.372 [33/37] Linking target samples/shadow_ioeventfd_server 00:01:43.372 [34/37] Linking target samples/server 00:01:43.372 [35/37] Linking target samples/gpio-pci-idio-16 00:01:43.372 [36/37] Linking target samples/null 00:01:43.372 [37/37] Linking target samples/lspci 00:01:43.372 INFO: autodetecting backend as ninja 00:01:43.372 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:43.632 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:43.892 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:43.892 ninja: no work to do. 
00:01:52.012 CC lib/log/log.o 00:01:52.012 CC lib/log/log_deprecated.o 00:01:52.012 CC lib/log/log_flags.o 00:01:52.012 CC lib/ut_mock/mock.o 00:01:52.012 CC lib/ut/ut.o 00:01:52.012 LIB libspdk_log.a 00:01:52.012 LIB libspdk_ut_mock.a 00:01:52.012 LIB libspdk_ut.a 00:01:52.012 SO libspdk_log.so.6.1 00:01:52.012 SO libspdk_ut_mock.so.5.0 00:01:52.012 SO libspdk_ut.so.1.0 00:01:52.012 SYMLINK libspdk_ut_mock.so 00:01:52.012 SYMLINK libspdk_log.so 00:01:52.012 SYMLINK libspdk_ut.so 00:01:52.012 CC lib/util/bit_array.o 00:01:52.012 CC lib/util/base64.o 00:01:52.012 CC lib/util/cpuset.o 00:01:52.012 CC lib/util/crc16.o 00:01:52.012 CXX lib/trace_parser/trace.o 00:01:52.012 CC lib/util/crc32.o 00:01:52.012 CC lib/util/crc32c.o 00:01:52.012 CC lib/util/crc64.o 00:01:52.012 CC lib/util/crc32_ieee.o 00:01:52.012 CC lib/util/dif.o 00:01:52.012 CC lib/util/fd.o 00:01:52.012 CC lib/util/file.o 00:01:52.012 CC lib/util/hexlify.o 00:01:52.012 CC lib/util/iov.o 00:01:52.012 CC lib/util/math.o 00:01:52.012 CC lib/util/pipe.o 00:01:52.012 CC lib/util/strerror_tls.o 00:01:52.012 CC lib/util/string.o 00:01:52.012 CC lib/dma/dma.o 00:01:52.012 CC lib/util/uuid.o 00:01:52.012 CC lib/util/fd_group.o 00:01:52.012 CC lib/util/xor.o 00:01:52.012 CC lib/ioat/ioat.o 00:01:52.012 CC lib/util/zipf.o 00:01:52.012 CC lib/vfio_user/host/vfio_user_pci.o 00:01:52.012 CC lib/vfio_user/host/vfio_user.o 00:01:52.012 LIB libspdk_dma.a 00:01:52.012 SO libspdk_dma.so.3.0 00:01:52.012 LIB libspdk_ioat.a 00:01:52.012 SYMLINK libspdk_dma.so 00:01:52.012 SO libspdk_ioat.so.6.0 00:01:52.012 LIB libspdk_vfio_user.a 00:01:52.012 SYMLINK libspdk_ioat.so 00:01:52.012 SO libspdk_vfio_user.so.4.0 00:01:52.012 LIB libspdk_util.a 00:01:52.012 SYMLINK libspdk_vfio_user.so 00:01:52.012 SO libspdk_util.so.8.0 00:01:52.012 SYMLINK libspdk_util.so 00:01:52.012 LIB libspdk_trace_parser.a 00:01:52.269 SO libspdk_trace_parser.so.4.0 00:01:52.269 SYMLINK libspdk_trace_parser.so 00:01:52.269 CC lib/json/json_util.o 
00:01:52.269 CC lib/json/json_parse.o 00:01:52.269 CC lib/json/json_write.o 00:01:52.269 CC lib/conf/conf.o 00:01:52.269 CC lib/rdma/rdma_verbs.o 00:01:52.269 CC lib/rdma/common.o 00:01:52.269 CC lib/idxd/idxd.o 00:01:52.269 CC lib/idxd/idxd_user.o 00:01:52.269 CC lib/idxd/idxd_kernel.o 00:01:52.269 CC lib/vmd/vmd.o 00:01:52.269 CC lib/env_dpdk/env.o 00:01:52.269 CC lib/vmd/led.o 00:01:52.269 CC lib/env_dpdk/init.o 00:01:52.269 CC lib/env_dpdk/memory.o 00:01:52.269 CC lib/env_dpdk/pci.o 00:01:52.269 CC lib/env_dpdk/threads.o 00:01:52.269 CC lib/env_dpdk/pci_ioat.o 00:01:52.269 CC lib/env_dpdk/pci_virtio.o 00:01:52.270 CC lib/env_dpdk/pci_vmd.o 00:01:52.270 CC lib/env_dpdk/pci_idxd.o 00:01:52.270 CC lib/env_dpdk/pci_event.o 00:01:52.270 CC lib/env_dpdk/sigbus_handler.o 00:01:52.270 CC lib/env_dpdk/pci_dpdk.o 00:01:52.270 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:52.270 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:52.527 LIB libspdk_conf.a 00:01:52.527 LIB libspdk_json.a 00:01:52.527 SO libspdk_conf.so.5.0 00:01:52.527 LIB libspdk_rdma.a 00:01:52.527 SO libspdk_json.so.5.1 00:01:52.785 SO libspdk_rdma.so.5.0 00:01:52.785 SYMLINK libspdk_conf.so 00:01:52.785 SYMLINK libspdk_json.so 00:01:52.785 SYMLINK libspdk_rdma.so 00:01:52.785 LIB libspdk_idxd.a 00:01:52.785 SO libspdk_idxd.so.11.0 00:01:52.785 LIB libspdk_vmd.a 00:01:52.785 SYMLINK libspdk_idxd.so 00:01:52.785 SO libspdk_vmd.so.5.0 00:01:53.042 CC lib/jsonrpc/jsonrpc_server.o 00:01:53.042 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:53.042 CC lib/jsonrpc/jsonrpc_client.o 00:01:53.043 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:53.043 SYMLINK libspdk_vmd.so 00:01:53.043 LIB libspdk_jsonrpc.a 00:01:53.300 SO libspdk_jsonrpc.so.5.1 00:01:53.300 SYMLINK libspdk_jsonrpc.so 00:01:53.300 LIB libspdk_env_dpdk.a 00:01:53.558 SO libspdk_env_dpdk.so.13.0 00:01:53.558 CC lib/rpc/rpc.o 00:01:53.558 SYMLINK libspdk_env_dpdk.so 00:01:53.558 LIB libspdk_rpc.a 00:01:53.816 SO libspdk_rpc.so.5.0 00:01:53.816 SYMLINK libspdk_rpc.so 00:01:54.074 CC 
lib/trace/trace.o 00:01:54.074 CC lib/trace/trace_flags.o 00:01:54.074 CC lib/trace/trace_rpc.o 00:01:54.074 CC lib/notify/notify.o 00:01:54.074 CC lib/notify/notify_rpc.o 00:01:54.074 CC lib/sock/sock.o 00:01:54.074 CC lib/sock/sock_rpc.o 00:01:54.074 LIB libspdk_notify.a 00:01:54.074 LIB libspdk_trace.a 00:01:54.074 SO libspdk_notify.so.5.0 00:01:54.332 SO libspdk_trace.so.9.0 00:01:54.332 SYMLINK libspdk_notify.so 00:01:54.332 SYMLINK libspdk_trace.so 00:01:54.332 LIB libspdk_sock.a 00:01:54.332 SO libspdk_sock.so.8.0 00:01:54.332 SYMLINK libspdk_sock.so 00:01:54.590 CC lib/thread/thread.o 00:01:54.590 CC lib/thread/iobuf.o 00:01:54.590 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:54.590 CC lib/nvme/nvme_ctrlr.o 00:01:54.590 CC lib/nvme/nvme_fabric.o 00:01:54.590 CC lib/nvme/nvme_ns_cmd.o 00:01:54.590 CC lib/nvme/nvme_pcie.o 00:01:54.590 CC lib/nvme/nvme_ns.o 00:01:54.590 CC lib/nvme/nvme_pcie_common.o 00:01:54.590 CC lib/nvme/nvme.o 00:01:54.590 CC lib/nvme/nvme_qpair.o 00:01:54.590 CC lib/nvme/nvme_discovery.o 00:01:54.590 CC lib/nvme/nvme_quirks.o 00:01:54.590 CC lib/nvme/nvme_transport.o 00:01:54.590 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:54.590 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:54.590 CC lib/nvme/nvme_tcp.o 00:01:54.590 CC lib/nvme/nvme_opal.o 00:01:54.590 CC lib/nvme/nvme_io_msg.o 00:01:54.590 CC lib/nvme/nvme_poll_group.o 00:01:54.590 CC lib/nvme/nvme_zns.o 00:01:54.590 CC lib/nvme/nvme_cuse.o 00:01:54.590 CC lib/nvme/nvme_vfio_user.o 00:01:54.590 CC lib/nvme/nvme_rdma.o 00:01:55.525 LIB libspdk_thread.a 00:01:55.525 SO libspdk_thread.so.9.0 00:01:55.783 SYMLINK libspdk_thread.so 00:01:55.783 CC lib/accel/accel.o 00:01:55.783 CC lib/accel/accel_rpc.o 00:01:55.783 CC lib/accel/accel_sw.o 00:01:56.041 CC lib/virtio/virtio.o 00:01:56.042 CC lib/virtio/virtio_vhost_user.o 00:01:56.042 CC lib/virtio/virtio_pci.o 00:01:56.042 CC lib/virtio/virtio_vfio_user.o 00:01:56.042 CC lib/blob/zeroes.o 00:01:56.042 CC lib/blob/blobstore.o 00:01:56.042 CC lib/blob/request.o 
00:01:56.042 CC lib/blob/blob_bs_dev.o 00:01:56.042 CC lib/vfu_tgt/tgt_rpc.o 00:01:56.042 CC lib/vfu_tgt/tgt_endpoint.o 00:01:56.042 CC lib/init/subsystem_rpc.o 00:01:56.042 CC lib/init/json_config.o 00:01:56.042 CC lib/init/subsystem.o 00:01:56.042 CC lib/init/rpc.o 00:01:56.042 LIB libspdk_nvme.a 00:01:56.042 LIB libspdk_init.a 00:01:56.301 SO libspdk_init.so.4.0 00:01:56.301 LIB libspdk_virtio.a 00:01:56.301 LIB libspdk_vfu_tgt.a 00:01:56.301 SO libspdk_virtio.so.6.0 00:01:56.301 SYMLINK libspdk_init.so 00:01:56.301 SO libspdk_vfu_tgt.so.2.0 00:01:56.301 SO libspdk_nvme.so.12.0 00:01:56.301 SYMLINK libspdk_virtio.so 00:01:56.301 SYMLINK libspdk_vfu_tgt.so 00:01:56.560 CC lib/event/app.o 00:01:56.560 CC lib/event/reactor.o 00:01:56.560 CC lib/event/app_rpc.o 00:01:56.560 CC lib/event/log_rpc.o 00:01:56.560 CC lib/event/scheduler_static.o 00:01:56.560 SYMLINK libspdk_nvme.so 00:01:56.560 LIB libspdk_accel.a 00:01:56.560 SO libspdk_accel.so.14.0 00:01:56.818 SYMLINK libspdk_accel.so 00:01:56.818 LIB libspdk_event.a 00:01:56.818 SO libspdk_event.so.12.0 00:01:56.818 SYMLINK libspdk_event.so 00:01:57.077 CC lib/bdev/bdev.o 00:01:57.077 CC lib/bdev/part.o 00:01:57.077 CC lib/bdev/bdev_rpc.o 00:01:57.077 CC lib/bdev/bdev_zone.o 00:01:57.077 CC lib/bdev/scsi_nvme.o 00:01:58.014 LIB libspdk_blob.a 00:01:58.014 SO libspdk_blob.so.10.1 00:01:58.014 SYMLINK libspdk_blob.so 00:01:58.273 CC lib/blobfs/blobfs.o 00:01:58.273 CC lib/blobfs/tree.o 00:01:58.273 CC lib/lvol/lvol.o 00:01:58.533 LIB libspdk_bdev.a 00:01:58.792 SO libspdk_bdev.so.14.0 00:01:58.792 LIB libspdk_blobfs.a 00:01:58.792 SO libspdk_blobfs.so.9.0 00:01:58.792 SYMLINK libspdk_bdev.so 00:01:58.792 LIB libspdk_lvol.a 00:01:58.792 SO libspdk_lvol.so.9.1 00:01:58.792 SYMLINK libspdk_blobfs.so 00:01:58.792 SYMLINK libspdk_lvol.so 00:01:59.052 CC lib/scsi/dev.o 00:01:59.052 CC lib/scsi/lun.o 00:01:59.052 CC lib/scsi/port.o 00:01:59.052 CC lib/scsi/scsi_bdev.o 00:01:59.052 CC lib/scsi/scsi.o 00:01:59.052 CC 
lib/scsi/scsi_rpc.o 00:01:59.052 CC lib/scsi/scsi_pr.o 00:01:59.052 CC lib/scsi/task.o 00:01:59.052 CC lib/ftl/ftl_init.o 00:01:59.052 CC lib/ftl/ftl_layout.o 00:01:59.052 CC lib/ftl/ftl_core.o 00:01:59.052 CC lib/ftl/ftl_sb.o 00:01:59.052 CC lib/ftl/ftl_debug.o 00:01:59.052 CC lib/ftl/ftl_io.o 00:01:59.052 CC lib/ftl/ftl_l2p.o 00:01:59.052 CC lib/ftl/ftl_l2p_flat.o 00:01:59.052 CC lib/ftl/ftl_nv_cache.o 00:01:59.052 CC lib/ftl/ftl_writer.o 00:01:59.052 CC lib/ftl/ftl_band.o 00:01:59.052 CC lib/ftl/ftl_band_ops.o 00:01:59.052 CC lib/ftl/ftl_rq.o 00:01:59.052 CC lib/ftl/ftl_reloc.o 00:01:59.052 CC lib/ftl/ftl_l2p_cache.o 00:01:59.052 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:01:59.052 CC lib/ftl/ftl_p2l.o 00:01:59.052 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:01:59.052 CC lib/ftl/mngt/ftl_mngt.o 00:01:59.052 CC lib/nvmf/ctrlr.o 00:01:59.052 CC lib/nvmf/ctrlr_discovery.o 00:01:59.052 CC lib/nvmf/ctrlr_bdev.o 00:01:59.052 CC lib/ublk/ublk_rpc.o 00:01:59.052 CC lib/ftl/mngt/ftl_mngt_startup.o 00:01:59.052 CC lib/ublk/ublk.o 00:01:59.052 CC lib/nbd/nbd.o 00:01:59.052 CC lib/nvmf/subsystem.o 00:01:59.052 CC lib/ftl/mngt/ftl_mngt_md.o 00:01:59.052 CC lib/nvmf/nvmf.o 00:01:59.052 CC lib/nbd/nbd_rpc.o 00:01:59.052 CC lib/ftl/mngt/ftl_mngt_misc.o 00:01:59.052 CC lib/nvmf/nvmf_rpc.o 00:01:59.052 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:01:59.052 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:01:59.052 CC lib/nvmf/transport.o 00:01:59.052 CC lib/nvmf/tcp.o 00:01:59.052 CC lib/ftl/mngt/ftl_mngt_band.o 00:01:59.052 CC lib/nvmf/vfio_user.o 00:01:59.052 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:01:59.052 CC lib/nvmf/rdma.o 00:01:59.052 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:01:59.052 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:01:59.052 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:01:59.052 CC lib/ftl/utils/ftl_conf.o 00:01:59.052 CC lib/ftl/utils/ftl_md.o 00:01:59.052 CC lib/ftl/utils/ftl_mempool.o 00:01:59.052 CC lib/ftl/utils/ftl_bitmap.o 00:01:59.052 CC lib/ftl/utils/ftl_property.o 00:01:59.052 CC 
lib/ftl/upgrade/ftl_sb_upgrade.o 00:01:59.052 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:01:59.052 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:01:59.052 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:01:59.052 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:01:59.052 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:01:59.052 CC lib/ftl/upgrade/ftl_sb_v3.o 00:01:59.052 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:01:59.052 CC lib/ftl/upgrade/ftl_sb_v5.o 00:01:59.052 CC lib/ftl/nvc/ftl_nvc_dev.o 00:01:59.052 CC lib/ftl/base/ftl_base_dev.o 00:01:59.052 CC lib/ftl/base/ftl_base_bdev.o 00:01:59.052 CC lib/ftl/ftl_trace.o 00:01:59.619 LIB libspdk_nbd.a 00:01:59.619 SO libspdk_nbd.so.6.0 00:01:59.619 LIB libspdk_scsi.a 00:01:59.619 SYMLINK libspdk_nbd.so 00:01:59.619 SO libspdk_scsi.so.8.0 00:01:59.619 LIB libspdk_ublk.a 00:01:59.619 SYMLINK libspdk_scsi.so 00:01:59.619 SO libspdk_ublk.so.2.0 00:01:59.876 SYMLINK libspdk_ublk.so 00:01:59.876 LIB libspdk_ftl.a 00:01:59.876 CC lib/iscsi/init_grp.o 00:01:59.876 CC lib/iscsi/conn.o 00:01:59.876 CC lib/iscsi/md5.o 00:01:59.876 CC lib/iscsi/param.o 00:01:59.876 CC lib/iscsi/iscsi.o 00:01:59.876 CC lib/iscsi/iscsi_subsystem.o 00:01:59.876 CC lib/iscsi/tgt_node.o 00:01:59.876 CC lib/iscsi/portal_grp.o 00:01:59.876 CC lib/iscsi/iscsi_rpc.o 00:01:59.876 CC lib/iscsi/task.o 00:01:59.876 CC lib/vhost/vhost.o 00:01:59.876 CC lib/vhost/vhost_rpc.o 00:01:59.876 CC lib/vhost/vhost_scsi.o 00:01:59.876 CC lib/vhost/vhost_blk.o 00:01:59.876 CC lib/vhost/rte_vhost_user.o 00:02:00.134 SO libspdk_ftl.so.8.0 00:02:00.392 SYMLINK libspdk_ftl.so 00:02:00.650 LIB libspdk_nvmf.a 00:02:00.650 SO libspdk_nvmf.so.17.0 00:02:00.650 LIB libspdk_vhost.a 00:02:00.650 SO libspdk_vhost.so.7.1 00:02:00.930 SYMLINK libspdk_nvmf.so 00:02:00.930 SYMLINK libspdk_vhost.so 00:02:00.930 LIB libspdk_iscsi.a 00:02:00.930 SO libspdk_iscsi.so.7.0 00:02:01.200 SYMLINK libspdk_iscsi.so 00:02:01.459 CC module/env_dpdk/env_dpdk_rpc.o 00:02:01.459 CC module/vfu_device/vfu_virtio.o 00:02:01.459 CC 
module/vfu_device/vfu_virtio_blk.o 00:02:01.459 CC module/vfu_device/vfu_virtio_scsi.o 00:02:01.459 CC module/vfu_device/vfu_virtio_rpc.o 00:02:01.459 CC module/sock/posix/posix.o 00:02:01.459 LIB libspdk_env_dpdk_rpc.a 00:02:01.459 CC module/blob/bdev/blob_bdev.o 00:02:01.719 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:01.719 CC module/accel/iaa/accel_iaa.o 00:02:01.719 CC module/accel/iaa/accel_iaa_rpc.o 00:02:01.719 CC module/scheduler/gscheduler/gscheduler.o 00:02:01.719 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:01.719 CC module/accel/error/accel_error.o 00:02:01.719 CC module/accel/ioat/accel_ioat.o 00:02:01.719 CC module/accel/error/accel_error_rpc.o 00:02:01.719 CC module/accel/ioat/accel_ioat_rpc.o 00:02:01.719 CC module/accel/dsa/accel_dsa.o 00:02:01.719 CC module/accel/dsa/accel_dsa_rpc.o 00:02:01.719 SO libspdk_env_dpdk_rpc.so.5.0 00:02:01.719 SYMLINK libspdk_env_dpdk_rpc.so 00:02:01.719 LIB libspdk_scheduler_dpdk_governor.a 00:02:01.719 LIB libspdk_scheduler_gscheduler.a 00:02:01.719 LIB libspdk_scheduler_dynamic.a 00:02:01.719 SO libspdk_scheduler_gscheduler.so.3.0 00:02:01.719 SO libspdk_scheduler_dpdk_governor.so.3.0 00:02:01.719 LIB libspdk_accel_error.a 00:02:01.719 LIB libspdk_accel_ioat.a 00:02:01.719 LIB libspdk_accel_iaa.a 00:02:01.719 SO libspdk_scheduler_dynamic.so.3.0 00:02:01.719 SO libspdk_accel_ioat.so.5.0 00:02:01.719 SO libspdk_accel_error.so.1.0 00:02:01.719 SO libspdk_accel_iaa.so.2.0 00:02:01.719 LIB libspdk_blob_bdev.a 00:02:01.719 LIB libspdk_accel_dsa.a 00:02:01.719 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:01.719 SYMLINK libspdk_scheduler_gscheduler.so 00:02:01.978 SO libspdk_blob_bdev.so.10.1 00:02:01.978 SO libspdk_accel_dsa.so.4.0 00:02:01.978 SYMLINK libspdk_scheduler_dynamic.so 00:02:01.978 SYMLINK libspdk_accel_ioat.so 00:02:01.978 SYMLINK libspdk_accel_iaa.so 00:02:01.978 SYMLINK libspdk_accel_error.so 00:02:01.978 SYMLINK libspdk_blob_bdev.so 00:02:01.978 SYMLINK libspdk_accel_dsa.so 
00:02:01.978 LIB libspdk_vfu_device.a 00:02:01.978 SO libspdk_vfu_device.so.2.0 00:02:01.978 SYMLINK libspdk_vfu_device.so 00:02:01.978 LIB libspdk_sock_posix.a 00:02:02.237 SO libspdk_sock_posix.so.5.0 00:02:02.237 SYMLINK libspdk_sock_posix.so 00:02:02.237 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:02.237 CC module/bdev/passthru/vbdev_passthru.o 00:02:02.237 CC module/bdev/gpt/gpt.o 00:02:02.237 CC module/bdev/gpt/vbdev_gpt.o 00:02:02.237 CC module/bdev/null/bdev_null.o 00:02:02.237 CC module/bdev/null/bdev_null_rpc.o 00:02:02.237 CC module/bdev/nvme/bdev_nvme.o 00:02:02.237 CC module/bdev/error/vbdev_error.o 00:02:02.237 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:02.237 CC module/bdev/nvme/nvme_rpc.o 00:02:02.237 CC module/bdev/error/vbdev_error_rpc.o 00:02:02.237 CC module/blobfs/bdev/blobfs_bdev.o 00:02:02.237 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:02.237 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:02.237 CC module/bdev/nvme/bdev_mdns_client.o 00:02:02.237 CC module/bdev/nvme/vbdev_opal.o 00:02:02.237 CC module/bdev/lvol/vbdev_lvol.o 00:02:02.237 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:02.237 CC module/bdev/split/vbdev_split.o 00:02:02.237 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:02.237 CC module/bdev/ftl/bdev_ftl.o 00:02:02.237 CC module/bdev/split/vbdev_split_rpc.o 00:02:02.237 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:02.237 CC module/bdev/delay/vbdev_delay.o 00:02:02.237 CC module/bdev/aio/bdev_aio_rpc.o 00:02:02.237 CC module/bdev/aio/bdev_aio.o 00:02:02.237 CC module/bdev/malloc/bdev_malloc.o 00:02:02.237 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:02.237 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:02.237 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:02.237 CC module/bdev/raid/bdev_raid_rpc.o 00:02:02.237 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:02.237 CC module/bdev/raid/bdev_raid.o 00:02:02.237 CC module/bdev/iscsi/bdev_iscsi.o 00:02:02.237 CC module/bdev/raid/raid0.o 00:02:02.237 CC 
module/bdev/raid/bdev_raid_sb.o 00:02:02.237 CC module/bdev/raid/raid1.o 00:02:02.237 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:02.237 CC module/bdev/raid/concat.o 00:02:02.237 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:02.237 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:02.237 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:02.496 LIB libspdk_blobfs_bdev.a 00:02:02.496 LIB libspdk_bdev_null.a 00:02:02.496 SO libspdk_blobfs_bdev.so.5.0 00:02:02.496 LIB libspdk_bdev_gpt.a 00:02:02.496 LIB libspdk_bdev_split.a 00:02:02.496 LIB libspdk_bdev_error.a 00:02:02.496 SO libspdk_bdev_null.so.5.0 00:02:02.496 LIB libspdk_bdev_passthru.a 00:02:02.496 SO libspdk_bdev_error.so.5.0 00:02:02.496 SO libspdk_bdev_gpt.so.5.0 00:02:02.496 LIB libspdk_bdev_aio.a 00:02:02.754 SO libspdk_bdev_split.so.5.0 00:02:02.754 SYMLINK libspdk_blobfs_bdev.so 00:02:02.754 LIB libspdk_bdev_ftl.a 00:02:02.754 SO libspdk_bdev_passthru.so.5.0 00:02:02.754 SO libspdk_bdev_aio.so.5.0 00:02:02.754 SYMLINK libspdk_bdev_null.so 00:02:02.754 LIB libspdk_bdev_zone_block.a 00:02:02.754 SO libspdk_bdev_ftl.so.5.0 00:02:02.754 SYMLINK libspdk_bdev_error.so 00:02:02.754 LIB libspdk_bdev_iscsi.a 00:02:02.754 LIB libspdk_bdev_malloc.a 00:02:02.754 SYMLINK libspdk_bdev_split.so 00:02:02.754 LIB libspdk_bdev_delay.a 00:02:02.754 SYMLINK libspdk_bdev_gpt.so 00:02:02.754 SO libspdk_bdev_zone_block.so.5.0 00:02:02.754 SYMLINK libspdk_bdev_aio.so 00:02:02.754 SO libspdk_bdev_malloc.so.5.0 00:02:02.754 SYMLINK libspdk_bdev_passthru.so 00:02:02.754 SO libspdk_bdev_iscsi.so.5.0 00:02:02.754 SO libspdk_bdev_delay.so.5.0 00:02:02.754 SYMLINK libspdk_bdev_ftl.so 00:02:02.754 SYMLINK libspdk_bdev_malloc.so 00:02:02.754 LIB libspdk_bdev_lvol.a 00:02:02.754 SYMLINK libspdk_bdev_zone_block.so 00:02:02.754 SYMLINK libspdk_bdev_iscsi.so 00:02:02.754 SYMLINK libspdk_bdev_delay.so 00:02:02.754 LIB libspdk_bdev_virtio.a 00:02:02.754 SO libspdk_bdev_lvol.so.5.0 00:02:02.754 SO libspdk_bdev_virtio.so.5.0 00:02:03.013 SYMLINK 
libspdk_bdev_lvol.so 00:02:03.013 SYMLINK libspdk_bdev_virtio.so 00:02:03.013 LIB libspdk_bdev_raid.a 00:02:03.013 SO libspdk_bdev_raid.so.5.0 00:02:03.272 SYMLINK libspdk_bdev_raid.so 00:02:03.840 LIB libspdk_bdev_nvme.a 00:02:03.840 SO libspdk_bdev_nvme.so.6.0 00:02:04.099 SYMLINK libspdk_bdev_nvme.so 00:02:04.357 CC module/event/subsystems/vmd/vmd.o 00:02:04.357 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:04.357 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:04.616 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:04.616 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:04.616 CC module/event/subsystems/iobuf/iobuf.o 00:02:04.616 CC module/event/subsystems/sock/sock.o 00:02:04.616 CC module/event/subsystems/scheduler/scheduler.o 00:02:04.616 LIB libspdk_event_vmd.a 00:02:04.616 LIB libspdk_event_vhost_blk.a 00:02:04.616 SO libspdk_event_vhost_blk.so.2.0 00:02:04.616 LIB libspdk_event_vfu_tgt.a 00:02:04.616 SO libspdk_event_vmd.so.5.0 00:02:04.616 LIB libspdk_event_sock.a 00:02:04.616 LIB libspdk_event_iobuf.a 00:02:04.616 LIB libspdk_event_scheduler.a 00:02:04.616 SO libspdk_event_vfu_tgt.so.2.0 00:02:04.616 SO libspdk_event_sock.so.4.0 00:02:04.616 SYMLINK libspdk_event_vmd.so 00:02:04.616 SO libspdk_event_scheduler.so.3.0 00:02:04.616 SO libspdk_event_iobuf.so.2.0 00:02:04.616 SYMLINK libspdk_event_vhost_blk.so 00:02:04.616 SYMLINK libspdk_event_vfu_tgt.so 00:02:04.874 SYMLINK libspdk_event_sock.so 00:02:04.874 SYMLINK libspdk_event_scheduler.so 00:02:04.874 SYMLINK libspdk_event_iobuf.so 00:02:05.133 CC module/event/subsystems/accel/accel.o 00:02:05.133 LIB libspdk_event_accel.a 00:02:05.133 SO libspdk_event_accel.so.5.0 00:02:05.392 SYMLINK libspdk_event_accel.so 00:02:05.650 CC module/event/subsystems/bdev/bdev.o 00:02:05.650 LIB libspdk_event_bdev.a 00:02:05.650 SO libspdk_event_bdev.so.5.0 00:02:05.909 SYMLINK libspdk_event_bdev.so 00:02:05.909 CC module/event/subsystems/nbd/nbd.o 00:02:05.909 CC module/event/subsystems/ublk/ublk.o 
00:02:05.909 CC module/event/subsystems/scsi/scsi.o 00:02:06.167 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:06.167 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:06.167 LIB libspdk_event_nbd.a 00:02:06.167 LIB libspdk_event_ublk.a 00:02:06.167 SO libspdk_event_nbd.so.5.0 00:02:06.167 LIB libspdk_event_scsi.a 00:02:06.167 SO libspdk_event_ublk.so.2.0 00:02:06.167 SO libspdk_event_scsi.so.5.0 00:02:06.167 SYMLINK libspdk_event_nbd.so 00:02:06.167 LIB libspdk_event_nvmf.a 00:02:06.167 SYMLINK libspdk_event_scsi.so 00:02:06.167 SYMLINK libspdk_event_ublk.so 00:02:06.167 SO libspdk_event_nvmf.so.5.0 00:02:06.425 SYMLINK libspdk_event_nvmf.so 00:02:06.425 CC module/event/subsystems/iscsi/iscsi.o 00:02:06.425 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:06.684 LIB libspdk_event_vhost_scsi.a 00:02:06.684 LIB libspdk_event_iscsi.a 00:02:06.684 SO libspdk_event_vhost_scsi.so.2.0 00:02:06.684 SO libspdk_event_iscsi.so.5.0 00:02:06.684 SYMLINK libspdk_event_vhost_scsi.so 00:02:06.684 SYMLINK libspdk_event_iscsi.so 00:02:06.942 SO libspdk.so.5.0 00:02:06.942 SYMLINK libspdk.so 00:02:07.204 CC app/spdk_lspci/spdk_lspci.o 00:02:07.204 CXX app/trace/trace.o 00:02:07.204 CC app/spdk_top/spdk_top.o 00:02:07.204 CC app/trace_record/trace_record.o 00:02:07.204 CC app/spdk_nvme_perf/perf.o 00:02:07.204 CC app/spdk_nvme_identify/identify.o 00:02:07.204 TEST_HEADER include/spdk/accel.h 00:02:07.204 TEST_HEADER include/spdk/accel_module.h 00:02:07.204 TEST_HEADER include/spdk/assert.h 00:02:07.204 TEST_HEADER include/spdk/barrier.h 00:02:07.204 CC app/spdk_nvme_discover/discovery_aer.o 00:02:07.204 TEST_HEADER include/spdk/base64.h 00:02:07.204 TEST_HEADER include/spdk/bdev.h 00:02:07.204 TEST_HEADER include/spdk/bdev_module.h 00:02:07.204 TEST_HEADER include/spdk/bit_array.h 00:02:07.204 TEST_HEADER include/spdk/bdev_zone.h 00:02:07.204 TEST_HEADER include/spdk/bit_pool.h 00:02:07.204 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:07.204 TEST_HEADER 
include/spdk/blob_bdev.h 00:02:07.204 CC app/spdk_dd/spdk_dd.o 00:02:07.204 TEST_HEADER include/spdk/blobfs.h 00:02:07.204 TEST_HEADER include/spdk/blob.h 00:02:07.204 TEST_HEADER include/spdk/conf.h 00:02:07.204 TEST_HEADER include/spdk/config.h 00:02:07.204 TEST_HEADER include/spdk/cpuset.h 00:02:07.204 TEST_HEADER include/spdk/crc16.h 00:02:07.204 CC test/rpc_client/rpc_client_test.o 00:02:07.204 TEST_HEADER include/spdk/crc32.h 00:02:07.204 TEST_HEADER include/spdk/dif.h 00:02:07.204 TEST_HEADER include/spdk/crc64.h 00:02:07.204 TEST_HEADER include/spdk/dma.h 00:02:07.204 TEST_HEADER include/spdk/endian.h 00:02:07.204 CC app/nvmf_tgt/nvmf_main.o 00:02:07.204 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:07.204 TEST_HEADER include/spdk/event.h 00:02:07.204 TEST_HEADER include/spdk/env_dpdk.h 00:02:07.204 TEST_HEADER include/spdk/env.h 00:02:07.204 TEST_HEADER include/spdk/fd_group.h 00:02:07.204 CC app/vhost/vhost.o 00:02:07.204 TEST_HEADER include/spdk/fd.h 00:02:07.204 TEST_HEADER include/spdk/file.h 00:02:07.204 TEST_HEADER include/spdk/ftl.h 00:02:07.204 TEST_HEADER include/spdk/gpt_spec.h 00:02:07.204 TEST_HEADER include/spdk/hexlify.h 00:02:07.204 TEST_HEADER include/spdk/histogram_data.h 00:02:07.204 TEST_HEADER include/spdk/idxd.h 00:02:07.204 TEST_HEADER include/spdk/idxd_spec.h 00:02:07.204 TEST_HEADER include/spdk/init.h 00:02:07.204 TEST_HEADER include/spdk/ioat.h 00:02:07.204 TEST_HEADER include/spdk/ioat_spec.h 00:02:07.204 TEST_HEADER include/spdk/json.h 00:02:07.204 TEST_HEADER include/spdk/iscsi_spec.h 00:02:07.204 TEST_HEADER include/spdk/likely.h 00:02:07.204 TEST_HEADER include/spdk/jsonrpc.h 00:02:07.204 TEST_HEADER include/spdk/log.h 00:02:07.204 TEST_HEADER include/spdk/lvol.h 00:02:07.204 TEST_HEADER include/spdk/memory.h 00:02:07.204 TEST_HEADER include/spdk/mmio.h 00:02:07.204 TEST_HEADER include/spdk/notify.h 00:02:07.204 TEST_HEADER include/spdk/nbd.h 00:02:07.204 TEST_HEADER include/spdk/nvme.h 00:02:07.204 TEST_HEADER 
include/spdk/nvme_ocssd.h 00:02:07.204 TEST_HEADER include/spdk/nvme_intel.h 00:02:07.204 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:07.204 TEST_HEADER include/spdk/nvme_spec.h 00:02:07.204 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:07.204 TEST_HEADER include/spdk/nvme_zns.h 00:02:07.204 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:07.204 TEST_HEADER include/spdk/nvmf.h 00:02:07.204 CC app/iscsi_tgt/iscsi_tgt.o 00:02:07.204 TEST_HEADER include/spdk/nvmf_transport.h 00:02:07.204 TEST_HEADER include/spdk/nvmf_spec.h 00:02:07.204 TEST_HEADER include/spdk/opal_spec.h 00:02:07.204 TEST_HEADER include/spdk/opal.h 00:02:07.204 TEST_HEADER include/spdk/pci_ids.h 00:02:07.204 CC app/spdk_tgt/spdk_tgt.o 00:02:07.204 TEST_HEADER include/spdk/pipe.h 00:02:07.204 TEST_HEADER include/spdk/queue.h 00:02:07.204 TEST_HEADER include/spdk/reduce.h 00:02:07.204 TEST_HEADER include/spdk/scheduler.h 00:02:07.204 TEST_HEADER include/spdk/rpc.h 00:02:07.204 TEST_HEADER include/spdk/scsi.h 00:02:07.204 TEST_HEADER include/spdk/scsi_spec.h 00:02:07.204 TEST_HEADER include/spdk/sock.h 00:02:07.204 TEST_HEADER include/spdk/stdinc.h 00:02:07.204 TEST_HEADER include/spdk/string.h 00:02:07.204 TEST_HEADER include/spdk/thread.h 00:02:07.204 TEST_HEADER include/spdk/trace_parser.h 00:02:07.204 TEST_HEADER include/spdk/trace.h 00:02:07.204 TEST_HEADER include/spdk/tree.h 00:02:07.204 TEST_HEADER include/spdk/ublk.h 00:02:07.204 TEST_HEADER include/spdk/util.h 00:02:07.205 TEST_HEADER include/spdk/uuid.h 00:02:07.205 TEST_HEADER include/spdk/version.h 00:02:07.205 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:07.205 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:07.205 TEST_HEADER include/spdk/vhost.h 00:02:07.205 TEST_HEADER include/spdk/vmd.h 00:02:07.205 TEST_HEADER include/spdk/xor.h 00:02:07.205 TEST_HEADER include/spdk/zipf.h 00:02:07.205 CXX test/cpp_headers/accel.o 00:02:07.205 CXX test/cpp_headers/accel_module.o 00:02:07.205 CXX test/cpp_headers/assert.o 00:02:07.205 CXX 
test/cpp_headers/barrier.o 00:02:07.205 CXX test/cpp_headers/bdev.o 00:02:07.205 CXX test/cpp_headers/base64.o 00:02:07.205 CXX test/cpp_headers/bdev_module.o 00:02:07.205 CXX test/cpp_headers/bit_array.o 00:02:07.205 CXX test/cpp_headers/bit_pool.o 00:02:07.205 CXX test/cpp_headers/bdev_zone.o 00:02:07.205 CXX test/cpp_headers/blobfs_bdev.o 00:02:07.205 CXX test/cpp_headers/blob_bdev.o 00:02:07.205 CXX test/cpp_headers/blob.o 00:02:07.205 CXX test/cpp_headers/blobfs.o 00:02:07.205 CXX test/cpp_headers/config.o 00:02:07.205 CXX test/cpp_headers/conf.o 00:02:07.205 CXX test/cpp_headers/cpuset.o 00:02:07.205 CXX test/cpp_headers/crc16.o 00:02:07.205 CC examples/idxd/perf/perf.o 00:02:07.205 CXX test/cpp_headers/crc32.o 00:02:07.205 CXX test/cpp_headers/crc64.o 00:02:07.205 CXX test/cpp_headers/dif.o 00:02:07.205 CXX test/cpp_headers/dma.o 00:02:07.205 CXX test/cpp_headers/env_dpdk.o 00:02:07.205 CXX test/cpp_headers/endian.o 00:02:07.205 CXX test/cpp_headers/env.o 00:02:07.205 CXX test/cpp_headers/event.o 00:02:07.205 CXX test/cpp_headers/fd_group.o 00:02:07.205 CXX test/cpp_headers/fd.o 00:02:07.205 CC examples/vmd/lsvmd/lsvmd.o 00:02:07.205 CXX test/cpp_headers/file.o 00:02:07.205 CC examples/util/zipf/zipf.o 00:02:07.205 CXX test/cpp_headers/ftl.o 00:02:07.205 CXX test/cpp_headers/gpt_spec.o 00:02:07.205 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:07.205 CC examples/nvme/hotplug/hotplug.o 00:02:07.205 CXX test/cpp_headers/hexlify.o 00:02:07.205 CXX test/cpp_headers/histogram_data.o 00:02:07.205 CC examples/accel/perf/accel_perf.o 00:02:07.205 CC examples/nvme/arbitration/arbitration.o 00:02:07.205 CXX test/cpp_headers/idxd_spec.o 00:02:07.205 CXX test/cpp_headers/idxd.o 00:02:07.205 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:07.205 CXX test/cpp_headers/ioat.o 00:02:07.205 CC examples/sock/hello_world/hello_sock.o 00:02:07.474 CXX test/cpp_headers/init.o 00:02:07.474 CC examples/nvme/reconnect/reconnect.o 00:02:07.474 CC app/fio/nvme/fio_plugin.o 
00:02:07.474 CC test/env/vtophys/vtophys.o 00:02:07.474 CC examples/nvme/hello_world/hello_world.o 00:02:07.474 CC examples/vmd/led/led.o 00:02:07.474 CC examples/ioat/perf/perf.o 00:02:07.474 CC test/env/memory/memory_ut.o 00:02:07.474 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:07.474 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:07.474 CC examples/nvme/abort/abort.o 00:02:07.474 CC examples/ioat/verify/verify.o 00:02:07.474 CC test/nvme/sgl/sgl.o 00:02:07.474 CC test/thread/poller_perf/poller_perf.o 00:02:07.474 CC test/app/stub/stub.o 00:02:07.474 CC test/app/histogram_perf/histogram_perf.o 00:02:07.474 CC test/env/pci/pci_ut.o 00:02:07.474 CC test/app/jsoncat/jsoncat.o 00:02:07.474 CC test/nvme/reset/reset.o 00:02:07.474 CC test/nvme/err_injection/err_injection.o 00:02:07.474 CC test/nvme/fdp/fdp.o 00:02:07.474 CC test/nvme/e2edp/nvme_dp.o 00:02:07.474 CC test/nvme/fused_ordering/fused_ordering.o 00:02:07.474 CC test/nvme/connect_stress/connect_stress.o 00:02:07.474 CC test/nvme/overhead/overhead.o 00:02:07.474 CC test/nvme/aer/aer.o 00:02:07.474 CC test/nvme/boot_partition/boot_partition.o 00:02:07.474 CC test/nvme/reserve/reserve.o 00:02:07.474 CC test/nvme/compliance/nvme_compliance.o 00:02:07.474 CC test/event/reactor/reactor.o 00:02:07.474 CC examples/thread/thread/thread_ex.o 00:02:07.474 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:07.474 CC test/nvme/cuse/cuse.o 00:02:07.474 CC test/accel/dif/dif.o 00:02:07.474 CC examples/blob/cli/blobcli.o 00:02:07.474 CC test/event/reactor_perf/reactor_perf.o 00:02:07.474 CC test/nvme/startup/startup.o 00:02:07.474 CC examples/blob/hello_world/hello_blob.o 00:02:07.474 CC test/nvme/simple_copy/simple_copy.o 00:02:07.474 CC test/event/event_perf/event_perf.o 00:02:07.474 CC test/event/app_repeat/app_repeat.o 00:02:07.474 CC test/bdev/bdevio/bdevio.o 00:02:07.474 CC examples/nvmf/nvmf/nvmf.o 00:02:07.474 CC examples/bdev/hello_world/hello_bdev.o 00:02:07.474 CC examples/bdev/bdevperf/bdevperf.o 
00:02:07.474 CC app/fio/bdev/fio_plugin.o 00:02:07.474 CC test/blobfs/mkfs/mkfs.o 00:02:07.474 CC test/app/bdev_svc/bdev_svc.o 00:02:07.474 CC test/dma/test_dma/test_dma.o 00:02:07.474 CC test/event/scheduler/scheduler.o 00:02:07.474 LINK spdk_lspci 00:02:07.474 CC test/env/mem_callbacks/mem_callbacks.o 00:02:07.743 CC test/lvol/esnap/esnap.o 00:02:07.743 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:07.743 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:07.743 LINK vhost 00:02:07.743 LINK nvmf_tgt 00:02:07.743 LINK interrupt_tgt 00:02:07.743 LINK rpc_client_test 00:02:07.743 LINK spdk_nvme_discover 00:02:08.005 LINK lsvmd 00:02:08.005 LINK spdk_tgt 00:02:08.005 LINK poller_perf 00:02:08.005 LINK jsoncat 00:02:08.005 LINK vtophys 00:02:08.005 LINK zipf 00:02:08.005 LINK spdk_trace_record 00:02:08.005 LINK histogram_perf 00:02:08.005 LINK led 00:02:08.005 LINK pmr_persistence 00:02:08.005 LINK reactor_perf 00:02:08.005 LINK env_dpdk_post_init 00:02:08.005 LINK stub 00:02:08.005 LINK reactor 00:02:08.005 LINK iscsi_tgt 00:02:08.005 LINK event_perf 00:02:08.005 LINK boot_partition 00:02:08.005 LINK app_repeat 00:02:08.005 LINK startup 00:02:08.005 LINK cmb_copy 00:02:08.005 CXX test/cpp_headers/ioat_spec.o 00:02:08.005 CXX test/cpp_headers/iscsi_spec.o 00:02:08.005 LINK connect_stress 00:02:08.005 CXX test/cpp_headers/json.o 00:02:08.005 LINK err_injection 00:02:08.005 CXX test/cpp_headers/jsonrpc.o 00:02:08.005 LINK ioat_perf 00:02:08.005 CXX test/cpp_headers/likely.o 00:02:08.005 LINK doorbell_aers 00:02:08.005 CXX test/cpp_headers/log.o 00:02:08.005 CXX test/cpp_headers/lvol.o 00:02:08.005 CXX test/cpp_headers/memory.o 00:02:08.005 CXX test/cpp_headers/mmio.o 00:02:08.005 LINK hotplug 00:02:08.005 CXX test/cpp_headers/nbd.o 00:02:08.005 LINK mkfs 00:02:08.005 LINK reserve 00:02:08.005 CXX test/cpp_headers/notify.o 00:02:08.005 CXX test/cpp_headers/nvme.o 00:02:08.005 LINK hello_sock 00:02:08.005 CXX test/cpp_headers/nvme_intel.o 00:02:08.005 CXX 
test/cpp_headers/nvme_ocssd.o 00:02:08.005 LINK hello_world 00:02:08.005 LINK fused_ordering 00:02:08.005 LINK bdev_svc 00:02:08.005 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:08.005 CXX test/cpp_headers/nvme_spec.o 00:02:08.005 CXX test/cpp_headers/nvme_zns.o 00:02:08.005 CXX test/cpp_headers/nvmf_cmd.o 00:02:08.005 LINK verify 00:02:08.005 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:08.005 CXX test/cpp_headers/nvmf.o 00:02:08.005 CXX test/cpp_headers/nvmf_spec.o 00:02:08.005 CXX test/cpp_headers/nvmf_transport.o 00:02:08.005 CXX test/cpp_headers/opal.o 00:02:08.005 CXX test/cpp_headers/opal_spec.o 00:02:08.005 CXX test/cpp_headers/pci_ids.o 00:02:08.005 CXX test/cpp_headers/pipe.o 00:02:08.005 CXX test/cpp_headers/reduce.o 00:02:08.006 CXX test/cpp_headers/queue.o 00:02:08.006 CXX test/cpp_headers/rpc.o 00:02:08.006 CXX test/cpp_headers/scsi.o 00:02:08.006 CXX test/cpp_headers/scsi_spec.o 00:02:08.006 CXX test/cpp_headers/scheduler.o 00:02:08.006 CXX test/cpp_headers/sock.o 00:02:08.006 CXX test/cpp_headers/stdinc.o 00:02:08.006 LINK spdk_dd 00:02:08.006 CXX test/cpp_headers/string.o 00:02:08.006 LINK hello_blob 00:02:08.006 CXX test/cpp_headers/thread.o 00:02:08.006 LINK hello_bdev 00:02:08.006 CXX test/cpp_headers/trace.o 00:02:08.270 LINK simple_copy 00:02:08.270 LINK thread 00:02:08.270 LINK sgl 00:02:08.270 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:08.270 LINK reset 00:02:08.270 LINK overhead 00:02:08.270 CXX test/cpp_headers/trace_parser.o 00:02:08.270 LINK scheduler 00:02:08.270 LINK nvme_dp 00:02:08.270 LINK nvmf 00:02:08.270 LINK aer 00:02:08.270 LINK fdp 00:02:08.270 LINK reconnect 00:02:08.270 LINK arbitration 00:02:08.270 LINK spdk_trace 00:02:08.270 CXX test/cpp_headers/tree.o 00:02:08.270 LINK nvme_compliance 00:02:08.270 LINK idxd_perf 00:02:08.270 LINK dif 00:02:08.270 CXX test/cpp_headers/ublk.o 00:02:08.270 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:08.270 CXX test/cpp_headers/util.o 00:02:08.270 CXX test/cpp_headers/uuid.o 
00:02:08.270 CXX test/cpp_headers/version.o 00:02:08.270 CXX test/cpp_headers/vfio_user_pci.o 00:02:08.270 LINK pci_ut 00:02:08.270 CXX test/cpp_headers/vfio_user_spec.o 00:02:08.270 CXX test/cpp_headers/vhost.o 00:02:08.270 CXX test/cpp_headers/vmd.o 00:02:08.270 CXX test/cpp_headers/xor.o 00:02:08.270 LINK abort 00:02:08.270 CXX test/cpp_headers/zipf.o 00:02:08.270 LINK test_dma 00:02:08.270 LINK accel_perf 00:02:08.529 LINK bdevio 00:02:08.529 LINK nvme_manage 00:02:08.529 LINK blobcli 00:02:08.529 LINK nvme_fuzz 00:02:08.529 LINK spdk_bdev 00:02:08.529 LINK spdk_nvme 00:02:08.788 LINK spdk_nvme_perf 00:02:08.788 LINK mem_callbacks 00:02:08.788 LINK spdk_top 00:02:08.788 LINK spdk_nvme_identify 00:02:08.788 LINK bdevperf 00:02:08.788 LINK memory_ut 00:02:08.788 LINK vhost_fuzz 00:02:09.048 LINK cuse 00:02:09.616 LINK iscsi_fuzz 00:02:11.525 LINK esnap 00:02:11.525 00:02:11.525 real 0m30.513s 00:02:11.525 user 4m59.549s 00:02:11.525 sys 2m50.352s 00:02:11.525 22:47:43 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:11.525 22:47:43 -- common/autotest_common.sh@10 -- $ set +x 00:02:11.525 ************************************ 00:02:11.525 END TEST make 00:02:11.525 ************************************ 00:02:11.525 22:47:43 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:11.525 22:47:43 -- nvmf/common.sh@7 -- # uname -s 00:02:11.525 22:47:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:11.525 22:47:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:11.525 22:47:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:11.525 22:47:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:11.525 22:47:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:11.525 22:47:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:11.525 22:47:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:11.525 22:47:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:11.525 22:47:43 
-- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:11.525 22:47:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:11.525 22:47:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:02:11.525 22:47:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:02:11.525 22:47:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:11.525 22:47:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:11.525 22:47:43 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:11.525 22:47:43 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:11.525 22:47:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:11.526 22:47:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:11.526 22:47:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:11.526 22:47:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:11.526 22:47:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:11.526 22:47:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:11.526 22:47:43 -- paths/export.sh@5 -- # export PATH 00:02:11.526 
22:47:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:11.526 22:47:43 -- nvmf/common.sh@46 -- # : 0 00:02:11.526 22:47:43 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:02:11.526 22:47:43 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:02:11.526 22:47:43 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:02:11.526 22:47:43 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:11.526 22:47:43 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:11.526 22:47:43 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:02:11.526 22:47:43 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:02:11.526 22:47:43 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:02:11.526 22:47:43 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:11.526 22:47:43 -- spdk/autotest.sh@32 -- # uname -s 00:02:11.526 22:47:43 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:11.526 22:47:43 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:11.526 22:47:43 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:11.526 22:47:43 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:11.526 22:47:43 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:11.526 22:47:43 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:11.786 22:47:43 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:11.786 22:47:43 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:11.786 22:47:43 -- spdk/autotest.sh@48 -- # udevadm_pid=2969951 00:02:11.786 22:47:43 -- spdk/autotest.sh@51 -- # mkdir -p 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:02:11.786 22:47:43 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:11.786 22:47:43 -- spdk/autotest.sh@54 -- # echo 2969953 00:02:11.786 22:47:43 -- spdk/autotest.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:02:11.786 22:47:43 -- spdk/autotest.sh@56 -- # echo 2969954 00:02:11.786 22:47:43 -- spdk/autotest.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:02:11.786 22:47:43 -- spdk/autotest.sh@58 -- # [[ ............................... != QEMU ]] 00:02:11.786 22:47:43 -- spdk/autotest.sh@60 -- # echo 2969955 00:02:11.786 22:47:43 -- spdk/autotest.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l 00:02:11.786 22:47:43 -- spdk/autotest.sh@62 -- # echo 2969956 00:02:11.786 22:47:43 -- spdk/autotest.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l 00:02:11.787 22:47:43 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:11.787 22:47:43 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:02:11.787 22:47:43 -- common/autotest_common.sh@712 -- # xtrace_disable 00:02:11.787 22:47:43 -- common/autotest_common.sh@10 -- # set +x 00:02:11.787 22:47:43 -- spdk/autotest.sh@70 -- # create_test_list 00:02:11.787 22:47:43 -- common/autotest_common.sh@736 -- # xtrace_disable 00:02:11.787 22:47:43 -- common/autotest_common.sh@10 -- # set +x 00:02:11.787 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.bmc.pm.log 00:02:11.787 Redirecting to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pm.log 00:02:11.787 22:47:44 -- spdk/autotest.sh@72 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:11.787 22:47:44 -- spdk/autotest.sh@72 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:11.787 22:47:44 -- spdk/autotest.sh@72 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:11.787 22:47:44 -- spdk/autotest.sh@73 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:11.787 22:47:44 -- spdk/autotest.sh@74 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:11.787 22:47:44 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:02:11.787 22:47:44 -- common/autotest_common.sh@1440 -- # uname 00:02:11.787 22:47:44 -- common/autotest_common.sh@1440 -- # '[' Linux = FreeBSD ']' 00:02:11.787 22:47:44 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:02:11.787 22:47:44 -- common/autotest_common.sh@1460 -- # uname 00:02:11.787 22:47:44 -- common/autotest_common.sh@1460 -- # [[ Linux = FreeBSD ]] 00:02:11.787 22:47:44 -- spdk/autotest.sh@82 -- # grep CC_TYPE mk/cc.mk 00:02:11.787 22:47:44 -- spdk/autotest.sh@82 -- # CC_TYPE=CC_TYPE=gcc 00:02:11.787 22:47:44 -- spdk/autotest.sh@83 -- # hash lcov 00:02:11.787 22:47:44 -- spdk/autotest.sh@83 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:11.787 22:47:44 -- spdk/autotest.sh@91 -- # export 'LCOV_OPTS= 00:02:11.787 --rc lcov_branch_coverage=1 00:02:11.787 --rc lcov_function_coverage=1 00:02:11.787 --rc genhtml_branch_coverage=1 00:02:11.787 --rc genhtml_function_coverage=1 00:02:11.787 --rc genhtml_legend=1 00:02:11.787 --rc geninfo_all_blocks=1 00:02:11.787 ' 00:02:11.787 22:47:44 -- spdk/autotest.sh@91 -- # LCOV_OPTS=' 00:02:11.787 --rc lcov_branch_coverage=1 00:02:11.787 --rc lcov_function_coverage=1 00:02:11.787 --rc genhtml_branch_coverage=1 00:02:11.787 --rc genhtml_function_coverage=1 00:02:11.787 --rc genhtml_legend=1 00:02:11.787 
--rc geninfo_all_blocks=1 00:02:11.787 ' 00:02:11.787 22:47:44 -- spdk/autotest.sh@92 -- # export 'LCOV=lcov 00:02:11.787 --rc lcov_branch_coverage=1 00:02:11.787 --rc lcov_function_coverage=1 00:02:11.787 --rc genhtml_branch_coverage=1 00:02:11.787 --rc genhtml_function_coverage=1 00:02:11.787 --rc genhtml_legend=1 00:02:11.787 --rc geninfo_all_blocks=1 00:02:11.787 --no-external' 00:02:11.787 22:47:44 -- spdk/autotest.sh@92 -- # LCOV='lcov 00:02:11.787 --rc lcov_branch_coverage=1 00:02:11.787 --rc lcov_function_coverage=1 00:02:11.787 --rc genhtml_branch_coverage=1 00:02:11.787 --rc genhtml_function_coverage=1 00:02:11.787 --rc genhtml_legend=1 00:02:11.787 --rc geninfo_all_blocks=1 00:02:11.787 --no-external' 00:02:11.787 22:47:44 -- spdk/autotest.sh@94 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:11.787 lcov: LCOV version 1.14 00:02:11.787 22:47:44 -- spdk/autotest.sh@96 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:14.322 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:02:14.322 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:02:14.322 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:02:14.322 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:02:14.322 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:02:14.322 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:02:32.446 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:32.446 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:32.446 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:32.446 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:32.446 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:32.446 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:32.446 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:32.446 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:32.446 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:32.446 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:32.446 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:32.446 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:32.446 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:32.446 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:32.446 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:32.446 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:32.446 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:32.446 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:32.446 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:32.446 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:32.446 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:32.446 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:32.446 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:32.446 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:32.446 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:32.446 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:32.446 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:32.446 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:32.446 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no 
functions found 00:02:32.446 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:32.446 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:32.446 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:32.446 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:32.446 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:32.446 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:32.446 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:32.446 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:32.446 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:32.446 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:32.446 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:32.446 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:32.446 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:32.446 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:32.446 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:32.446 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:32.446 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:32.446 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:32.446 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:32.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:32.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:32.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:32.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:32.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:32.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:32.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:32.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:32.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:32.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:32.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:02:32.447 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:32.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:02:32.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:32.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:32.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:32.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:02:32.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:32.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:32.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:32.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:32.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:32.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:32.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:32.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:32.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:32.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:32.447 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:32.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:32.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:32.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:02:32.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:32.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:32.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:32.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:32.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:32.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:02:32.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:02:32.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:02:32.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:32.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:32.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:32.447 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:02:32.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:02:32.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:32.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:32.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:32.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:32.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:02:32.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:32.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:02:32.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:02:32.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:02:32.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:02:32.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:02:32.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:02:32.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:02:32.447 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:02:32.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:02:32.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:02:32.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:02:32.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:02:32.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:02:32.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:02:32.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:02:32.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:02:32.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:02:32.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:02:32.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:32.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:32.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:02:32.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:02:32.447 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:02:32.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:02:32.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:32.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:02:32.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:02:32.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:02:32.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:32.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:02:32.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:02:32.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:02:32.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:32.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:32.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:32.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:32.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:02:32.447 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:02:32.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:32.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:02:32.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:32.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:02:32.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:32.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:32.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:32.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:32.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:32.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:32.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:32.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:02:32.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:32.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:32.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:32.448 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:32.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:32.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:32.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:32.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:32.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:32.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:32.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:02:32.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:32.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:02:32.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:32.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:02:32.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:32.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:32.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:32.448 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:32.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:32.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:32.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:32.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:32.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:35.732 22:48:07 -- spdk/autotest.sh@100 -- # timing_enter pre_cleanup 00:02:35.732 22:48:07 -- common/autotest_common.sh@712 -- # xtrace_disable 00:02:35.732 22:48:07 -- common/autotest_common.sh@10 -- # set +x 00:02:35.732 22:48:07 -- spdk/autotest.sh@102 -- # rm -f 00:02:35.732 22:48:07 -- spdk/autotest.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:39.016 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:02:39.016 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:02:39.016 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:02:39.016 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:02:39.016 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:02:39.016 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:02:39.016 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:02:39.016 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:02:39.016 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:02:39.016 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:02:39.016 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:02:39.275 0000:80:04.4 (8086 2021): Already using the ioatdma 
driver 00:02:39.275 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:02:39.275 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:02:39.275 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:02:39.275 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:02:39.275 0000:d8:00.0 (8086 0a54): Already using the nvme driver 00:02:39.275 22:48:11 -- spdk/autotest.sh@107 -- # get_zoned_devs 00:02:39.275 22:48:11 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:02:39.275 22:48:11 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:02:39.275 22:48:11 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:02:39.275 22:48:11 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:02:39.275 22:48:11 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:02:39.275 22:48:11 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:02:39.275 22:48:11 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:39.275 22:48:11 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:02:39.275 22:48:11 -- spdk/autotest.sh@109 -- # (( 0 > 0 )) 00:02:39.275 22:48:11 -- spdk/autotest.sh@121 -- # ls /dev/nvme0n1 00:02:39.275 22:48:11 -- spdk/autotest.sh@121 -- # grep -v p 00:02:39.275 22:48:11 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:02:39.275 22:48:11 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:02:39.275 22:48:11 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0n1 00:02:39.275 22:48:11 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:02:39.275 22:48:11 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:39.275 No valid GPT data, bailing 00:02:39.275 22:48:11 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:39.275 22:48:11 -- scripts/common.sh@393 -- # pt= 00:02:39.275 22:48:11 -- scripts/common.sh@394 -- # return 1 
00:02:39.275 22:48:11 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:39.275 1+0 records in 00:02:39.275 1+0 records out 00:02:39.275 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00602391 s, 174 MB/s 00:02:39.275 22:48:11 -- spdk/autotest.sh@129 -- # sync 00:02:39.275 22:48:11 -- spdk/autotest.sh@131 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:39.275 22:48:11 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:39.275 22:48:11 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:45.840 22:48:17 -- spdk/autotest.sh@135 -- # uname -s 00:02:45.840 22:48:17 -- spdk/autotest.sh@135 -- # '[' Linux = Linux ']' 00:02:45.840 22:48:17 -- spdk/autotest.sh@136 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:45.840 22:48:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:45.840 22:48:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:45.840 22:48:17 -- common/autotest_common.sh@10 -- # set +x 00:02:45.840 ************************************ 00:02:45.840 START TEST setup.sh 00:02:45.840 ************************************ 00:02:45.840 22:48:17 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:45.840 * Looking for test storage... 
00:02:45.840 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:45.840 22:48:18 -- setup/test-setup.sh@10 -- # uname -s 00:02:45.840 22:48:18 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:45.840 22:48:18 -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:45.840 22:48:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:45.840 22:48:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:45.840 22:48:18 -- common/autotest_common.sh@10 -- # set +x 00:02:45.840 ************************************ 00:02:45.840 START TEST acl 00:02:45.840 ************************************ 00:02:45.840 22:48:18 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:45.840 * Looking for test storage... 00:02:45.840 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:45.840 22:48:18 -- setup/acl.sh@10 -- # get_zoned_devs 00:02:45.840 22:48:18 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:02:45.840 22:48:18 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:02:45.840 22:48:18 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:02:45.840 22:48:18 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:02:45.840 22:48:18 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:02:45.840 22:48:18 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:02:45.840 22:48:18 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:45.840 22:48:18 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:02:45.840 22:48:18 -- setup/acl.sh@12 -- # devs=() 00:02:45.840 22:48:18 -- setup/acl.sh@12 -- # declare -a devs 00:02:45.840 22:48:18 -- setup/acl.sh@13 -- # drivers=() 00:02:45.840 22:48:18 -- setup/acl.sh@13 -- # declare -A drivers 00:02:45.840 22:48:18 -- setup/acl.sh@51 -- # 
setup reset 00:02:45.840 22:48:18 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:45.840 22:48:18 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:50.032 22:48:22 -- setup/acl.sh@52 -- # collect_setup_devs 00:02:50.032 22:48:22 -- setup/acl.sh@16 -- # local dev driver 00:02:50.032 22:48:22 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:50.032 22:48:22 -- setup/acl.sh@15 -- # setup output status 00:02:50.032 22:48:22 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:50.032 22:48:22 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:52.592 Hugepages 00:02:52.592 node hugesize free / total 00:02:52.592 22:48:24 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:52.592 22:48:24 -- setup/acl.sh@19 -- # continue 00:02:52.592 22:48:24 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:52.592 22:48:24 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:52.592 22:48:24 -- setup/acl.sh@19 -- # continue 00:02:52.592 22:48:24 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:52.592 22:48:24 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:52.592 22:48:24 -- setup/acl.sh@19 -- # continue 00:02:52.592 22:48:24 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:52.592 00:02:52.592 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:52.592 22:48:24 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:52.592 22:48:24 -- setup/acl.sh@19 -- # continue 00:02:52.592 22:48:24 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:52.592 22:48:24 -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:02:52.592 22:48:24 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:52.592 22:48:24 -- setup/acl.sh@20 -- # continue 00:02:52.592 22:48:24 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:52.592 22:48:24 -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:02:52.592 22:48:24 -- 
setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:52.592 22:48:24 -- setup/acl.sh@20 -- # continue 00:02:52.592 22:48:24 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:52.593 22:48:24 -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:02:52.593 22:48:24 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:52.593 22:48:24 -- setup/acl.sh@20 -- # continue 00:02:52.593 22:48:24 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:52.593 22:48:24 -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:02:52.593 22:48:24 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:52.593 22:48:24 -- setup/acl.sh@20 -- # continue 00:02:52.593 22:48:24 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:52.593 22:48:24 -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:02:52.593 22:48:24 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:52.593 22:48:24 -- setup/acl.sh@20 -- # continue 00:02:52.593 22:48:24 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:52.593 22:48:24 -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:02:52.593 22:48:24 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:52.593 22:48:24 -- setup/acl.sh@20 -- # continue 00:02:52.593 22:48:24 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:52.593 22:48:24 -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:02:52.593 22:48:24 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:52.593 22:48:24 -- setup/acl.sh@20 -- # continue 00:02:52.593 22:48:24 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:52.593 22:48:24 -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:02:52.593 22:48:24 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:52.593 22:48:24 -- setup/acl.sh@20 -- # continue 00:02:52.593 22:48:24 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:52.593 22:48:24 -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:02:52.593 22:48:24 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:52.593 22:48:24 -- 
setup/acl.sh@20 -- # continue 00:02:52.593 22:48:24 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:52.593 22:48:24 -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:02:52.593 22:48:24 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:52.593 22:48:24 -- setup/acl.sh@20 -- # continue 00:02:52.593 22:48:24 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:52.593 22:48:24 -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:02:52.593 22:48:24 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:52.593 22:48:24 -- setup/acl.sh@20 -- # continue 00:02:52.593 22:48:24 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:52.593 22:48:24 -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:02:52.593 22:48:24 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:52.593 22:48:24 -- setup/acl.sh@20 -- # continue 00:02:52.593 22:48:24 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:52.593 22:48:24 -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:02:52.593 22:48:24 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:52.593 22:48:24 -- setup/acl.sh@20 -- # continue 00:02:52.593 22:48:24 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:52.593 22:48:24 -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:02:52.593 22:48:24 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:52.593 22:48:24 -- setup/acl.sh@20 -- # continue 00:02:52.593 22:48:24 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:52.593 22:48:24 -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:02:52.593 22:48:24 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:52.593 22:48:24 -- setup/acl.sh@20 -- # continue 00:02:52.593 22:48:24 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:52.593 22:48:24 -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:02:52.593 22:48:24 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:52.593 22:48:24 -- setup/acl.sh@20 -- # continue 00:02:52.593 22:48:24 -- setup/acl.sh@18 -- # read 
-r _ dev _ _ _ driver _ 00:02:52.593 22:48:24 -- setup/acl.sh@19 -- # [[ 0000:d8:00.0 == *:*:*.* ]] 00:02:52.593 22:48:24 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:52.593 22:48:24 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:02:52.593 22:48:24 -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:52.593 22:48:24 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:52.593 22:48:24 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:52.593 22:48:24 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:02:52.593 22:48:24 -- setup/acl.sh@54 -- # run_test denied denied 00:02:52.593 22:48:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:52.593 22:48:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:52.593 22:48:24 -- common/autotest_common.sh@10 -- # set +x 00:02:52.593 ************************************ 00:02:52.593 START TEST denied 00:02:52.593 ************************************ 00:02:52.593 22:48:24 -- common/autotest_common.sh@1104 -- # denied 00:02:52.593 22:48:24 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:d8:00.0' 00:02:52.593 22:48:24 -- setup/acl.sh@38 -- # setup output config 00:02:52.593 22:48:24 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:d8:00.0' 00:02:52.593 22:48:24 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:52.593 22:48:24 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:55.879 0000:d8:00.0 (8086 0a54): Skipping denied controller at 0000:d8:00.0 00:02:55.879 22:48:28 -- setup/acl.sh@40 -- # verify 0000:d8:00.0 00:02:55.879 22:48:28 -- setup/acl.sh@28 -- # local dev driver 00:02:55.879 22:48:28 -- setup/acl.sh@30 -- # for dev in "$@" 00:02:55.879 22:48:28 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:d8:00.0 ]] 00:02:55.879 22:48:28 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:d8:00.0/driver 00:02:55.879 22:48:28 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:55.879 22:48:28 -- 
setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:55.879 22:48:28 -- setup/acl.sh@41 -- # setup reset 00:02:55.879 22:48:28 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:55.879 22:48:28 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:01.151 00:03:01.151 real 0m7.913s 00:03:01.151 user 0m2.419s 00:03:01.151 sys 0m4.788s 00:03:01.151 22:48:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:01.151 22:48:32 -- common/autotest_common.sh@10 -- # set +x 00:03:01.151 ************************************ 00:03:01.151 END TEST denied 00:03:01.151 ************************************ 00:03:01.151 22:48:32 -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:01.151 22:48:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:01.151 22:48:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:01.151 22:48:32 -- common/autotest_common.sh@10 -- # set +x 00:03:01.151 ************************************ 00:03:01.151 START TEST allowed 00:03:01.151 ************************************ 00:03:01.151 22:48:32 -- common/autotest_common.sh@1104 -- # allowed 00:03:01.151 22:48:32 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:d8:00.0 00:03:01.151 22:48:32 -- setup/acl.sh@45 -- # setup output config 00:03:01.151 22:48:32 -- setup/acl.sh@46 -- # grep -E '0000:d8:00.0 .*: nvme -> .*' 00:03:01.151 22:48:32 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:01.151 22:48:32 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:05.348 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:03:05.348 22:48:37 -- setup/acl.sh@47 -- # verify 00:03:05.348 22:48:37 -- setup/acl.sh@28 -- # local dev driver 00:03:05.348 22:48:37 -- setup/acl.sh@48 -- # setup reset 00:03:05.348 22:48:37 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:05.348 22:48:37 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:09.550 
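The `verify` traces above (`readlink -f /sys/bus/pci/devices/0000:d8:00.0/driver`, then `[[ nvme == \n\v\m\e ]]`) boil down to: resolve the driver symlink under sysfs and compare it with the expected driver name. A minimal sketch of that pattern, with the sysfs root made a parameter (an assumption added here for testability — the real script hardcodes `/sys`):

```shell
#!/usr/bin/env bash
# Sketch of the driver-binding check traced above. check_driver resolves the
# PCI device's "driver" symlink and compares the target's basename against the
# expected driver (e.g. nvme before the rebind, vfio-pci after).
check_driver() {
    local sysfs_root=$1 bdf=$2 expected=$3
    local link=$sysfs_root/bus/pci/devices/$bdf/driver
    [[ -e $link ]] || return 1            # device absent or not bound to any driver
    local driver
    driver=$(basename "$(readlink -f "$link")")
    [[ $driver == "$expected" ]]
}
```

Against a real system this would be called as `check_driver /sys 0000:d8:00.0 vfio-pci`; the function's exit status plays the same role as the `[[ nvme == \n\v\m\e ]]` test in the trace.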
00:03:09.550 real 0m8.424s 00:03:09.550 user 0m2.246s 00:03:09.550 sys 0m4.714s 00:03:09.550 22:48:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:09.550 22:48:41 -- common/autotest_common.sh@10 -- # set +x 00:03:09.550 ************************************ 00:03:09.550 END TEST allowed 00:03:09.550 ************************************ 00:03:09.550 00:03:09.550 real 0m23.347s 00:03:09.550 user 0m7.033s 00:03:09.550 sys 0m14.302s 00:03:09.550 22:48:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:09.550 22:48:41 -- common/autotest_common.sh@10 -- # set +x 00:03:09.550 ************************************ 00:03:09.550 END TEST acl 00:03:09.550 ************************************ 00:03:09.550 22:48:41 -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:09.550 22:48:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:09.550 22:48:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:09.550 22:48:41 -- common/autotest_common.sh@10 -- # set +x 00:03:09.550 ************************************ 00:03:09.550 START TEST hugepages 00:03:09.550 ************************************ 00:03:09.550 22:48:41 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:09.550 * Looking for test storage... 
00:03:09.550 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:09.550 22:48:41 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:09.550 22:48:41 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:09.550 22:48:41 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:09.550 22:48:41 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:09.550 22:48:41 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:09.550 22:48:41 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:09.550 22:48:41 -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:09.550 22:48:41 -- setup/common.sh@18 -- # local node= 00:03:09.550 22:48:41 -- setup/common.sh@19 -- # local var val 00:03:09.550 22:48:41 -- setup/common.sh@20 -- # local mem_f mem 00:03:09.550 22:48:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:09.550 22:48:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:09.550 22:48:41 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:09.550 22:48:41 -- setup/common.sh@28 -- # mapfile -t mem 00:03:09.550 22:48:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:09.550 22:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.550 22:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.550 22:48:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 40221008 kB' 'MemAvailable: 44141180 kB' 'Buffers: 2704 kB' 'Cached: 11819912 kB' 'SwapCached: 0 kB' 'Active: 8667908 kB' 'Inactive: 3676228 kB' 'Active(anon): 8278044 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676228 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524896 kB' 'Mapped: 174572 kB' 'Shmem: 7756524 kB' 'KReclaimable: 501572 kB' 'Slab: 1137832 kB' 'SReclaimable: 501572 kB' 'SUnreclaim: 636260 kB' 'KernelStack: 22160 kB' 'PageTables: 
8556 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36439048 kB' 'Committed_AS: 9701320 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216564 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 3116404 kB' 'DirectMap2M: 14395392 kB' 'DirectMap1G: 51380224 kB' 00:03:09.550 22:48:41 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.550 22:48:41 -- setup/common.sh@32 -- # continue 00:03:09.550 22:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.550 22:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.550 22:48:41 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.550 22:48:41 -- setup/common.sh@32 -- # continue 00:03:09.550 22:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.550 22:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.550 22:48:41 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.550 22:48:41 -- setup/common.sh@32 -- # continue 00:03:09.550 22:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.550 22:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.550 22:48:41 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.550 22:48:41 -- setup/common.sh@32 -- # continue 00:03:09.550 22:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.550 22:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.550 22:48:41 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.550 22:48:41 -- setup/common.sh@32 -- # continue 00:03:09.550 22:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.550 22:48:41 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:09.550 22:48:41 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.550 22:48:41 -- setup/common.sh@32 -- # continue 00:03:09.550 22:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.550 22:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.550 22:48:41 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.550 22:48:41 -- setup/common.sh@32 -- # continue 00:03:09.550 22:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.550 22:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.550 22:48:41 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.550 22:48:41 -- setup/common.sh@32 -- # continue 00:03:09.550 22:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.550 22:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.550 22:48:41 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.550 22:48:41 -- setup/common.sh@32 -- # continue 00:03:09.550 22:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.550 22:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.550 22:48:41 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.550 22:48:41 -- setup/common.sh@32 -- # continue 00:03:09.550 22:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.550 22:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.550 22:48:41 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.550 22:48:41 -- setup/common.sh@32 -- # continue 00:03:09.550 22:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.550 22:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.550 22:48:41 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.550 22:48:41 -- setup/common.sh@32 -- # continue 00:03:09.550 22:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.550 22:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.550 22:48:41 -- 
setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.550 22:48:41 -- setup/common.sh@32 -- # continue 00:03:09.550 22:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.550 22:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.550 22:48:41 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.550 22:48:41 -- setup/common.sh@32 -- # continue 00:03:09.550 22:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.550 22:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.550 22:48:41 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.550 22:48:41 -- setup/common.sh@32 -- # continue 00:03:09.550 22:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.550 22:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.550 22:48:41 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.550 22:48:41 -- setup/common.sh@32 -- # continue 00:03:09.550 22:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.550 22:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.550 22:48:41 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.550 22:48:41 -- setup/common.sh@32 -- # continue 00:03:09.550 22:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.551 22:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.551 22:48:41 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.551 22:48:41 -- setup/common.sh@32 -- # continue 00:03:09.551 22:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.551 22:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.551 22:48:41 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.551 22:48:41 -- setup/common.sh@32 -- # continue 00:03:09.551 22:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.551 22:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.551 22:48:41 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 
00:03:09.551 22:48:41 -- setup/common.sh@32 -- # continue 00:03:09.551 22:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.551 22:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.551 22:48:41 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.551 22:48:41 -- setup/common.sh@32 -- # continue 00:03:09.551 22:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.551 22:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.551 22:48:41 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.551 22:48:41 -- setup/common.sh@32 -- # continue 00:03:09.551 22:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.551 22:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.551 22:48:41 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.551 22:48:41 -- setup/common.sh@32 -- # continue 00:03:09.551 22:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.551 22:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.551 22:48:41 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.551 22:48:41 -- setup/common.sh@32 -- # continue 00:03:09.551 22:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.551 22:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.551 22:48:41 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.551 22:48:41 -- setup/common.sh@32 -- # continue 00:03:09.551 22:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.551 22:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.551 22:48:41 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.551 22:48:41 -- setup/common.sh@32 -- # continue 00:03:09.551 22:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.551 22:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.551 22:48:41 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.551 22:48:41 -- setup/common.sh@32 -- # continue 00:03:09.551 
22:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.551 22:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.551 22:48:41 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.551 22:48:41 -- setup/common.sh@32 -- # continue 00:03:09.551 22:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.551 22:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.551 22:48:41 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.551 22:48:41 -- setup/common.sh@32 -- # continue 00:03:09.551 22:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.551 22:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.551 22:48:41 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.551 22:48:41 -- setup/common.sh@32 -- # continue 00:03:09.551 22:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.551 22:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.551 22:48:41 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.551 22:48:41 -- setup/common.sh@32 -- # continue 00:03:09.551 22:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.551 22:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.551 22:48:41 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.551 22:48:41 -- setup/common.sh@32 -- # continue 00:03:09.551 22:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.551 22:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.551 22:48:41 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.551 22:48:41 -- setup/common.sh@32 -- # continue 00:03:09.551 22:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.551 22:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.551 22:48:41 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.551 22:48:41 -- setup/common.sh@32 -- # continue 00:03:09.551 22:48:41 -- setup/common.sh@31 -- # IFS=': ' 
00:03:09.551 22:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.551 22:48:41 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.551 22:48:41 -- setup/common.sh@32 -- # continue 00:03:09.551 22:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.551 22:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.551 22:48:41 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.551 22:48:41 -- setup/common.sh@32 -- # continue 00:03:09.551 22:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.551 22:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.551 22:48:41 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.551 22:48:41 -- setup/common.sh@32 -- # continue 00:03:09.551 22:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.551 22:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.551 22:48:41 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.551 22:48:41 -- setup/common.sh@32 -- # continue 00:03:09.551 22:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.551 22:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.551 22:48:41 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.551 22:48:41 -- setup/common.sh@32 -- # continue 00:03:09.551 22:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.551 22:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.551 22:48:41 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.551 22:48:41 -- setup/common.sh@32 -- # continue 00:03:09.551 22:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.551 22:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.551 22:48:41 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.551 22:48:41 -- setup/common.sh@32 -- # continue 00:03:09.551 22:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.551 22:48:41 -- setup/common.sh@31 -- 
# read -r var val _ 00:03:09.551 22:48:41 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.551 22:48:41 -- setup/common.sh@32 -- # continue 00:03:09.551 22:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.551 22:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.551 22:48:41 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.551 22:48:41 -- setup/common.sh@32 -- # continue 00:03:09.551 22:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.551 22:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.551 22:48:41 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.551 22:48:41 -- setup/common.sh@32 -- # continue 00:03:09.551 22:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.551 22:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.551 22:48:41 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.551 22:48:41 -- setup/common.sh@32 -- # continue 00:03:09.551 22:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.551 22:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.551 22:48:41 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.551 22:48:41 -- setup/common.sh@32 -- # continue 00:03:09.551 22:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.551 22:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.551 22:48:41 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.551 22:48:41 -- setup/common.sh@32 -- # continue 00:03:09.551 22:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.551 22:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.551 22:48:41 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.551 22:48:41 -- setup/common.sh@32 -- # continue 00:03:09.551 22:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.551 22:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.551 22:48:41 -- 
setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.551 22:48:41 -- setup/common.sh@32 -- # continue 00:03:09.551 22:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.551 22:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.551 22:48:41 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.551 22:48:41 -- setup/common.sh@32 -- # continue 00:03:09.551 22:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.551 22:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.551 22:48:41 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.551 22:48:41 -- setup/common.sh@32 -- # continue 00:03:09.551 22:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.551 22:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.551 22:48:41 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.551 22:48:41 -- setup/common.sh@32 -- # continue 00:03:09.551 22:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.551 22:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.551 22:48:41 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.551 22:48:41 -- setup/common.sh@33 -- # echo 2048 00:03:09.551 22:48:41 -- setup/common.sh@33 -- # return 0 00:03:09.551 22:48:41 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:09.551 22:48:41 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:09.551 22:48:41 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:09.551 22:48:41 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:09.551 22:48:41 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:09.551 22:48:41 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:09.551 22:48:41 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:09.551 22:48:41 -- setup/hugepages.sh@207 -- # get_nodes 00:03:09.551 22:48:41 -- setup/hugepages.sh@27 
-- # local node 00:03:09.551 22:48:41 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:09.552 22:48:41 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:09.552 22:48:41 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:09.552 22:48:41 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:09.552 22:48:41 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:09.552 22:48:41 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:09.552 22:48:41 -- setup/hugepages.sh@208 -- # clear_hp 00:03:09.552 22:48:41 -- setup/hugepages.sh@37 -- # local node hp 00:03:09.552 22:48:41 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:09.552 22:48:41 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:09.552 22:48:41 -- setup/hugepages.sh@41 -- # echo 0 00:03:09.552 22:48:41 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:09.552 22:48:41 -- setup/hugepages.sh@41 -- # echo 0 00:03:09.552 22:48:41 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:09.552 22:48:41 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:09.552 22:48:41 -- setup/hugepages.sh@41 -- # echo 0 00:03:09.552 22:48:41 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:09.552 22:48:41 -- setup/hugepages.sh@41 -- # echo 0 00:03:09.552 22:48:41 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:09.552 22:48:41 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:09.552 22:48:41 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:09.552 22:48:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:09.552 22:48:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:09.552 22:48:41 -- common/autotest_common.sh@10 -- # set +x 00:03:09.552 
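The `clear_hp` traces above iterate every NUMA node directory and write 0 to each per-size `nr_hugepages` file so the next test starts from a clean slate. A sketch of that loop, with the node root parameterized (an assumption for testability — the real helper walks `/sys/devices/system/node` directly):

```shell
#!/usr/bin/env bash
# Sketch of the clear_hp loop traced above: for every nodeN directory, zero
# each hugepages-<size>kB/nr_hugepages counter.
clear_hp() {
    local node_root=$1 node hp
    for node in "$node_root"/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages"
        done
    done
}
```

On a live system this needs root (the sysfs files are root-writable), which is why the trace runs inside the privileged setup scripts.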
************************************ 00:03:09.552 START TEST default_setup 00:03:09.552 ************************************ 00:03:09.552 22:48:41 -- common/autotest_common.sh@1104 -- # default_setup 00:03:09.552 22:48:41 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:09.552 22:48:41 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:09.552 22:48:41 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:09.552 22:48:41 -- setup/hugepages.sh@51 -- # shift 00:03:09.552 22:48:41 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:09.552 22:48:41 -- setup/hugepages.sh@52 -- # local node_ids 00:03:09.552 22:48:41 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:09.552 22:48:41 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:09.552 22:48:41 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:09.552 22:48:41 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:09.552 22:48:41 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:09.552 22:48:41 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:09.552 22:48:41 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:09.552 22:48:41 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:09.552 22:48:41 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:09.552 22:48:41 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:09.552 22:48:41 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:09.552 22:48:41 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:09.552 22:48:41 -- setup/hugepages.sh@73 -- # return 0 00:03:09.552 22:48:41 -- setup/hugepages.sh@137 -- # setup output 00:03:09.552 22:48:41 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:09.552 22:48:41 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:12.845 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:12.845 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:12.845 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 
00:03:12.845 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:12.845 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:12.845 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:12.845 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:12.845 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:12.845 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:12.846 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:12.846 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:12.846 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:12.846 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:12.846 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:12.846 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:12.846 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:14.225 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:03:14.489 22:48:46 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:14.489 22:48:46 -- setup/hugepages.sh@89 -- # local node 00:03:14.489 22:48:46 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:14.489 22:48:46 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:14.489 22:48:46 -- setup/hugepages.sh@92 -- # local surp 00:03:14.489 22:48:46 -- setup/hugepages.sh@93 -- # local resv 00:03:14.489 22:48:46 -- setup/hugepages.sh@94 -- # local anon 00:03:14.489 22:48:46 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:14.489 22:48:46 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:14.489 22:48:46 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:14.489 22:48:46 -- setup/common.sh@18 -- # local node= 00:03:14.489 22:48:46 -- setup/common.sh@19 -- # local var val 00:03:14.489 22:48:46 -- setup/common.sh@20 -- # local mem_f mem 00:03:14.489 22:48:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:14.489 22:48:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:14.489 22:48:46 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:14.489 22:48:46 -- setup/common.sh@28 -- # 
mapfile -t mem 00:03:14.489 22:48:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:14.489 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.489 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.489 22:48:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 42384916 kB' 'MemAvailable: 46304952 kB' 'Buffers: 2704 kB' 'Cached: 11820036 kB' 'SwapCached: 0 kB' 'Active: 8683456 kB' 'Inactive: 3676228 kB' 'Active(anon): 8293592 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676228 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540396 kB' 'Mapped: 174780 kB' 'Shmem: 7756648 kB' 'KReclaimable: 501436 kB' 'Slab: 1135716 kB' 'SReclaimable: 501436 kB' 'SUnreclaim: 634280 kB' 'KernelStack: 22272 kB' 'PageTables: 8436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 9718392 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216644 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3116404 kB' 'DirectMap2M: 14395392 kB' 'DirectMap1G: 51380224 kB' 00:03:14.489 22:48:46 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.489 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.489 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.489 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.489 22:48:46 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.489 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.489 
22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.489 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.489 22:48:46 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.489 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.489 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.489 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.489 22:48:46 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.489 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.489 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.489 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.489 22:48:46 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.489 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.489 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.489 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.489 22:48:46 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.489 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.489 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.489 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.489 22:48:46 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.489 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.489 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.489 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.489 22:48:46 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.489 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.489 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.489 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.489 22:48:46 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.489 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.489 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 
00:03:14.489 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.489 22:48:46 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.489 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.489 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.489 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.489 22:48:46 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.489 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.489 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.489 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.489 22:48:46 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.489 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.489 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.489 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.489 22:48:46 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.489 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.489 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.489 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.489 22:48:46 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.489 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.489 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.489 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.489 22:48:46 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.489 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.489 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.489 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.489 22:48:46 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.489 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.489 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.489 22:48:46 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:14.489 22:48:46 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.489 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.489 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.489 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.489 22:48:46 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.489 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.489 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.489 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.489 22:48:46 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.489 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.489 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.489 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.489 22:48:46 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.489 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.489 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.489 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.489 22:48:46 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.489 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.489 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.489 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.489 22:48:46 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.489 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.489 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.489 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.489 22:48:46 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.489 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.489 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.489 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.489 22:48:46 -- 
setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.489 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.489 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.489 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.489 22:48:46 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.489 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.489 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.489 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.489 22:48:46 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.489 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.490 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.490 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.490 22:48:46 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.490 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.490 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.490 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.490 22:48:46 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.490 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.490 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.490 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.490 22:48:46 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.490 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.490 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.490 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.490 22:48:46 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.490 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.490 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.490 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.490 22:48:46 -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.490 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.490 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.490 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.490 22:48:46 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.490 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.490 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.490 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.490 22:48:46 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.490 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.490 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.490 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.490 22:48:46 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.490 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.490 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.490 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.490 22:48:46 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.490 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.490 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.490 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.490 22:48:46 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.490 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.490 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.490 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.490 22:48:46 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.490 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.490 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.490 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.490 22:48:46 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:14.490 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.490 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.490 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.490 22:48:46 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.490 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.490 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.490 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.490 22:48:46 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.490 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.490 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.490 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.490 22:48:46 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.490 22:48:46 -- setup/common.sh@33 -- # echo 0 00:03:14.490 22:48:46 -- setup/common.sh@33 -- # return 0 00:03:14.490 22:48:46 -- setup/hugepages.sh@97 -- # anon=0 00:03:14.490 22:48:46 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:14.490 22:48:46 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:14.490 22:48:46 -- setup/common.sh@18 -- # local node= 00:03:14.490 22:48:46 -- setup/common.sh@19 -- # local var val 00:03:14.490 22:48:46 -- setup/common.sh@20 -- # local mem_f mem 00:03:14.490 22:48:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:14.490 22:48:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:14.490 22:48:46 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:14.490 22:48:46 -- setup/common.sh@28 -- # mapfile -t mem 00:03:14.490 22:48:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:14.490 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.490 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.490 22:48:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 42386732 kB' 'MemAvailable: 46306768 kB' 
'Buffers: 2704 kB' 'Cached: 11820036 kB' 'SwapCached: 0 kB' 'Active: 8683636 kB' 'Inactive: 3676228 kB' 'Active(anon): 8293772 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676228 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540552 kB' 'Mapped: 174780 kB' 'Shmem: 7756648 kB' 'KReclaimable: 501436 kB' 'Slab: 1135716 kB' 'SReclaimable: 501436 kB' 'SUnreclaim: 634280 kB' 'KernelStack: 22368 kB' 'PageTables: 8952 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 9718400 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216644 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3116404 kB' 'DirectMap2M: 14395392 kB' 'DirectMap1G: 51380224 kB' 00:03:14.490 22:48:46 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.490 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.490 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.490 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.490 22:48:46 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.490 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.490 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.490 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.490 22:48:46 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.490 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.490 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.490 22:48:46 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:14.490 22:48:46 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.490 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.490 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.490 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.490 22:48:46 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.490 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.490 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.490 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.490 22:48:46 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.490 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.490 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.490 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.490 22:48:46 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.490 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.490 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.490 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.490 22:48:46 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.490 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.490 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.490 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.490 22:48:46 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.490 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.490 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.490 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.490 22:48:46 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.490 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.490 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.490 22:48:46 -- setup/common.sh@31 -- # read -r var val 
_ 00:03:14.490 22:48:46 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.490 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.490 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.490 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.490 22:48:46 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.490 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.490 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.490 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.490 22:48:46 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.490 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.490 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.490 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.490 22:48:46 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.490 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.490 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.490 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.490 22:48:46 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.490 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.490 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.490 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.490 22:48:46 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.490 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.490 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.490 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.490 22:48:46 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.490 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.490 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.490 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.490 22:48:46 -- 
setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.490 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.490 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.490 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.490 22:48:46 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.490 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.490 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.490 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.490 22:48:46 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.490 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.490 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.490 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.490 22:48:46 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.490 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.490 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.490 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.490 22:48:46 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.490 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.490 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.490 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.490 22:48:46 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.490 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.490 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.490 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.490 22:48:46 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.490 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.490 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.490 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.490 22:48:46 -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.490 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.490 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.490 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.490 22:48:46 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.490 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.490 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.490 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.490 22:48:46 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.490 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.491 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.491 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.491 22:48:46 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.491 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.491 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.491 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.491 22:48:46 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.491 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.491 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.491 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.491 22:48:46 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.491 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.491 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.491 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.491 22:48:46 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.491 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.491 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.491 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.491 22:48:46 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:14.491 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.491 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.491 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.491 22:48:46 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.491 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.491 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.491 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.491 22:48:46 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.491 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.491 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.491 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.491 22:48:46 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.491 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.491 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.491 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.491 22:48:46 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.491 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.491 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.491 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.491 22:48:46 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.491 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.491 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.491 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.491 22:48:46 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.491 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.491 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.491 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.491 22:48:46 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.491 22:48:46 
-- setup/common.sh@32 -- # continue 00:03:14.491 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.491 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.491 22:48:46 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.491 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.491 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.491 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.491 22:48:46 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.491 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.491 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.491 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.491 22:48:46 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.491 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.491 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.491 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.491 22:48:46 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.491 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.491 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.491 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.491 22:48:46 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.491 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.491 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.491 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.491 22:48:46 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.491 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.491 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.491 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.491 22:48:46 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.491 22:48:46 -- 
setup/common.sh@32 -- # continue 00:03:14.491 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.491 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.491 22:48:46 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.491 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.491 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.491 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.491 22:48:46 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.491 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.491 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.491 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.491 22:48:46 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.491 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.491 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.491 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.491 22:48:46 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.491 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.491 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.491 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.491 22:48:46 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.491 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.491 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.491 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.491 22:48:46 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.491 22:48:46 -- setup/common.sh@33 -- # echo 0 00:03:14.491 22:48:46 -- setup/common.sh@33 -- # return 0 00:03:14.491 22:48:46 -- setup/hugepages.sh@99 -- # surp=0 00:03:14.491 22:48:46 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:14.491 22:48:46 -- setup/common.sh@17 -- # local 
get=HugePages_Rsvd 00:03:14.491 22:48:46 -- setup/common.sh@18 -- # local node= 00:03:14.491 22:48:46 -- setup/common.sh@19 -- # local var val 00:03:14.491 22:48:46 -- setup/common.sh@20 -- # local mem_f mem 00:03:14.491 22:48:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:14.491 22:48:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:14.491 22:48:46 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:14.491 22:48:46 -- setup/common.sh@28 -- # mapfile -t mem 00:03:14.491 22:48:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:14.491 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.491 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.491 22:48:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 42386668 kB' 'MemAvailable: 46306704 kB' 'Buffers: 2704 kB' 'Cached: 11820052 kB' 'SwapCached: 0 kB' 'Active: 8683264 kB' 'Inactive: 3676228 kB' 'Active(anon): 8293400 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676228 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540084 kB' 'Mapped: 174832 kB' 'Shmem: 7756664 kB' 'KReclaimable: 501436 kB' 'Slab: 1135712 kB' 'SReclaimable: 501436 kB' 'SUnreclaim: 634276 kB' 'KernelStack: 22336 kB' 'PageTables: 9068 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 9718416 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216660 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3116404 kB' 'DirectMap2M: 
14395392 kB' 'DirectMap1G: 51380224 kB' 00:03:14.491 22:48:46 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.491 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.491 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.491 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.491 22:48:46 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.491 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.491 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.491 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.491 22:48:46 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.491 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.491 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.491 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.491 22:48:46 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.491 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.491 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.491 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.491 22:48:46 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.491 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.491 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.491 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.491 22:48:46 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.491 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.491 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.491 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.491 22:48:46 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.491 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.491 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.491 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 
00:03:14.491 22:48:46 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.491 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.491 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.491 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.491 22:48:46 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.491 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.491 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.491 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.491 22:48:46 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.491 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.491 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.491 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.491 22:48:46 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.491 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.491 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.491 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.491 22:48:46 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.491 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.491 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.491 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.491 22:48:46 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.491 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.491 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.491 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.491 22:48:46 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.492 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.492 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.492 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.492 22:48:46 -- 
setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.492 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.492 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.492 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.492 22:48:46 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.492 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.492 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.492 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.492 22:48:46 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.492 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.492 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.492 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.492 22:48:46 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.492 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.492 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.492 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.492 22:48:46 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.492 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.492 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.492 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.492 22:48:46 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.492 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.492 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.492 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.492 22:48:46 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.492 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.492 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.492 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.492 22:48:46 -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.492 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.492 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.492 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.492 22:48:46 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.492 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.492 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.492 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.492 22:48:46 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.492 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.492 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.492 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.492 22:48:46 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.492 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.492 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.492 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.492 22:48:46 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.492 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.492 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.492 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.492 22:48:46 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.492 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.492 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.492 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.492 22:48:46 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.492 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.492 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.492 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.492 22:48:46 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:03:14.492 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.492 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.492 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.492 22:48:46 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.492 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.492 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.492 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.492 22:48:46 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.492 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.492 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.492 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.492 22:48:46 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.492 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.492 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.492 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.492 22:48:46 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.492 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.492 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.492 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.492 22:48:46 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.492 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.492 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.492 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.492 22:48:46 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.492 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.492 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.492 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.492 22:48:46 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.492 22:48:46 -- 
setup/common.sh@32 -- # continue 00:03:14.492 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.492 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.492 22:48:46 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.492 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.492 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.492 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.492 22:48:46 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.492 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.492 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.492 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.492 22:48:46 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.492 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.492 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.492 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.492 22:48:46 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.492 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.492 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.492 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.492 22:48:46 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.492 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.492 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.492 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.492 22:48:46 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.492 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.492 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.492 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.492 22:48:46 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.492 22:48:46 -- 
setup/common.sh@32 -- # continue 00:03:14.492 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.492 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.492 22:48:46 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.492 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.492 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.492 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.492 22:48:46 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.492 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.492 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.492 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.492 22:48:46 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.492 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.492 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.492 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.492 22:48:46 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.492 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.492 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.492 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.492 22:48:46 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.492 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.492 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.492 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.492 22:48:46 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.492 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.492 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.492 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.492 22:48:46 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.492 22:48:46 -- setup/common.sh@32 -- 
# continue 00:03:14.492 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.492 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.492 22:48:46 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.492 22:48:46 -- setup/common.sh@33 -- # echo 0 00:03:14.492 22:48:46 -- setup/common.sh@33 -- # return 0 00:03:14.492 22:48:46 -- setup/hugepages.sh@100 -- # resv=0 00:03:14.492 22:48:46 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:14.492 nr_hugepages=1024 00:03:14.492 22:48:46 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:14.492 resv_hugepages=0 00:03:14.492 22:48:46 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:14.492 surplus_hugepages=0 00:03:14.492 22:48:46 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:14.492 anon_hugepages=0 00:03:14.492 22:48:46 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:14.492 22:48:46 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:14.492 22:48:46 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:14.492 22:48:46 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:14.492 22:48:46 -- setup/common.sh@18 -- # local node= 00:03:14.492 22:48:46 -- setup/common.sh@19 -- # local var val 00:03:14.492 22:48:46 -- setup/common.sh@20 -- # local mem_f mem 00:03:14.492 22:48:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:14.492 22:48:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:14.492 22:48:46 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:14.492 22:48:46 -- setup/common.sh@28 -- # mapfile -t mem 00:03:14.492 22:48:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:14.492 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.492 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.492 22:48:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 42386328 kB' 'MemAvailable: 46306364 kB' 
'Buffers: 2704 kB' 'Cached: 11820052 kB' 'SwapCached: 0 kB' 'Active: 8683364 kB' 'Inactive: 3676228 kB' 'Active(anon): 8293500 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676228 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540184 kB' 'Mapped: 174832 kB' 'Shmem: 7756664 kB' 'KReclaimable: 501436 kB' 'Slab: 1135776 kB' 'SReclaimable: 501436 kB' 'SUnreclaim: 634340 kB' 'KernelStack: 22384 kB' 'PageTables: 8612 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 9718432 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216740 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3116404 kB' 'DirectMap2M: 14395392 kB' 'DirectMap1G: 51380224 kB' 00:03:14.492 22:48:46 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.492 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.492 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.492 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.492 22:48:46 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.492 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.492 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.492 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.492 22:48:46 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.492 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.492 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.492 22:48:46 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:14.492 22:48:46 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.492 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.492 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.492 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.492 22:48:46 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.493 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.493 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.493 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.493 22:48:46 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.493 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.493 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.493 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.493 22:48:46 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.493 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.493 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.493 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.493 22:48:46 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.493 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.493 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.493 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.493 22:48:46 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.493 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.493 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.493 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.493 22:48:46 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.493 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.493 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.493 22:48:46 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:14.493 22:48:46 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.493 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.493 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.493 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.493 22:48:46 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.493 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.493 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.493 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.493 22:48:46 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.493 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.493 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.493 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.493 22:48:46 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.493 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.493 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.493 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.493 22:48:46 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.493 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.493 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.493 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.493 22:48:46 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.493 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.493 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.493 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.493 22:48:46 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.493 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.493 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.493 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 
00:03:14.493 22:48:46 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.493 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.493 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.493 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.493 22:48:46 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.493 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.493 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.493 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.493 22:48:46 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.493 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.493 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.493 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.493 22:48:46 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.493 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.493 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.493 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.493 22:48:46 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.493 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.493 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.493 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.493 22:48:46 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.493 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.493 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.493 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.493 22:48:46 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.493 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.493 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.493 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.493 22:48:46 -- 
setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.493 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.493 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.493 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.493 22:48:46 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.493 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.493 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.493 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.493 22:48:46 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.493 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.493 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.493 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.493 22:48:46 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.493 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.493 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.493 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.493 22:48:46 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.493 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.493 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.493 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.493 22:48:46 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.493 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.493 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.493 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.493 22:48:46 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.493 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.493 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.493 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.493 22:48:46 -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.493 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.493 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.493 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.493 22:48:46 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.493 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.493 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.493 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.493 22:48:46 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.493 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.493 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.493 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.493 22:48:46 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.493 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.493 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.493 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.493 22:48:46 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.493 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.493 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.493 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.493 22:48:46 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.493 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.493 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.493 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.493 22:48:46 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.493 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.493 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.493 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.493 22:48:46 -- 
setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.493 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.493 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.493 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.493 22:48:46 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.493 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.493 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.493 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.493 22:48:46 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.493 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.493 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.493 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.493 22:48:46 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.494 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.494 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.494 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.494 22:48:46 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.494 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.494 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.494 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.494 22:48:46 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.494 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.494 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.494 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.494 22:48:46 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.494 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.494 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.494 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.494 22:48:46 
-- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.494 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.494 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.494 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.494 22:48:46 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.494 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.494 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.494 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.494 22:48:46 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.494 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.494 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.494 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.494 22:48:46 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.494 22:48:46 -- setup/common.sh@33 -- # echo 1024 00:03:14.494 22:48:46 -- setup/common.sh@33 -- # return 0 00:03:14.494 22:48:46 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:14.494 22:48:46 -- setup/hugepages.sh@112 -- # get_nodes 00:03:14.494 22:48:46 -- setup/hugepages.sh@27 -- # local node 00:03:14.494 22:48:46 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:14.494 22:48:46 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:14.494 22:48:46 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:14.494 22:48:46 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:14.494 22:48:46 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:14.494 22:48:46 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:14.494 22:48:46 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:14.494 22:48:46 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:14.494 22:48:46 -- setup/hugepages.sh@117 -- # get_meminfo 
HugePages_Surp 0 00:03:14.494 22:48:46 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:14.494 22:48:46 -- setup/common.sh@18 -- # local node=0 00:03:14.494 22:48:46 -- setup/common.sh@19 -- # local var val 00:03:14.494 22:48:46 -- setup/common.sh@20 -- # local mem_f mem 00:03:14.494 22:48:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:14.494 22:48:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:14.494 22:48:46 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:14.494 22:48:46 -- setup/common.sh@28 -- # mapfile -t mem 00:03:14.494 22:48:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:14.494 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.494 22:48:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 26358748 kB' 'MemUsed: 6233336 kB' 'SwapCached: 0 kB' 'Active: 2298448 kB' 'Inactive: 275656 kB' 'Active(anon): 2138596 kB' 'Inactive(anon): 0 kB' 'Active(file): 159852 kB' 'Inactive(file): 275656 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2419972 kB' 'Mapped: 79632 kB' 'AnonPages: 157320 kB' 'Shmem: 1984464 kB' 'KernelStack: 12808 kB' 'PageTables: 3544 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 161576 kB' 'Slab: 438692 kB' 'SReclaimable: 161576 kB' 'SUnreclaim: 277116 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:14.494 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.494 22:48:46 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.494 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.494 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.494 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.494 22:48:46 -- 
setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.494 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.494 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.494 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.494 22:48:46 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.494 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.494 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.494 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.494 22:48:46 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.494 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.494 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.494 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.494 22:48:46 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.494 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.494 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.494 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.494 22:48:46 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.494 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.494 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.494 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.494 22:48:46 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.494 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.494 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.494 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.494 22:48:46 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.494 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.494 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.494 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.494 22:48:46 -- setup/common.sh@32 -- # [[ 
Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.494 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.494 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.494 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.494 22:48:46 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.494 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.494 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.494 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.494 22:48:46 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.494 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.494 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.494 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.494 22:48:46 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.494 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.494 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.494 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.494 22:48:46 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.494 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.494 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.494 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.494 22:48:46 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.494 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.494 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.494 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.494 22:48:46 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.494 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.494 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.494 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.494 22:48:46 -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.494 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.494 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.494 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.494 22:48:46 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.494 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.494 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.494 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.494 22:48:46 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.494 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.494 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.494 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.494 22:48:46 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.494 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.494 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.494 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.494 22:48:46 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.494 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.494 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.494 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.494 22:48:46 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.494 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.494 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.494 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.494 22:48:46 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.494 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.494 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.494 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.494 22:48:46 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
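The wrapped xtrace above is `setup/common.sh`'s `get_meminfo` walking every `key: value` pair of `/sys/devices/system/node/node0/meminfo`, `continue`-ing until the key equals `HugePages_Surp`, then echoing that field's value. A minimal standalone sketch of that parsing pattern (a simplification, not the SPDK script itself — the real script slurps the file with `mapfile` and strips the `Node N` prefix with an extglob):

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo pattern seen in the log: scan "key: value" lines,
# echo the value of the requested key. Handles the "Node N " prefix that
# per-node meminfo files carry (single-digit node IDs, enough for a sketch).
get_meminfo_sketch() {
    local get=$1 mem_f=$2 line var val _
    while IFS= read -r line; do
        line=${line#Node [0-9] }              # drop per-node prefix if present
        IFS=': ' read -r var val _ <<< "$line"  # split on ':' and whitespace
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < "$mem_f"
    return 1                                   # key not found
}

# Demo against a fabricated per-node meminfo snippet.
sample=$(mktemp)
cat > "$sample" <<'EOF'
Node 0 MemTotal:       32592084 kB
Node 0 HugePages_Total:     1024
Node 0 HugePages_Surp:         0
EOF
get_meminfo_sketch HugePages_Surp "$sample"   # prints "0"
rm -f "$sample"
```

The real script's `[[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] ... continue` lines in the log are exactly this loop's non-matching iterations, which is why the trace repeats once per meminfo field.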
00:03:14.494 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.494 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.494 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.494 22:48:46 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.494 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.494 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.494 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.494 22:48:46 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.494 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.494 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.494 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.494 22:48:46 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.494 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.494 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.494 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.494 22:48:46 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.494 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.494 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.494 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.494 22:48:46 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.494 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.494 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.494 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.494 22:48:46 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.494 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.494 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.494 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.494 22:48:46 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.494 22:48:46 -- 
setup/common.sh@32 -- # continue 00:03:14.494 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.494 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.494 22:48:46 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.494 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.494 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.494 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.494 22:48:46 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.494 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.494 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.494 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.494 22:48:46 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.494 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.494 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.494 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.494 22:48:46 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.494 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.494 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.494 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.494 22:48:46 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.494 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.494 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.494 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.494 22:48:46 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.494 22:48:46 -- setup/common.sh@32 -- # continue 00:03:14.494 22:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.495 22:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.495 22:48:46 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.495 22:48:46 -- 
setup/common.sh@33 -- # echo 0 00:03:14.495 22:48:46 -- setup/common.sh@33 -- # return 0 00:03:14.495 22:48:46 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:14.495 22:48:46 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:14.495 22:48:46 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:14.495 22:48:46 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:14.495 22:48:46 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:14.495 node0=1024 expecting 1024 00:03:14.495 22:48:46 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:14.495 00:03:14.495 real 0m5.194s 00:03:14.495 user 0m1.410s 00:03:14.495 sys 0m2.361s 00:03:14.495 22:48:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:14.495 22:48:46 -- common/autotest_common.sh@10 -- # set +x 00:03:14.495 ************************************ 00:03:14.495 END TEST default_setup 00:03:14.495 ************************************ 00:03:14.495 22:48:46 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:14.495 22:48:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:14.495 22:48:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:14.495 22:48:46 -- common/autotest_common.sh@10 -- # set +x 00:03:14.495 ************************************ 00:03:14.495 START TEST per_node_1G_alloc 00:03:14.495 ************************************ 00:03:14.495 22:48:46 -- common/autotest_common.sh@1104 -- # per_node_1G_alloc 00:03:14.495 22:48:46 -- setup/hugepages.sh@143 -- # local IFS=, 00:03:14.495 22:48:46 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:14.495 22:48:46 -- setup/hugepages.sh@49 -- # local size=1048576 00:03:14.495 22:48:46 -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:14.495 22:48:46 -- setup/hugepages.sh@51 -- # shift 00:03:14.495 22:48:46 -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:14.495 22:48:46 -- setup/hugepages.sh@52 -- # 
local node_ids 00:03:14.495 22:48:46 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:14.495 22:48:46 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:14.495 22:48:46 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:14.495 22:48:46 -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:14.495 22:48:46 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:14.495 22:48:46 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:14.495 22:48:46 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:14.495 22:48:46 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:14.495 22:48:46 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:14.495 22:48:46 -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:14.495 22:48:46 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:14.495 22:48:46 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:14.495 22:48:46 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:14.495 22:48:46 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:14.495 22:48:46 -- setup/hugepages.sh@73 -- # return 0 00:03:14.495 22:48:46 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:14.495 22:48:46 -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:14.495 22:48:46 -- setup/hugepages.sh@146 -- # setup output 00:03:14.495 22:48:46 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:14.495 22:48:46 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:17.845 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:17.845 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:17.845 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:17.845 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:17.845 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:17.845 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:17.845 0000:00:04.1 (8086 2021): 
Already using the vfio-pci driver 00:03:17.845 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:17.845 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:17.845 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:17.845 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:17.845 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:17.845 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:18.108 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:18.108 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:18.108 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:18.108 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:18.108 22:48:50 -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:18.108 22:48:50 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:18.108 22:48:50 -- setup/hugepages.sh@89 -- # local node 00:03:18.108 22:48:50 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:18.108 22:48:50 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:18.108 22:48:50 -- setup/hugepages.sh@92 -- # local surp 00:03:18.108 22:48:50 -- setup/hugepages.sh@93 -- # local resv 00:03:18.108 22:48:50 -- setup/hugepages.sh@94 -- # local anon 00:03:18.108 22:48:50 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:18.108 22:48:50 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:18.108 22:48:50 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:18.108 22:48:50 -- setup/common.sh@18 -- # local node= 00:03:18.108 22:48:50 -- setup/common.sh@19 -- # local var val 00:03:18.108 22:48:50 -- setup/common.sh@20 -- # local mem_f mem 00:03:18.108 22:48:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:18.108 22:48:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:18.108 22:48:50 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:18.108 22:48:50 -- 
setup/common.sh@28 -- # mapfile -t mem 00:03:18.108 22:48:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:18.108 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.108 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.108 22:48:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 42381524 kB' 'MemAvailable: 46301560 kB' 'Buffers: 2704 kB' 'Cached: 11820168 kB' 'SwapCached: 0 kB' 'Active: 8681240 kB' 'Inactive: 3676228 kB' 'Active(anon): 8291376 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676228 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537856 kB' 'Mapped: 173728 kB' 'Shmem: 7756780 kB' 'KReclaimable: 501436 kB' 'Slab: 1135868 kB' 'SReclaimable: 501436 kB' 'SUnreclaim: 634432 kB' 'KernelStack: 22128 kB' 'PageTables: 8116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 9706512 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216724 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3116404 kB' 'DirectMap2M: 14395392 kB' 'DirectMap1G: 51380224 kB' 00:03:18.108 22:48:50 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.108 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.108 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.108 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.108 22:48:50 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.108 22:48:50 -- setup/common.sh@32 -- # 
continue 00:03:18.108 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.108 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.108 22:48:50 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.108 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.108 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.108 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.109 22:48:50 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.109 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.109 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.109 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.109 22:48:50 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.109 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.109 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.109 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.109 22:48:50 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.109 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.109 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.109 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.109 22:48:50 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.109 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.109 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.109 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.109 22:48:50 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.109 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.109 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.109 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.109 22:48:50 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.109 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.109 22:48:50 -- setup/common.sh@31 -- 
# IFS=': ' 00:03:18.109 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.109 22:48:50 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.109 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.109 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.109 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.109 22:48:50 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.109 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.109 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.109 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.109 22:48:50 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.109 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.109 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.109 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.109 22:48:50 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.109 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.109 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.109 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.109 22:48:50 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.109 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.109 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.109 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.109 22:48:50 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.109 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.109 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.109 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.109 22:48:50 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.109 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.109 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.109 22:48:50 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:18.109 22:48:50 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.109 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.109 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.109 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.109 22:48:50 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.109 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.109 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.109 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.109 22:48:50 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.109 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.109 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.109 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.109 22:48:50 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.109 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.109 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.109 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.109 22:48:50 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.109 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.109 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.109 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.109 22:48:50 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.109 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.109 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.109 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.109 22:48:50 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.109 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.109 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.109 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.109 22:48:50 -- 
setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.109 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.109 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.109 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.109 22:48:50 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.109 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.109 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.109 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.109 22:48:50 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.109 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.109 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.109 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.109 22:48:50 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.109 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.109 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.109 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.109 22:48:50 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.109 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.109 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.109 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.109 22:48:50 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.109 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.109 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.109 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.109 22:48:50 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.109 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.109 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.109 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.109 22:48:50 -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.109 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.109 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.109 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.109 22:48:50 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.109 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.109 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.109 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.109 22:48:50 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.109 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.109 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.109 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.109 22:48:50 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.109 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.109 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.109 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.109 22:48:50 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.109 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.109 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.109 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.109 22:48:50 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.109 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.109 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.109 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.109 22:48:50 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.109 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.109 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.109 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.109 22:48:50 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
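The trace in this stretch is `verify_nr_hugepages` scanning for `AnonHugePages`, guarded earlier by `[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]` — i.e. anonymous THP usage is only measured when `/sys/kernel/mm/transparent_hugepage/enabled` is not pinned to `never`. A hedged sketch of that bracket check, using a literal string instead of the sysfs file so it runs anywhere (`thp_mode` is an illustrative helper name, not part of SPDK):

```shell
#!/usr/bin/env bash
# The active THP mode is the bracketed token in the sysfs "enabled" file,
# e.g. "always [madvise] never" -> madvise.
thp_mode() {
    local setting=$1
    local active=${setting#*\[}   # strip everything up to and including '['
    echo "${active%%\]*}"         # strip from ']' onward
}

setting="always [madvise] never"   # stand-in for the sysfs file's contents
if [[ $setting != *"[never]"* ]]; then
    # Same shape as the log's guard: only then is AnonHugePages worth reading.
    echo "anon THP possible, mode: $(thp_mode "$setting")"
fi
```

Here the guard passes and the active mode is `madvise`; with `[never]` active the script sets `anon=0` without scanning, which is why the log shows `anon=0` shortly after this loop.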
00:03:18.109 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.109 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.109 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.109 22:48:50 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.109 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.109 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.109 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.109 22:48:50 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.109 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.109 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.109 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.109 22:48:50 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.109 22:48:50 -- setup/common.sh@33 -- # echo 0 00:03:18.109 22:48:50 -- setup/common.sh@33 -- # return 0 00:03:18.109 22:48:50 -- setup/hugepages.sh@97 -- # anon=0 00:03:18.109 22:48:50 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:18.109 22:48:50 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:18.109 22:48:50 -- setup/common.sh@18 -- # local node= 00:03:18.109 22:48:50 -- setup/common.sh@19 -- # local var val 00:03:18.110 22:48:50 -- setup/common.sh@20 -- # local mem_f mem 00:03:18.110 22:48:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:18.110 22:48:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:18.110 22:48:50 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:18.110 22:48:50 -- setup/common.sh@28 -- # mapfile -t mem 00:03:18.110 22:48:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:18.110 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.110 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.110 22:48:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 42383628 kB' 'MemAvailable: 46303664 kB' 
'Buffers: 2704 kB' 'Cached: 11820176 kB' 'SwapCached: 0 kB' 'Active: 8681380 kB' 'Inactive: 3676228 kB' 'Active(anon): 8291516 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676228 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537992 kB' 'Mapped: 173728 kB' 'Shmem: 7756788 kB' 'KReclaimable: 501436 kB' 'Slab: 1135824 kB' 'SReclaimable: 501436 kB' 'SUnreclaim: 634388 kB' 'KernelStack: 22080 kB' 'PageTables: 7952 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 9706660 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216676 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3116404 kB' 'DirectMap2M: 14395392 kB' 'DirectMap1G: 51380224 kB' 00:03:18.110 22:48:50 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.110 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.110 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.110 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.110 22:48:50 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.110 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.110 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.110 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.110 22:48:50 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.110 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.110 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.110 22:48:50 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:18.110 22:48:50 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.110 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.110 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.110 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.110 22:48:50 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.110 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.110 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.110 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.110 22:48:50 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.110 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.110 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.110 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.110 22:48:50 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.110 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.110 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.110 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.110 22:48:50 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.110 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.110 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.110 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.110 22:48:50 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.110 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.110 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.110 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.110 22:48:50 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.110 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.110 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.110 22:48:50 -- setup/common.sh@31 -- # read -r var val 
_ 00:03:18.110 22:48:50 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.110 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.110 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.110 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.110 22:48:50 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.110 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.110 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.110 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.110 22:48:50 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.110 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.110 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.110 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.110 22:48:50 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.110 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.110 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.110 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.110 22:48:50 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.110 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.110 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.110 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.110 22:48:50 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.110 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.110 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.110 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.110 22:48:50 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.110 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.110 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.110 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.110 22:48:50 -- 
setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.110 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.110 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.110 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.110 22:48:50 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.110 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.110 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.110 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.110 22:48:50 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.110 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.110 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.110 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.110 22:48:50 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.110 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.110 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.110 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.110 22:48:50 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.110 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.110 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.110 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.110 22:48:50 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.110 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.110 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.110 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.110 22:48:50 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.110 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.110 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.110 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.110 22:48:50 -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.110 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.110 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.110 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.110 22:48:50 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.110 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.110 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.110 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.110 22:48:50 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.110 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.110 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.110 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.110 22:48:50 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.110 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.110 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.111 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.111 22:48:50 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.111 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.111 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.111 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.111 22:48:50 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.111 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.111 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.111 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.111 22:48:50 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.111 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.111 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.111 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.111 22:48:50 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:18.111 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.111 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.111 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.111 22:48:50 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.111 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.111 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.111 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.111 22:48:50 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.111 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.111 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.111 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.111 22:48:50 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.111 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.111 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.111 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.111 22:48:50 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.111 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.111 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.111 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.111 22:48:50 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.111 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.111 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.111 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.111 22:48:50 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.111 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.111 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.111 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.111 22:48:50 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.111 22:48:50 
-- setup/common.sh@32 -- # continue 00:03:18.111 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.111 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.111 22:48:50 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.111 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.111 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.111 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.111 22:48:50 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.111 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.111 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.111 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.111 22:48:50 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.111 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.111 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.111 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.111 22:48:50 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.111 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.111 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.111 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.111 22:48:50 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.111 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.111 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.111 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.111 22:48:50 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.111 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.111 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.111 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.111 22:48:50 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.111 22:48:50 -- 
setup/common.sh@32 -- # continue 00:03:18.111 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.111 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.111 22:48:50 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.111 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.111 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.111 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.111 22:48:50 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.111 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.111 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.111 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.111 22:48:50 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.111 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.111 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.111 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.111 22:48:50 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.111 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.111 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.111 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.111 22:48:50 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.111 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.111 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.111 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.111 22:48:50 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.111 22:48:50 -- setup/common.sh@33 -- # echo 0 00:03:18.111 22:48:50 -- setup/common.sh@33 -- # return 0 00:03:18.111 22:48:50 -- setup/hugepages.sh@99 -- # surp=0 00:03:18.111 22:48:50 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:18.111 22:48:50 -- setup/common.sh@17 -- # local 
get=HugePages_Rsvd 00:03:18.111 22:48:50 -- setup/common.sh@18 -- # local node= 00:03:18.111 22:48:50 -- setup/common.sh@19 -- # local var val 00:03:18.111 22:48:50 -- setup/common.sh@20 -- # local mem_f mem 00:03:18.111 22:48:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:18.111 22:48:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:18.111 22:48:50 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:18.111 22:48:50 -- setup/common.sh@28 -- # mapfile -t mem 00:03:18.111 22:48:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:18.111 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.111 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.111 22:48:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 42382372 kB' 'MemAvailable: 46302408 kB' 'Buffers: 2704 kB' 'Cached: 11820184 kB' 'SwapCached: 0 kB' 'Active: 8681800 kB' 'Inactive: 3676228 kB' 'Active(anon): 8291936 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676228 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538500 kB' 'Mapped: 173712 kB' 'Shmem: 7756796 kB' 'KReclaimable: 501436 kB' 'Slab: 1135876 kB' 'SReclaimable: 501436 kB' 'SUnreclaim: 634440 kB' 'KernelStack: 22224 kB' 'PageTables: 8476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 9707044 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216708 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3116404 kB' 'DirectMap2M: 
14395392 kB' 'DirectMap1G: 51380224 kB' 00:03:18.111 22:48:50 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.111 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.111 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.111 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.111 22:48:50 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.111 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.111 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.111 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.111 22:48:50 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.111 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.111 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.111 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.111 22:48:50 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.111 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.111 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.111 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.111 22:48:50 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.111 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.111 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.111 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.111 22:48:50 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.111 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.111 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.111 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.112 22:48:50 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.112 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.112 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.112 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 
00:03:18.112 22:48:50 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.112 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.112 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.112 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.112 22:48:50 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.112 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.112 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.112 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.112 22:48:50 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.112 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.112 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.112 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.112 22:48:50 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.112 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.112 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.112 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.112 22:48:50 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.112 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.112 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.112 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.112 22:48:50 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.112 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.112 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.112 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.112 22:48:50 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.112 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.112 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.112 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.112 22:48:50 -- 
setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.112 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.112 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.112 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.112 22:48:50 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.112 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.112 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.112 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.112 22:48:50 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.112 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.112 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.112 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.112 22:48:50 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.112 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.112 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.112 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.112 22:48:50 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.112 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.112 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.112 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.112 22:48:50 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.112 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.112 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.112 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.112 22:48:50 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.112 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.112 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.112 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.112 22:48:50 -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.112 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.112 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.112 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.112 22:48:50 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.112 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.112 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.112 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.112 22:48:50 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.112 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.112 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.112 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.112 22:48:50 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.112 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.112 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.112 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.112 22:48:50 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.112 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.112 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.112 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.112 22:48:50 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.112 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.112 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.112 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.112 22:48:50 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.112 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.112 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.112 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.112 22:48:50 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:03:18.112 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.112 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.112 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.112 22:48:50 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.112 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.112 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.112 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.112 22:48:50 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.112 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.112 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.112 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.112 22:48:50 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.112 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.112 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.112 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.112 22:48:50 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.112 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.112 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.112 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.112 22:48:50 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.112 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.112 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.112 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.112 22:48:50 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.112 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.112 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.112 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.112 22:48:50 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.112 22:48:50 -- 
setup/common.sh@32 -- # continue 00:03:18.112 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.112 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.112 22:48:50 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.112 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.112 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.112 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.112 22:48:50 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.112 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.112 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.112 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.112 22:48:50 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.112 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.112 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.112 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.112 22:48:50 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.113 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.113 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.113 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.113 22:48:50 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.113 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.113 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.113 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.113 22:48:50 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.113 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.113 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.113 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.113 22:48:50 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.113 22:48:50 -- 
setup/common.sh@32 -- # continue 00:03:18.113 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.113 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.113 22:48:50 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.113 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.113 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.113 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.113 22:48:50 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.113 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.113 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.113 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.113 22:48:50 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.113 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.113 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.113 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.113 22:48:50 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.113 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.113 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.113 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.113 22:48:50 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.113 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.113 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.113 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.113 22:48:50 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.113 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.113 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.113 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.113 22:48:50 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.113 22:48:50 -- setup/common.sh@32 -- 
# continue 00:03:18.113 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.113 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.113 22:48:50 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.113 22:48:50 -- setup/common.sh@33 -- # echo 0 00:03:18.113 22:48:50 -- setup/common.sh@33 -- # return 0 00:03:18.113 22:48:50 -- setup/hugepages.sh@100 -- # resv=0 00:03:18.113 22:48:50 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:18.113 nr_hugepages=1024 00:03:18.113 22:48:50 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:18.113 resv_hugepages=0 00:03:18.113 22:48:50 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:18.113 surplus_hugepages=0 00:03:18.113 22:48:50 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:18.113 anon_hugepages=0 00:03:18.113 22:48:50 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:18.113 22:48:50 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:18.113 22:48:50 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:18.113 22:48:50 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:18.113 22:48:50 -- setup/common.sh@18 -- # local node= 00:03:18.113 22:48:50 -- setup/common.sh@19 -- # local var val 00:03:18.113 22:48:50 -- setup/common.sh@20 -- # local mem_f mem 00:03:18.113 22:48:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:18.113 22:48:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:18.113 22:48:50 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:18.113 22:48:50 -- setup/common.sh@28 -- # mapfile -t mem 00:03:18.113 22:48:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:18.113 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.113 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.113 22:48:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 42381616 kB' 'MemAvailable: 46301644 kB' 
'Buffers: 2704 kB' 'Cached: 11820212 kB' 'SwapCached: 0 kB' 'Active: 8681084 kB' 'Inactive: 3676228 kB' 'Active(anon): 8291220 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676228 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537800 kB' 'Mapped: 173712 kB' 'Shmem: 7756824 kB' 'KReclaimable: 501428 kB' 'Slab: 1135868 kB' 'SReclaimable: 501428 kB' 'SUnreclaim: 634440 kB' 'KernelStack: 22160 kB' 'PageTables: 8248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 9707056 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216676 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3116404 kB' 'DirectMap2M: 14395392 kB' 'DirectMap1G: 51380224 kB' 00:03:18.113 22:48:50 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.113 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.113 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.113 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.113 22:48:50 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.113 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.113 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.113 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.113 22:48:50 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.113 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.113 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.113 22:48:50 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:18.113 22:48:50 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.113 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.113 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.113 22:48:50 -- setup/common.sh@31 -- # read -r var val _ [... identical skip/continue iterations elided for the remaining /proc/meminfo keys (Cached through Unaccepted) ...] 00:03:18.115 22:48:50 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.115 22:48:50 -- setup/common.sh@33 -- # echo 1024 00:03:18.115 22:48:50 -- setup/common.sh@33 -- # return 0 00:03:18.115 22:48:50 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:18.115 22:48:50 -- setup/hugepages.sh@112 -- # get_nodes 00:03:18.115 22:48:50 -- setup/hugepages.sh@27 -- # local node 00:03:18.445 22:48:50 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:18.445 22:48:50 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:18.445 22:48:50 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:18.445 22:48:50 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:18.445 22:48:50 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:18.445 22:48:50 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:18.445 22:48:50 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:18.445 22:48:50 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:18.445 22:48:50 -- setup/hugepages.sh@117 -- # get_meminfo
HugePages_Surp 0 00:03:18.445 22:48:50 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:18.445 22:48:50 -- setup/common.sh@18 -- # local node=0 00:03:18.445 22:48:50 -- setup/common.sh@19 -- # local var val 00:03:18.445 22:48:50 -- setup/common.sh@20 -- # local mem_f mem 00:03:18.445 22:48:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:18.445 22:48:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:18.445 22:48:50 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:18.445 22:48:50 -- setup/common.sh@28 -- # mapfile -t mem 00:03:18.445 22:48:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:18.445 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.445 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.445 22:48:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 27414676 kB' 'MemUsed: 5177408 kB' 'SwapCached: 0 kB' 'Active: 2297680 kB' 'Inactive: 275656 kB' 'Active(anon): 2137828 kB' 'Inactive(anon): 0 kB' 'Active(file): 159852 kB' 'Inactive(file): 275656 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2420052 kB' 'Mapped: 78776 kB' 'AnonPages: 156520 kB' 'Shmem: 1984544 kB' 'KernelStack: 12808 kB' 'PageTables: 3564 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 161568 kB' 'Slab: 438760 kB' 'SReclaimable: 161568 kB' 'SUnreclaim: 277192 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:18.445 22:48:50 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.445 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.445 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.445 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.445 22:48:50 -- 
setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.445 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.445 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.445 22:48:50 -- setup/common.sh@31 -- # read -r var val _ [... identical skip/continue iterations elided for the remaining node0 meminfo keys (MemUsed through HugePages_Free) ...] 00:03:18.446 22:48:50 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.446 22:48:50 --
setup/common.sh@33 -- # echo 0 00:03:18.446 22:48:50 -- setup/common.sh@33 -- # return 0 00:03:18.446 22:48:50 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:18.446 22:48:50 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:18.446 22:48:50 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:18.446 22:48:50 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:18.446 22:48:50 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:18.446 22:48:50 -- setup/common.sh@18 -- # local node=1 00:03:18.446 22:48:50 -- setup/common.sh@19 -- # local var val 00:03:18.446 22:48:50 -- setup/common.sh@20 -- # local mem_f mem 00:03:18.446 22:48:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:18.446 22:48:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:18.446 22:48:50 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:18.446 22:48:50 -- setup/common.sh@28 -- # mapfile -t mem 00:03:18.446 22:48:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:18.446 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.446 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.446 22:48:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27703108 kB' 'MemFree: 14966732 kB' 'MemUsed: 12736376 kB' 'SwapCached: 0 kB' 'Active: 6388700 kB' 'Inactive: 3400572 kB' 'Active(anon): 6158688 kB' 'Inactive(anon): 0 kB' 'Active(file): 230012 kB' 'Inactive(file): 3400572 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9402868 kB' 'Mapped: 95440 kB' 'AnonPages: 387088 kB' 'Shmem: 5772284 kB' 'KernelStack: 9352 kB' 'PageTables: 4688 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 339860 kB' 'Slab: 697108 kB' 'SReclaimable: 339860 kB' 'SUnreclaim: 357248 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 
kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:18.446 22:48:50 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.446 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.446 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.446 22:48:50 -- setup/common.sh@31 -- # read -r var val _ [... identical skip/continue iterations elided for the remaining node1 meminfo keys (MemFree through HugePages_Total) ...] 00:03:18.447 22:48:50 -- setup/common.sh@32 -- # [[
HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.447 22:48:50 -- setup/common.sh@32 -- # continue 00:03:18.447 22:48:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.447 22:48:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.447 22:48:50 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.447 22:48:50 -- setup/common.sh@33 -- # echo 0 00:03:18.447 22:48:50 -- setup/common.sh@33 -- # return 0 00:03:18.447 22:48:50 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:18.447 22:48:50 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:18.447 22:48:50 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:18.447 22:48:50 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:18.447 22:48:50 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:18.447 node0=512 expecting 512 00:03:18.447 22:48:50 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:18.447 22:48:50 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:18.447 22:48:50 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:18.447 22:48:50 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:18.447 node1=512 expecting 512 00:03:18.447 22:48:50 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:18.447 00:03:18.447 real 0m3.711s 00:03:18.447 user 0m1.391s 00:03:18.447 sys 0m2.388s 00:03:18.447 22:48:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:18.447 22:48:50 -- common/autotest_common.sh@10 -- # set +x 00:03:18.447 ************************************ 00:03:18.447 END TEST per_node_1G_alloc 00:03:18.447 ************************************ 00:03:18.447 22:48:50 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:18.447 22:48:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:18.447 22:48:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:18.447 22:48:50 -- 
common/autotest_common.sh@10 -- # set +x 00:03:18.447 ************************************ 00:03:18.447 START TEST even_2G_alloc 00:03:18.447 ************************************ 00:03:18.447 22:48:50 -- common/autotest_common.sh@1104 -- # even_2G_alloc 00:03:18.447 22:48:50 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:18.447 22:48:50 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:18.447 22:48:50 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:18.447 22:48:50 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:18.447 22:48:50 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:18.447 22:48:50 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:18.447 22:48:50 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:18.447 22:48:50 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:18.447 22:48:50 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:18.447 22:48:50 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:18.447 22:48:50 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:18.447 22:48:50 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:18.447 22:48:50 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:18.447 22:48:50 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:18.447 22:48:50 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:18.447 22:48:50 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:18.447 22:48:50 -- setup/hugepages.sh@83 -- # : 512 00:03:18.447 22:48:50 -- setup/hugepages.sh@84 -- # : 1 00:03:18.447 22:48:50 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:18.447 22:48:50 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:18.447 22:48:50 -- setup/hugepages.sh@83 -- # : 0 00:03:18.447 22:48:50 -- setup/hugepages.sh@84 -- # : 0 00:03:18.447 22:48:50 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:18.447 22:48:50 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:18.447 22:48:50 -- setup/hugepages.sh@153 -- # 
HUGE_EVEN_ALLOC=yes 00:03:18.447 22:48:50 -- setup/hugepages.sh@153 -- # setup output 00:03:18.447 22:48:50 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:18.447 22:48:50 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:21.833 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:21.833 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:21.833 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:21.833 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:21.834 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:21.834 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:21.834 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:21.834 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:21.834 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:21.834 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:21.834 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:21.834 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:21.834 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:21.834 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:21.834 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:21.834 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:21.834 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:21.834 22:48:54 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:21.834 22:48:54 -- setup/hugepages.sh@89 -- # local node 00:03:21.834 22:48:54 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:21.834 22:48:54 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:21.834 22:48:54 -- setup/hugepages.sh@92 -- # local surp 00:03:21.834 22:48:54 -- setup/hugepages.sh@93 -- # local resv 00:03:21.834 22:48:54 -- setup/hugepages.sh@94 -- # local anon 00:03:21.834 22:48:54 -- 
setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:21.834 22:48:54 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:21.834 22:48:54 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:21.834 22:48:54 -- setup/common.sh@18 -- # local node= 00:03:21.834 22:48:54 -- setup/common.sh@19 -- # local var val 00:03:21.834 22:48:54 -- setup/common.sh@20 -- # local mem_f mem 00:03:21.834 22:48:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:21.834 22:48:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:21.834 22:48:54 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:21.834 22:48:54 -- setup/common.sh@28 -- # mapfile -t mem 00:03:21.834 22:48:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:21.834 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.834 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.834 22:48:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 42397524 kB' 'MemAvailable: 46317520 kB' 'Buffers: 2704 kB' 'Cached: 11820300 kB' 'SwapCached: 0 kB' 'Active: 8682968 kB' 'Inactive: 3676228 kB' 'Active(anon): 8293104 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676228 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 539028 kB' 'Mapped: 173820 kB' 'Shmem: 7756912 kB' 'KReclaimable: 501396 kB' 'Slab: 1135948 kB' 'SReclaimable: 501396 kB' 'SUnreclaim: 634552 kB' 'KernelStack: 22208 kB' 'PageTables: 8392 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 9707668 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216772 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 
'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3116404 kB' 'DirectMap2M: 14395392 kB' 'DirectMap1G: 51380224 kB' 00:03:21.834 22:48:54 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.834 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.834 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.834 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.834 22:48:54 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.835 22:48:54 -- setup/common.sh@33 -- # echo 0 00:03:21.835 22:48:54 -- setup/common.sh@33 -- # return 0 00:03:21.835 22:48:54 -- setup/hugepages.sh@97 -- # anon=0 00:03:21.835 22:48:54 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:21.835 22:48:54 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:21.835 22:48:54 -- setup/common.sh@18 -- # local node= 00:03:21.835 22:48:54 -- setup/common.sh@19 -- # local var val 
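
The xtrace above is setup/common.sh's get_meminfo scanning /proc/meminfo with `IFS=': ' read -r var val _`, hitting `continue` on every non-matching key until the requested one (here AnonHugePages, next HugePages_Surp) is found, then echoing its value. A minimal standalone sketch of that lookup pattern, with an illustrative function name and sample data rather than SPDK's actual helpers:

```shell
# Hedged sketch of the meminfo key lookup traced above (names illustrative).
# Splits each "Key: value [unit]" line on ':' and spaces; emits the value
# for the requested key, or 0 if the key never appears.
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    echo 0
}

sample='HugePages_Total: 1024
HugePages_Free: 1024
HugePages_Surp: 0
Hugepagesize: 2048 kB'

get_meminfo_sketch HugePages_Surp <<<"$sample"   # prints: 0
```

Against the live system the input would be `< /proc/meminfo` instead of the here-string; the `_` field quietly absorbs the trailing "kB" unit, which is why the trace's `read -r var val _` uses three variables.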
00:03:21.835 22:48:54 -- setup/common.sh@20 -- # local mem_f mem 00:03:21.835 22:48:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:21.835 22:48:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:21.835 22:48:54 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:21.835 22:48:54 -- setup/common.sh@28 -- # mapfile -t mem 00:03:21.835 22:48:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:21.835 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.835 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.835 22:48:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 42398280 kB' 'MemAvailable: 46318276 kB' 'Buffers: 2704 kB' 'Cached: 11820304 kB' 'SwapCached: 0 kB' 'Active: 8683280 kB' 'Inactive: 3676228 kB' 'Active(anon): 8293416 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676228 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 539360 kB' 'Mapped: 173820 kB' 'Shmem: 7756916 kB' 'KReclaimable: 501396 kB' 'Slab: 1135932 kB' 'SReclaimable: 501396 kB' 'SUnreclaim: 634536 kB' 'KernelStack: 22192 kB' 'PageTables: 8336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 9707680 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216724 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3116404 kB' 'DirectMap2M: 14395392 kB' 'DirectMap1G: 51380224 kB' 00:03:21.835 22:48:54 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
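
Earlier in this run, get_test_nr_hugepages_per_node divided nr_hugepages=1024 evenly across the two NUMA nodes (the repeated `nodes_test[_no_nodes - 1]=512` assignments, later confirmed by `node0=512 expecting 512` and `node1=512 expecting 512`). A hedged sketch of that even split, using illustrative variable names rather than the script's own:

```shell
# Even split of the requested hugepage count across NUMA nodes, as the
# even_2G_alloc prologue traces for 2 nodes; values mirror the log above.
nr_hugepages=1024
no_nodes=2
per_node=$(( nr_hugepages / no_nodes ))   # 512 per node

nodes_test=()
for (( node = 0; node < no_nodes; node++ )); do
    nodes_test[node]=$per_node
done

echo "node0=${nodes_test[0]} expecting 512"   # prints: node0=512 expecting 512
echo "node1=${nodes_test[1]} expecting 512"   # prints: node1=512 expecting 512
```

The verify_nr_hugepages pass that follows in the log then re-reads the per-node counters and checks each against this expected 512.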
00:03:21.835 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.835 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.835 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.836 22:48:54 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.836 22:48:54 -- setup/common.sh@33 -- # echo 0 00:03:21.836 22:48:54 -- setup/common.sh@33 -- # return 0 00:03:21.836 22:48:54 -- setup/hugepages.sh@99 -- # surp=0 00:03:21.836 22:48:54 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:21.836 22:48:54 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:21.836 22:48:54 -- setup/common.sh@18 -- # local node= 00:03:21.836 22:48:54 -- setup/common.sh@19 -- # local var val 00:03:21.836 22:48:54 -- setup/common.sh@20 -- # local mem_f mem 00:03:21.836 22:48:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:21.836 22:48:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:21.836 22:48:54 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:21.836 22:48:54 -- setup/common.sh@28 -- # mapfile -t mem 00:03:21.836 22:48:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:21.837 22:48:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 42398636 kB' 'MemAvailable: 46318632 kB' 'Buffers: 2704 kB' 'Cached: 11820316 kB' 'SwapCached: 0 kB' 'Active: 8682228 kB' 'Inactive: 3676228 kB' 'Active(anon): 8292364 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676228 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538752 kB' 'Mapped: 173720 kB' 'Shmem: 7756928 kB' 'KReclaimable: 501396 kB' 'Slab: 1135920 kB' 'SReclaimable: 501396 kB' 'SUnreclaim: 634524 kB' 'KernelStack: 22192 kB' 'PageTables: 8320 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 
0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 9707696 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216724 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3116404 kB' 'DirectMap2M: 14395392 kB' 'DirectMap1G: 51380224 kB' 00:03:21.837 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.837 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.837 22:48:54 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.837 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.837 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.837 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.837 22:48:54 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.837 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.837 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.837 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.837 22:48:54 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.837 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.837 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.837 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.837 22:48:54 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.837 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.837 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.837 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.837 22:48:54 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.837 22:48:54 -- setup/common.sh@32 -- # continue 
00:03:21.837 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.837 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.837 22:48:54 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.837 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.837 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.837 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.837 22:48:54 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.837 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.837 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.837 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.837 22:48:54 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.837 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.837 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.837 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.837 22:48:54 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.837 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.837 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.837 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.837 22:48:54 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.837 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.837 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.837 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.837 22:48:54 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.837 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.837 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.837 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.837 22:48:54 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.837 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.837 22:48:54 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:21.837 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.837 22:48:54 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.837 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.837 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.837 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.837 22:48:54 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.837 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.837 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.837 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.837 22:48:54 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.837 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.837 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.837 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.837 22:48:54 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.837 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.837 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.837 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.837 22:48:54 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.837 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.837 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.837 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.837 22:48:54 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.837 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.837 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.837 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.837 22:48:54 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.837 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.837 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.837 
22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.837 22:48:54 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.837 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.837 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.837 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.837 22:48:54 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.837 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.837 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.837 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.837 22:48:54 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.837 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.837 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.837 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.837 22:48:54 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.837 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.837 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.837 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.837 22:48:54 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.837 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.837 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.837 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.837 22:48:54 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.837 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.837 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.837 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.837 22:48:54 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.837 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.837 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.837 22:48:54 -- setup/common.sh@31 -- # read -r 
var val _ 00:03:21.837 22:48:54 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.837 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.837 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.837 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.837 22:48:54 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.837 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.837 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.837 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.837 22:48:54 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.837 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.837 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.837 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.837 22:48:54 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.837 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.837 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.837 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.837 22:48:54 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.837 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.837 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.837 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.837 22:48:54 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.837 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.837 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.837 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.837 22:48:54 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.837 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.837 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.837 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.837 
22:48:54 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.837 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.837 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.837 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.837 22:48:54 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.837 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.837 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.837 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.837 22:48:54 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.837 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.837 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.837 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.837 22:48:54 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.837 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.837 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.837 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.837 22:48:54 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.837 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.837 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.837 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.837 22:48:54 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.837 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.837 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.837 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.838 22:48:54 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.838 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.838 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.838 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.838 22:48:54 -- 
setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.838 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.838 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.838 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.838 22:48:54 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.838 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.838 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.838 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.838 22:48:54 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.838 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.838 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.838 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.838 22:48:54 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.838 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.838 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.838 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.838 22:48:54 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.838 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.838 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.838 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.838 22:48:54 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.838 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.838 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.838 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.838 22:48:54 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.838 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.838 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.838 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.838 22:48:54 -- setup/common.sh@32 
-- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.838 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.838 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.838 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.838 22:48:54 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.838 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.838 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.838 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.838 22:48:54 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.838 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.838 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.838 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.838 22:48:54 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.838 22:48:54 -- setup/common.sh@33 -- # echo 0 00:03:21.838 22:48:54 -- setup/common.sh@33 -- # return 0 00:03:21.838 22:48:54 -- setup/hugepages.sh@100 -- # resv=0 00:03:21.838 22:48:54 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:21.838 nr_hugepages=1024 00:03:21.838 22:48:54 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:21.838 resv_hugepages=0 00:03:21.838 22:48:54 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:21.838 surplus_hugepages=0 00:03:21.838 22:48:54 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:21.838 anon_hugepages=0 00:03:21.838 22:48:54 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:21.838 22:48:54 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:21.838 22:48:54 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:21.838 22:48:54 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:21.838 22:48:54 -- setup/common.sh@18 -- # local node= 00:03:21.838 22:48:54 -- setup/common.sh@19 -- # local var val 00:03:21.838 22:48:54 -- 
setup/common.sh@20 -- # local mem_f mem 00:03:21.838 22:48:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:21.838 22:48:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:21.838 22:48:54 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:21.838 22:48:54 -- setup/common.sh@28 -- # mapfile -t mem 00:03:21.838 22:48:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:21.838 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.838 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.838 22:48:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 42398636 kB' 'MemAvailable: 46318632 kB' 'Buffers: 2704 kB' 'Cached: 11820328 kB' 'SwapCached: 0 kB' 'Active: 8681852 kB' 'Inactive: 3676228 kB' 'Active(anon): 8291988 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676228 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538352 kB' 'Mapped: 173720 kB' 'Shmem: 7756940 kB' 'KReclaimable: 501396 kB' 'Slab: 1135920 kB' 'SReclaimable: 501396 kB' 'SUnreclaim: 634524 kB' 'KernelStack: 22176 kB' 'PageTables: 8268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 9707708 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216724 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3116404 kB' 'DirectMap2M: 14395392 kB' 'DirectMap1G: 51380224 kB' 00:03:21.838 22:48:54 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.838 22:48:54 -- 
setup/common.sh@32 -- # continue 00:03:21.838 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.838 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.838 22:48:54 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.838 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.838 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.838 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.838 22:48:54 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.838 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.838 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.838 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.838 22:48:54 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.838 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.838 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.838 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.838 22:48:54 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.838 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.838 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.838 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.838 22:48:54 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.838 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.838 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.838 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.838 22:48:54 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.838 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.838 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.838 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.838 22:48:54 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.838 22:48:54 -- setup/common.sh@32 -- # continue 
00:03:21.838 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.838 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.838 22:48:54 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.838 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.838 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.838 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.838 22:48:54 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.838 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.838 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.838 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.839 22:48:54 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.839 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.839 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.839 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.839 22:48:54 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.839 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.839 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.839 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.839 22:48:54 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.839 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.839 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.839 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.839 22:48:54 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.839 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.839 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.839 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.839 22:48:54 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.839 22:48:54 -- setup/common.sh@32 -- # continue 
00:03:21.839 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.839 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.839 22:48:54 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.839 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.839 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.839 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.839 22:48:54 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.839 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.839 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.839 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.839 22:48:54 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.839 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.839 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.839 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.839 22:48:54 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.839 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.839 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.839 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.839 22:48:54 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.839 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.839 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.839 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.839 22:48:54 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.839 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.839 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.839 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.839 22:48:54 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.839 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.839 22:48:54 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:21.839 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.839 22:48:54 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.839 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.839 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.839 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.839 22:48:54 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.839 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.839 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.839 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.839 22:48:54 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.839 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.839 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.839 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.839 22:48:54 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.839 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.839 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.839 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.839 22:48:54 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.839 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.839 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.839 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.839 22:48:54 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.839 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.839 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.839 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.839 22:48:54 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.839 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.839 22:48:54 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:21.839 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.839 22:48:54 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.839 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.839 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.839 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.839 22:48:54 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.839 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.839 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.839 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.839 22:48:54 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.839 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.839 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.839 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.839 22:48:54 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.839 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.839 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.839 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.839 22:48:54 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.839 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.839 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.839 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.839 22:48:54 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.839 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.839 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.839 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.839 22:48:54 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.839 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.839 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 
00:03:21.839 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.839 22:48:54 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.839 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.839 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.839 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.839 22:48:54 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.839 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.839 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.839 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.839 22:48:54 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.839 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.839 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.839 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.839 22:48:54 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.839 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.839 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.839 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.839 22:48:54 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.839 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.839 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.839 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.839 22:48:54 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.839 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.839 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.839 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.839 22:48:54 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.839 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.839 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 
00:03:21.839 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.839 22:48:54 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.839 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.839 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.839 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.839 22:48:54 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.839 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.839 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.839 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.839 22:48:54 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.839 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.839 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.839 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.839 22:48:54 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.839 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.839 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.839 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.839 22:48:54 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.839 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.839 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.839 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.839 22:48:54 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.839 22:48:54 -- setup/common.sh@33 -- # echo 1024 00:03:21.839 22:48:54 -- setup/common.sh@33 -- # return 0 00:03:21.839 22:48:54 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:21.839 22:48:54 -- setup/hugepages.sh@112 -- # get_nodes 00:03:21.839 22:48:54 -- setup/hugepages.sh@27 -- # local node 00:03:21.839 22:48:54 -- setup/hugepages.sh@29 -- # for node in 
/sys/devices/system/node/node+([0-9]) 00:03:21.839 22:48:54 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:21.839 22:48:54 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:21.840 22:48:54 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:21.840 22:48:54 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:21.840 22:48:54 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:21.840 22:48:54 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:21.840 22:48:54 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:21.840 22:48:54 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:21.840 22:48:54 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:21.840 22:48:54 -- setup/common.sh@18 -- # local node=0 00:03:21.840 22:48:54 -- setup/common.sh@19 -- # local var val 00:03:21.840 22:48:54 -- setup/common.sh@20 -- # local mem_f mem 00:03:21.840 22:48:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:21.840 22:48:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:21.840 22:48:54 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:21.840 22:48:54 -- setup/common.sh@28 -- # mapfile -t mem 00:03:21.840 22:48:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:21.840 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.840 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.840 22:48:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 27413232 kB' 'MemUsed: 5178852 kB' 'SwapCached: 0 kB' 'Active: 2297988 kB' 'Inactive: 275656 kB' 'Active(anon): 2138136 kB' 'Inactive(anon): 0 kB' 'Active(file): 159852 kB' 'Inactive(file): 275656 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2420152 kB' 'Mapped: 78784 kB' 'AnonPages: 156624 kB' 'Shmem: 1984644 kB' 'KernelStack: 12808 kB' 'PageTables: 3516 kB' 
'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 161536 kB' 'Slab: 438692 kB' 'SReclaimable: 161536 kB' 'SUnreclaim: 277156 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:21.840 22:48:54 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.840 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.840 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.840 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.840 22:48:54 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.840 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.840 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.840 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.840 22:48:54 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.840 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.840 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.840 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.840 22:48:54 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.840 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.840 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.840 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.840 22:48:54 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.840 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.840 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.840 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.840 22:48:54 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.840 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.840 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.840 22:48:54 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:21.840 22:48:54 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.840 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.840 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.840 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.840 22:48:54 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.840 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.840 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.840 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.840 22:48:54 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.840 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.840 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.840 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.840 22:48:54 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.840 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.840 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.840 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.840 22:48:54 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.840 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.840 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.840 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.840 22:48:54 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.840 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.840 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.840 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.840 22:48:54 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.840 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.840 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.840 22:48:54 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:21.840 22:48:54 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.840 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.840 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.840 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.840 22:48:54 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.840 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.840 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.840 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.840 22:48:54 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.840 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.840 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.840 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.840 22:48:54 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.840 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.840 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.840 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.840 22:48:54 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.840 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.840 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.840 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.840 22:48:54 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.840 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.840 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.840 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.840 22:48:54 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.840 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.840 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.840 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.840 22:48:54 -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.840 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.840 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.840 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.840 22:48:54 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.840 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.840 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.840 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.840 22:48:54 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.840 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.840 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.840 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.840 22:48:54 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.840 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.840 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.840 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.840 22:48:54 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.840 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.840 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.840 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.840 22:48:54 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.840 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.840 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.840 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.840 22:48:54 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.840 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.840 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.840 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.840 22:48:54 -- setup/common.sh@32 -- # [[ 
SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.840 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.840 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.840 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.840 22:48:54 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.840 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.840 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.840 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.840 22:48:54 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.840 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.840 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.840 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.840 22:48:54 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.840 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.840 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.840 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.840 22:48:54 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.840 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.840 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.840 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.840 22:48:54 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.840 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.840 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.840 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.840 22:48:54 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.841 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.841 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.841 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.841 22:48:54 -- setup/common.sh@32 -- # [[ HugePages_Total 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.841 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.841 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.841 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.841 22:48:54 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.841 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.841 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.841 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.841 22:48:54 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.841 22:48:54 -- setup/common.sh@33 -- # echo 0 00:03:21.841 22:48:54 -- setup/common.sh@33 -- # return 0 00:03:21.841 22:48:54 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:21.841 22:48:54 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:21.841 22:48:54 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:21.841 22:48:54 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:21.841 22:48:54 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:21.841 22:48:54 -- setup/common.sh@18 -- # local node=1 00:03:21.841 22:48:54 -- setup/common.sh@19 -- # local var val 00:03:21.841 22:48:54 -- setup/common.sh@20 -- # local mem_f mem 00:03:21.841 22:48:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:21.841 22:48:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:21.841 22:48:54 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:21.841 22:48:54 -- setup/common.sh@28 -- # mapfile -t mem 00:03:21.841 22:48:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:21.841 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.841 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.841 22:48:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27703108 kB' 'MemFree: 14985152 kB' 'MemUsed: 12717956 kB' 'SwapCached: 0 kB' 
'Active: 6384256 kB' 'Inactive: 3400572 kB' 'Active(anon): 6154244 kB' 'Inactive(anon): 0 kB' 'Active(file): 230012 kB' 'Inactive(file): 3400572 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9402896 kB' 'Mapped: 94936 kB' 'AnonPages: 382128 kB' 'Shmem: 5772312 kB' 'KernelStack: 9384 kB' 'PageTables: 4804 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 339860 kB' 'Slab: 697228 kB' 'SReclaimable: 339860 kB' 'SUnreclaim: 357368 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:21.841 22:48:54 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.841 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.841 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.841 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.841 22:48:54 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.841 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.841 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.841 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.841 22:48:54 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.841 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.841 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.841 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.841 22:48:54 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.841 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.841 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.841 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.841 22:48:54 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.841 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.841 
22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.841 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.841 22:48:54 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.841 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.841 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.841 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.841 22:48:54 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.841 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.841 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.841 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.841 22:48:54 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.841 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.841 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.841 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.841 22:48:54 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.841 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.841 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.841 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.841 22:48:54 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.841 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.841 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.841 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.841 22:48:54 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.841 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.841 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.841 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.841 22:48:54 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.841 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.841 22:48:54 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:21.841 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.841 22:48:54 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.841 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.841 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.841 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.841 22:48:54 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.841 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.841 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.841 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.841 22:48:54 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.841 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.841 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.841 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.841 22:48:54 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.841 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.841 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.841 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.841 22:48:54 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.841 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.841 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.841 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.841 22:48:54 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.841 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.841 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.841 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.841 22:48:54 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.841 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.841 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.841 
22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.841 22:48:54 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.841 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.841 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.841 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.841 22:48:54 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.841 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.841 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.841 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.841 22:48:54 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.841 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.841 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.841 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.841 22:48:54 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.841 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.841 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.841 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.841 22:48:54 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.841 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.841 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.841 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.841 22:48:54 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.841 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.841 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.841 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.841 22:48:54 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.841 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.841 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.841 22:48:54 -- setup/common.sh@31 -- 
# read -r var val _ 00:03:21.841 22:48:54 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.841 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.841 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.841 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.841 22:48:54 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.841 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.841 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.841 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.841 22:48:54 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.841 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.841 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.841 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.841 22:48:54 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.841 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.841 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.841 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.841 22:48:54 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.841 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.842 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.842 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.842 22:48:54 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.842 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.842 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.842 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.842 22:48:54 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.842 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.842 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.842 22:48:54 -- setup/common.sh@31 -- # read -r var 
val _ 00:03:21.842 22:48:54 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.842 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.842 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.842 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.842 22:48:54 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.842 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.842 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.842 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.842 22:48:54 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.842 22:48:54 -- setup/common.sh@32 -- # continue 00:03:21.842 22:48:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.842 22:48:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.842 22:48:54 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.842 22:48:54 -- setup/common.sh@33 -- # echo 0 00:03:21.842 22:48:54 -- setup/common.sh@33 -- # return 0 00:03:21.842 22:48:54 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:21.842 22:48:54 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:21.842 22:48:54 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:21.842 22:48:54 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:21.842 22:48:54 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:21.842 node0=512 expecting 512 00:03:21.842 22:48:54 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:21.842 22:48:54 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:21.842 22:48:54 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:21.842 22:48:54 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:21.842 node1=512 expecting 512 00:03:21.842 22:48:54 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:21.842 00:03:21.842 real 
0m3.570s 00:03:21.842 user 0m1.382s 00:03:21.842 sys 0m2.258s 00:03:21.842 22:48:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:21.842 22:48:54 -- common/autotest_common.sh@10 -- # set +x 00:03:21.842 ************************************ 00:03:21.842 END TEST even_2G_alloc 00:03:21.842 ************************************ 00:03:21.842 22:48:54 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:21.842 22:48:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:21.842 22:48:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:21.842 22:48:54 -- common/autotest_common.sh@10 -- # set +x 00:03:21.842 ************************************ 00:03:21.842 START TEST odd_alloc 00:03:21.842 ************************************ 00:03:21.842 22:48:54 -- common/autotest_common.sh@1104 -- # odd_alloc 00:03:21.842 22:48:54 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:22.102 22:48:54 -- setup/hugepages.sh@49 -- # local size=2098176 00:03:22.102 22:48:54 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:22.102 22:48:54 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:22.102 22:48:54 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:22.102 22:48:54 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:22.102 22:48:54 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:22.102 22:48:54 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:22.102 22:48:54 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:22.102 22:48:54 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:22.102 22:48:54 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:22.102 22:48:54 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:22.102 22:48:54 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:22.102 22:48:54 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:22.102 22:48:54 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:22.102 22:48:54 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 
1]=512 00:03:22.102 22:48:54 -- setup/hugepages.sh@83 -- # : 513 00:03:22.102 22:48:54 -- setup/hugepages.sh@84 -- # : 1 00:03:22.102 22:48:54 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:22.102 22:48:54 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:22.102 22:48:54 -- setup/hugepages.sh@83 -- # : 0 00:03:22.102 22:48:54 -- setup/hugepages.sh@84 -- # : 0 00:03:22.102 22:48:54 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:22.102 22:48:54 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:22.102 22:48:54 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:22.102 22:48:54 -- setup/hugepages.sh@160 -- # setup output 00:03:22.102 22:48:54 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:22.102 22:48:54 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:25.397 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:25.397 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:25.397 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:25.397 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:25.398 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:25.398 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:25.398 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:25.398 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:25.398 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:25.398 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:25.398 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:25.398 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:25.398 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:25.398 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:25.398 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:25.398 0000:80:04.0 (8086 2021): Already using 
the vfio-pci driver 00:03:25.398 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:25.398 22:48:57 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:25.398 22:48:57 -- setup/hugepages.sh@89 -- # local node 00:03:25.398 22:48:57 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:25.398 22:48:57 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:25.398 22:48:57 -- setup/hugepages.sh@92 -- # local surp 00:03:25.398 22:48:57 -- setup/hugepages.sh@93 -- # local resv 00:03:25.398 22:48:57 -- setup/hugepages.sh@94 -- # local anon 00:03:25.398 22:48:57 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:25.398 22:48:57 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:25.398 22:48:57 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:25.398 22:48:57 -- setup/common.sh@18 -- # local node= 00:03:25.398 22:48:57 -- setup/common.sh@19 -- # local var val 00:03:25.398 22:48:57 -- setup/common.sh@20 -- # local mem_f mem 00:03:25.398 22:48:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:25.398 22:48:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:25.398 22:48:57 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:25.398 22:48:57 -- setup/common.sh@28 -- # mapfile -t mem 00:03:25.398 22:48:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:25.398 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.398 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.398 22:48:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 42415132 kB' 'MemAvailable: 46335128 kB' 'Buffers: 2704 kB' 'Cached: 11820428 kB' 'SwapCached: 0 kB' 'Active: 8683580 kB' 'Inactive: 3676228 kB' 'Active(anon): 8293716 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676228 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 
540192 kB' 'Mapped: 173728 kB' 'Shmem: 7757040 kB' 'KReclaimable: 501396 kB' 'Slab: 1135484 kB' 'SReclaimable: 501396 kB' 'SUnreclaim: 634088 kB' 'KernelStack: 22288 kB' 'PageTables: 8292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486600 kB' 'Committed_AS: 9712868 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216804 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3116404 kB' 'DirectMap2M: 14395392 kB' 'DirectMap1G: 51380224 kB' 00:03:25.398 22:48:57 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.398 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.398 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.398 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.398 22:48:57 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.398 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.398 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.398 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.398 22:48:57 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.398 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.398 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.398 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.398 22:48:57 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.398 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.398 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.398 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.398 22:48:57 -- setup/common.sh@32 -- # [[ Cached == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.398 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.398 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.398 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.398 22:48:57 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.398 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.398 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.398 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.398 22:48:57 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.398 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.398 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.398 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.398 22:48:57 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.398 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.398 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.398 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.398 22:48:57 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.398 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.398 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.398 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.398 22:48:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.398 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.398 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.398 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.398 22:48:57 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.398 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.398 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.398 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.398 22:48:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:25.398 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.398 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.398 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.398 22:48:57 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.398 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.398 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.398 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.398 22:48:57 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.398 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.398 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.398 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.398 22:48:57 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.398 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.398 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.398 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.398 22:48:57 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.398 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.398 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.398 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.398 22:48:57 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.398 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.398 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.398 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.398 22:48:57 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.398 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.398 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.398 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.398 22:48:57 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.398 22:48:57 -- setup/common.sh@32 -- # continue 
00:03:25.398 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.398 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.398 22:48:57 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.398 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.398 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.398 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.398 22:48:57 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.398 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.398 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.398 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.398 22:48:57 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.398 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.398 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.398 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.398 22:48:57 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.398 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.398 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.398 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.398 22:48:57 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.398 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.398 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.398 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.398 22:48:57 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.398 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.398 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.399 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.399 22:48:57 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.399 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.399 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 
00:03:25.399 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.399 22:48:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.399 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.399 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.399 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.399 22:48:57 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.399 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.399 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.399 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.399 22:48:57 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.399 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.399 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.399 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.399 22:48:57 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.399 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.399 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.399 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.399 22:48:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.399 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.399 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.399 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.399 22:48:57 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.399 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.399 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.399 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.399 22:48:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.399 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.399 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.399 22:48:57 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:25.399 22:48:57 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.399 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.399 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.399 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.399 22:48:57 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.399 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.399 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.399 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.399 22:48:57 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.399 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.399 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.399 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.399 22:48:57 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.399 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.399 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.399 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.399 22:48:57 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.399 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.399 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.399 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.399 22:48:57 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.399 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.399 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.399 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.399 22:48:57 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.399 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.399 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.399 22:48:57 -- setup/common.sh@31 -- # read -r var 
val _ 00:03:25.399 22:48:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.399 22:48:57 -- setup/common.sh@33 -- # echo 0 00:03:25.399 22:48:57 -- setup/common.sh@33 -- # return 0 00:03:25.399 22:48:57 -- setup/hugepages.sh@97 -- # anon=0 00:03:25.399 22:48:57 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:25.399 22:48:57 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:25.399 22:48:57 -- setup/common.sh@18 -- # local node= 00:03:25.399 22:48:57 -- setup/common.sh@19 -- # local var val 00:03:25.399 22:48:57 -- setup/common.sh@20 -- # local mem_f mem 00:03:25.399 22:48:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:25.399 22:48:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:25.399 22:48:57 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:25.399 22:48:57 -- setup/common.sh@28 -- # mapfile -t mem 00:03:25.399 22:48:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:25.399 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.399 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.399 22:48:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 42415772 kB' 'MemAvailable: 46335768 kB' 'Buffers: 2704 kB' 'Cached: 11820432 kB' 'SwapCached: 0 kB' 'Active: 8683640 kB' 'Inactive: 3676228 kB' 'Active(anon): 8293776 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676228 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540168 kB' 'Mapped: 173724 kB' 'Shmem: 7757044 kB' 'KReclaimable: 501396 kB' 'Slab: 1135520 kB' 'SReclaimable: 501396 kB' 'SUnreclaim: 634124 kB' 'KernelStack: 22288 kB' 'PageTables: 8536 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486600 kB' 'Committed_AS: 9711364 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 
216836 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3116404 kB' 'DirectMap2M: 14395392 kB' 'DirectMap1G: 51380224 kB' 00:03:25.399 22:48:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.399 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.399 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.399 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.399 22:48:57 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.399 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.399 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.399 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.399 22:48:57 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.399 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.399 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.399 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.399 22:48:57 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.399 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.399 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.399 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.399 22:48:57 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.399 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.399 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.399 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.399 22:48:57 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.399 22:48:57 -- setup/common.sh@32 -- # 
continue 00:03:25.399 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.399 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.399 22:48:57 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.399 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.399 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.399 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.399 22:48:57 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.399 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.399 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.399 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.399 22:48:57 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.399 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.399 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.399 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.399 22:48:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.399 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.399 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.399 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.399 22:48:57 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.400 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.400 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.400 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.400 22:48:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.400 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.400 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.400 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.400 22:48:57 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.400 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.400 
22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.400 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.400 22:48:57 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.400 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.400 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.400 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.400 22:48:57 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.400 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.400 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.400 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.400 22:48:57 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.400 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.400 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.400 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.400 22:48:57 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.400 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.400 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.400 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.400 22:48:57 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.400 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.400 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.400 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.400 22:48:57 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.400 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.400 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.400 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.400 22:48:57 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.400 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.400 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 
00:03:25.400 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.400 22:48:57 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.400 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.400 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.400 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.400 22:48:57 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.400 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.400 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.400 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.400 22:48:57 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.400 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.400 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.400 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.400 22:48:57 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.400 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.400 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.400 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.400 22:48:57 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.400 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.400 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.400 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.400 22:48:57 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.400 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.400 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.400 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.400 22:48:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.400 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.400 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.400 22:48:57 -- setup/common.sh@31 
-- # read -r var val _ 00:03:25.400 22:48:57 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.400 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.400 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.400 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.400 22:48:57 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.400 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.400 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.400 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.400 22:48:57 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.400 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.400 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.400 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.400 22:48:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.400 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.400 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.400 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.400 22:48:57 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.400 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.400 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.400 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.400 22:48:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.400 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.400 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.400 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.400 22:48:57 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.400 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.400 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.400 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 
00:03:25.400 22:48:57 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.400 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.400 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.400 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.400 22:48:57 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.400 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.400 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.400 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.400 22:48:57 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.400 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.400 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.400 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.400 22:48:57 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.400 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.400 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.400 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.400 22:48:57 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.400 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.400 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.400 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.400 22:48:57 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.400 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.400 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.400 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.400 22:48:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.400 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.400 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.400 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.400 
22:48:57 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.400 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.400 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.400 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.400 22:48:57 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.400 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.400 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.400 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.400 22:48:57 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.400 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.400 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.400 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.400 22:48:57 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.400 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.400 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.400 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.400 22:48:57 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.400 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.400 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.400 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.400 22:48:57 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.400 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.400 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.400 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.400 22:48:57 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.400 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.400 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.400 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.400 22:48:57 -- 
setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.400 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.400 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.400 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.400 22:48:57 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.400 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.400 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.400 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.401 22:48:57 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.401 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.401 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.401 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.401 22:48:57 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.401 22:48:57 -- setup/common.sh@33 -- # echo 0 00:03:25.401 22:48:57 -- setup/common.sh@33 -- # return 0 00:03:25.401 22:48:57 -- setup/hugepages.sh@99 -- # surp=0 00:03:25.401 22:48:57 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:25.401 22:48:57 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:25.401 22:48:57 -- setup/common.sh@18 -- # local node= 00:03:25.401 22:48:57 -- setup/common.sh@19 -- # local var val 00:03:25.401 22:48:57 -- setup/common.sh@20 -- # local mem_f mem 00:03:25.401 22:48:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:25.401 22:48:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:25.401 22:48:57 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:25.401 22:48:57 -- setup/common.sh@28 -- # mapfile -t mem 00:03:25.401 22:48:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:25.401 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.401 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.401 22:48:57 -- setup/common.sh@16 -- # 
printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 42415120 kB' 'MemAvailable: 46335116 kB' 'Buffers: 2704 kB' 'Cached: 11820444 kB' 'SwapCached: 0 kB' 'Active: 8683528 kB' 'Inactive: 3676228 kB' 'Active(anon): 8293664 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676228 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540012 kB' 'Mapped: 173724 kB' 'Shmem: 7757056 kB' 'KReclaimable: 501396 kB' 'Slab: 1135520 kB' 'SReclaimable: 501396 kB' 'SUnreclaim: 634124 kB' 'KernelStack: 22240 kB' 'PageTables: 8128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486600 kB' 'Committed_AS: 9711380 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216740 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3116404 kB' 'DirectMap2M: 14395392 kB' 'DirectMap1G: 51380224 kB' 00:03:25.401 22:48:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.401 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.401 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.401 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.401 22:48:57 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.401 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.401 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.401 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.401 22:48:57 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.401 22:48:57 -- setup/common.sh@32 -- # continue 
00:03:25.401 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.401 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.401 22:48:57 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.401 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.401 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.401 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.401 22:48:57 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.401 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.401 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.401 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.401 22:48:57 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.401 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.401 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.401 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.401 22:48:57 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.401 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.401 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.401 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.401 22:48:57 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.401 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.401 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.401 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.401 22:48:57 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.401 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.401 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.401 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.401 22:48:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.401 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.401 22:48:57 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:25.401 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.401 22:48:57 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.401 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.401 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.401 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.401 22:48:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.401 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.401 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.401 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.401 22:48:57 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.401 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.401 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.401 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.401 22:48:57 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.401 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.401 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.401 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.401 22:48:57 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.401 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.401 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.401 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.401 22:48:57 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.401 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.401 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.401 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.401 22:48:57 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.401 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.401 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 
00:03:25.401 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.401 22:48:57 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.401 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.401 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.401 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.401 22:48:57 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.401 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.401 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.401 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.401 22:48:57 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.401 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.401 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.401 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.401 22:48:57 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.401 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.401 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.401 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.401 22:48:57 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.401 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.401 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.401 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.401 22:48:57 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.401 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.401 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.401 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.401 22:48:57 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.401 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.401 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.401 22:48:57 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:25.401 22:48:57 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.401 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.664 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.664 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.664 22:48:57 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.664 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.664 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.664 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.664 22:48:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.665 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.665 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.665 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.665 22:48:57 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.665 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.665 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.665 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.665 22:48:57 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.665 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.665 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.665 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.665 22:48:57 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.665 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.665 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.665 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.665 22:48:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.665 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.665 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.665 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.665 
22:48:57 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.665 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.665 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.665 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.665 22:48:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.665 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.665 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.665 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.665 22:48:57 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.665 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.665 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.665 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.665 22:48:57 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.665 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.665 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.665 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.665 22:48:57 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.665 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.665 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.665 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.665 22:48:57 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.665 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.665 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.665 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.665 22:48:57 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.665 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.665 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.665 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.665 22:48:57 -- 
setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.665 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.665 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.665 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.665 22:48:57 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.665 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.665 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.665 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.665 22:48:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.665 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.665 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.665 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.665 22:48:57 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.665 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.665 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.665 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.665 22:48:57 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.665 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.665 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.665 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.665 22:48:57 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.665 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.665 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.665 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.665 22:48:57 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.665 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.665 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.665 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.665 22:48:57 -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.665 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.665 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.665 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.665 22:48:57 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.665 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.665 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.665 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.665 22:48:57 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.665 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.665 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.665 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.665 22:48:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.665 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.665 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.665 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.665 22:48:57 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.665 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.665 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.665 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.665 22:48:57 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.665 22:48:57 -- setup/common.sh@33 -- # echo 0 00:03:25.665 22:48:57 -- setup/common.sh@33 -- # return 0 00:03:25.665 22:48:57 -- setup/hugepages.sh@100 -- # resv=0 00:03:25.665 22:48:57 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:25.665 nr_hugepages=1025 00:03:25.665 22:48:57 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:25.665 resv_hugepages=0 00:03:25.665 22:48:57 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:25.665 surplus_hugepages=0 00:03:25.665 
22:48:57 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:25.665 anon_hugepages=0 00:03:25.665 22:48:57 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:25.665 22:48:57 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:25.665 22:48:57 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:25.665 22:48:57 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:25.665 22:48:57 -- setup/common.sh@18 -- # local node= 00:03:25.665 22:48:57 -- setup/common.sh@19 -- # local var val 00:03:25.665 22:48:57 -- setup/common.sh@20 -- # local mem_f mem 00:03:25.665 22:48:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:25.665 22:48:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:25.665 22:48:57 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:25.665 22:48:57 -- setup/common.sh@28 -- # mapfile -t mem 00:03:25.665 22:48:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:25.665 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.665 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.665 22:48:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 42417076 kB' 'MemAvailable: 46337072 kB' 'Buffers: 2704 kB' 'Cached: 11820456 kB' 'SwapCached: 0 kB' 'Active: 8683684 kB' 'Inactive: 3676228 kB' 'Active(anon): 8293820 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676228 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540168 kB' 'Mapped: 173724 kB' 'Shmem: 7757068 kB' 'KReclaimable: 501396 kB' 'Slab: 1135520 kB' 'SReclaimable: 501396 kB' 'SUnreclaim: 634124 kB' 'KernelStack: 22304 kB' 'PageTables: 8564 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486600 kB' 'Committed_AS: 9712664 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216820 kB' 
'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3116404 kB' 'DirectMap2M: 14395392 kB' 'DirectMap1G: 51380224 kB' 00:03:25.665 22:48:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.665 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.665 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.665 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.665 22:48:57 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.665 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.665 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.665 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.665 22:48:57 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.665 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.665 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.665 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.665 22:48:57 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.665 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.665 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.665 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.665 22:48:57 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.665 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.665 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.665 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.665 22:48:57 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.666 22:48:57 -- setup/common.sh@32 -- # 
continue 00:03:25.666 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.666 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.666 22:48:57 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.666 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.666 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.666 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.666 22:48:57 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.666 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.666 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.666 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.666 22:48:57 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.666 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.666 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.666 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.666 22:48:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.666 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.666 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.666 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.666 22:48:57 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.666 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.666 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.666 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.666 22:48:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.666 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.666 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.666 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.666 22:48:57 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.666 22:48:57 -- setup/common.sh@32 -- # continue 
00:03:25.666 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.666 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.666 22:48:57 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.666 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.666 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.666 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.666 22:48:57 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.666 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.666 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.666 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.666 22:48:57 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.666 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.666 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.666 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.666 22:48:57 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.666 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.666 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.666 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.666 22:48:57 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.666 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.666 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.666 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.666 22:48:57 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.666 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.666 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.666 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.666 22:48:57 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.666 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.666 22:48:57 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:25.666 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.666 22:48:57 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.666 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.666 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.666 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.666 22:48:57 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.666 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.666 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.666 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.666 22:48:57 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.666 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.666 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.666 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.666 22:48:57 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.666 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.666 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.666 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.666 22:48:57 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.666 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.666 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.666 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.666 22:48:57 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.666 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.666 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.666 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.666 22:48:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.666 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.666 22:48:57 -- setup/common.sh@31 -- # IFS=': 
' 00:03:25.666 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.666 22:48:57 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.666 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.666 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.666 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.666 22:48:57 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.666 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.666 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.666 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.666 22:48:57 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.666 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.666 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.666 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.666 22:48:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.666 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.666 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.666 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.666 22:48:57 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.666 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.666 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.666 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.666 22:48:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.666 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.666 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.666 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.666 22:48:57 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.666 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.666 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 
00:03:25.666 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.666 22:48:57 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.666 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.666 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.666 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.666 22:48:57 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.666 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.666 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.666 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.666 22:48:57 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.666 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.666 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.666 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.666 22:48:57 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.666 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.666 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.666 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.666 22:48:57 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.666 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.666 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.666 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.666 22:48:57 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.666 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.666 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.666 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.666 22:48:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.666 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.666 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 
00:03:25.666 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.666 22:48:57 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.666 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.666 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.666 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.666 22:48:57 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.666 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.666 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.666 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.666 22:48:57 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.666 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.666 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.666 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.666 22:48:57 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.666 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.667 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.667 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.667 22:48:57 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.667 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.667 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.667 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.667 22:48:57 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.667 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.667 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.667 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.667 22:48:57 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.667 22:48:57 -- setup/common.sh@32 -- # continue 00:03:25.667 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 
00:03:25.667 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.667 22:48:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.667 22:48:57 -- setup/common.sh@33 -- # echo 1025 00:03:25.667 22:48:57 -- setup/common.sh@33 -- # return 0 00:03:25.667 22:48:57 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:25.667 22:48:57 -- setup/hugepages.sh@112 -- # get_nodes 00:03:25.667 22:48:57 -- setup/hugepages.sh@27 -- # local node 00:03:25.667 22:48:57 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:25.667 22:48:57 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:25.667 22:48:57 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:25.667 22:48:57 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:25.667 22:48:57 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:25.667 22:48:57 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:25.667 22:48:57 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:25.667 22:48:57 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:25.667 22:48:57 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:25.667 22:48:57 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:25.667 22:48:57 -- setup/common.sh@18 -- # local node=0 00:03:25.667 22:48:57 -- setup/common.sh@19 -- # local var val 00:03:25.667 22:48:57 -- setup/common.sh@20 -- # local mem_f mem 00:03:25.667 22:48:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:25.667 22:48:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:25.667 22:48:57 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:25.667 22:48:57 -- setup/common.sh@28 -- # mapfile -t mem 00:03:25.667 22:48:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:25.667 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 
00:03:25.667 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.667 22:48:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 27432984 kB' 'MemUsed: 5159100 kB' 'SwapCached: 0 kB' 'Active: 2298924 kB' 'Inactive: 275656 kB' 'Active(anon): 2139072 kB' 'Inactive(anon): 0 kB' 'Active(file): 159852 kB' 'Inactive(file): 275656 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2420240 kB' 'Mapped: 78788 kB' 'AnonPages: 157532 kB' 'Shmem: 1984732 kB' 'KernelStack: 12840 kB' 'PageTables: 3612 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 161536 kB' 'Slab: 438384 kB' 'SReclaimable: 161536 kB' 'SUnreclaim: 276848 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[ xtrace field-by-field scan of the node0 snapshot above elided: every key except HugePages_Surp takes the 'continue' branch ]
00:03:25.668 22:48:57 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.668 22:48:57 -- setup/common.sh@33 -- # echo 0 00:03:25.668 22:48:57 -- setup/common.sh@33 -- # return 0 00:03:25.668 22:48:57 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:25.668 22:48:57 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:25.668 22:48:57 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:25.668 22:48:57 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:25.668 22:48:57 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:25.668 22:48:57 -- setup/common.sh@18 -- # local node=1 00:03:25.668 22:48:57 -- setup/common.sh@19 -- # local var val 00:03:25.668 22:48:57 -- setup/common.sh@20 -- # local mem_f mem 00:03:25.668 22:48:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:25.668 22:48:57 -- 
setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:25.668 22:48:57 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:25.668 22:48:57 -- setup/common.sh@28 -- # mapfile -t mem 00:03:25.668 22:48:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:25.668 22:48:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.668 22:48:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.668 22:48:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27703108 kB' 'MemFree: 14985020 kB' 'MemUsed: 12718088 kB' 'SwapCached: 0 kB' 'Active: 6384752 kB' 'Inactive: 3400572 kB' 'Active(anon): 6154740 kB' 'Inactive(anon): 0 kB' 'Active(file): 230012 kB' 'Inactive(file): 3400572 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9402920 kB' 'Mapped: 94936 kB' 'AnonPages: 382592 kB' 'Shmem: 5772336 kB' 'KernelStack: 9416 kB' 'PageTables: 4660 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 339860 kB' 'Slab: 697136 kB' 'SReclaimable: 339860 kB' 'SUnreclaim: 357276 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0'
[ xtrace field-by-field scan of the node1 snapshot above elided: every key except HugePages_Surp takes the 'continue' branch ]
00:03:25.669 22:48:57 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.669 22:48:57 -- setup/common.sh@33 -- # echo 0 00:03:25.669 22:48:57 -- setup/common.sh@33 -- # return 0 00:03:25.669 22:48:57 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:25.669 22:48:57 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:25.669 22:48:57 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:25.669 22:48:57 -- setup/hugepages.sh@127 -- # 
sorted_s[nodes_sys[node]]=1 00:03:25.669 22:48:57 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:25.669 node0=512 expecting 513 00:03:25.669 22:48:57 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:25.669 22:48:57 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:25.669 22:48:57 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:25.669 22:48:57 -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:25.669 node1=513 expecting 512 00:03:25.669 22:48:57 -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:25.669 00:03:25.669 real 0m3.662s 00:03:25.669 user 0m1.409s 00:03:25.669 sys 0m2.325s 00:03:25.669 22:48:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:25.669 22:48:57 -- common/autotest_common.sh@10 -- # set +x 00:03:25.669 ************************************ 00:03:25.669 END TEST odd_alloc 00:03:25.669 ************************************ 00:03:25.669 22:48:57 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:25.669 22:48:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:25.669 22:48:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:25.669 22:48:57 -- common/autotest_common.sh@10 -- # set +x 00:03:25.669 ************************************ 00:03:25.669 START TEST custom_alloc 00:03:25.669 ************************************ 00:03:25.669 22:48:57 -- common/autotest_common.sh@1104 -- # custom_alloc 00:03:25.669 22:48:57 -- setup/hugepages.sh@167 -- # local IFS=, 00:03:25.669 22:48:57 -- setup/hugepages.sh@169 -- # local node 00:03:25.669 22:48:57 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:25.669 22:48:57 -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:25.669 22:48:57 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:25.669 22:48:57 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:25.669 22:48:57 -- setup/hugepages.sh@49 -- # local size=1048576 
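The `get_test_nr_hugepages 1048576` call traced above converts a requested size in kB into a 2048 kB hugepage count (1048576 / 2048 = 512) and then splits that count across the NUMA nodes (256 + 256). A minimal standalone sketch of that arithmetic, not the actual SPDK helper, with the function name reused only for illustration:

```shell
# Sketch only: mirrors the size -> page-count -> per-node split visible in
# the trace. Assumes 2048 kB (2 MB) hugepages, as the test environment uses.
get_test_nr_hugepages() {
  local size_kb=$1 no_nodes=$2
  local default_hugepages_kb=2048                     # assumed hugepage size
  local nr_hugepages=$((size_kb / default_hugepages_kb))
  local per_node=$((nr_hugepages / no_nodes))         # even split across nodes
  echo "$nr_hugepages $per_node"
}

get_test_nr_hugepages 1048576 2   # 512 pages total, 256 per node
```

The second call in the trace, with size 2097152, yields 1024 pages (512 per node) by the same division.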
00:03:25.669 22:48:57 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:25.669 22:48:57 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:25.669 22:48:57 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:25.669 22:48:57 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:25.669 22:48:57 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:25.669 22:48:57 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:25.669 22:48:57 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:25.669 22:48:57 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:25.669 22:48:57 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:25.669 22:48:57 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:25.669 22:48:57 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:25.669 22:48:57 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:25.669 22:48:57 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:25.669 22:48:57 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:25.669 22:48:57 -- setup/hugepages.sh@83 -- # : 256 00:03:25.669 22:48:57 -- setup/hugepages.sh@84 -- # : 1 00:03:25.669 22:48:57 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:25.669 22:48:57 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:25.669 22:48:57 -- setup/hugepages.sh@83 -- # : 0 00:03:25.669 22:48:57 -- setup/hugepages.sh@84 -- # : 0 00:03:25.669 22:48:57 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:25.669 22:48:57 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:25.669 22:48:57 -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:25.669 22:48:57 -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:25.669 22:48:57 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:25.669 22:48:57 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:25.669 22:48:57 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:25.669 22:48:57 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:25.669 22:48:57 -- 
setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:25.669 22:48:57 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:25.669 22:48:57 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:25.669 22:48:57 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:25.669 22:48:57 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:25.669 22:48:57 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:25.669 22:48:57 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:25.669 22:48:57 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:25.669 22:48:57 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:25.669 22:48:57 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:25.669 22:48:57 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:25.669 22:48:57 -- setup/hugepages.sh@78 -- # return 0 00:03:25.669 22:48:57 -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:25.670 22:48:57 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:25.670 22:48:57 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:25.670 22:48:57 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:25.670 22:48:57 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:25.670 22:48:57 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:25.670 22:48:57 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:25.670 22:48:57 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:25.670 22:48:57 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:25.670 22:48:57 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:25.670 22:48:57 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:25.670 22:48:57 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:25.670 22:48:57 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:25.670 22:48:57 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:25.670 22:48:57 -- 
setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:25.670 22:48:57 -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:25.670 22:48:57 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:25.670 22:48:57 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:25.670 22:48:57 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:25.670 22:48:57 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:25.670 22:48:57 -- setup/hugepages.sh@78 -- # return 0 00:03:25.670 22:48:57 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:25.670 22:48:57 -- setup/hugepages.sh@187 -- # setup output 00:03:25.670 22:48:57 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:25.670 22:48:57 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:28.961 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:28.961 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:28.961 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:28.961 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:28.961 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:28.961 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:28.961 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:28.961 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:28.961 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:28.961 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:28.962 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:28.962 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:28.962 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:28.962 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:28.962 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:28.962 0000:80:04.0 (8086 2021): Already using the 
vfio-pci driver 00:03:28.962 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:29.225 22:49:01 -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:29.225 22:49:01 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:29.225 22:49:01 -- setup/hugepages.sh@89 -- # local node 00:03:29.225 22:49:01 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:29.225 22:49:01 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:29.225 22:49:01 -- setup/hugepages.sh@92 -- # local surp 00:03:29.225 22:49:01 -- setup/hugepages.sh@93 -- # local resv 00:03:29.225 22:49:01 -- setup/hugepages.sh@94 -- # local anon 00:03:29.225 22:49:01 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:29.225 22:49:01 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:29.225 22:49:01 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:29.225 22:49:01 -- setup/common.sh@18 -- # local node= 00:03:29.225 22:49:01 -- setup/common.sh@19 -- # local var val 00:03:29.225 22:49:01 -- setup/common.sh@20 -- # local mem_f mem 00:03:29.225 22:49:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:29.225 22:49:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:29.225 22:49:01 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:29.225 22:49:01 -- setup/common.sh@28 -- # mapfile -t mem 00:03:29.225 22:49:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:29.225 22:49:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.225 22:49:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.225 22:49:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 41361876 kB' 'MemAvailable: 45281872 kB' 'Buffers: 2704 kB' 'Cached: 11820564 kB' 'SwapCached: 0 kB' 'Active: 8684296 kB' 'Inactive: 3676228 kB' 'Active(anon): 8294432 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676228 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 
kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540100 kB' 'Mapped: 173600 kB' 'Shmem: 7757176 kB' 'KReclaimable: 501396 kB' 'Slab: 1135304 kB' 'SReclaimable: 501396 kB' 'SUnreclaim: 633908 kB' 'KernelStack: 22304 kB' 'PageTables: 8596 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963336 kB' 'Committed_AS: 9709136 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216660 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3116404 kB' 'DirectMap2M: 14395392 kB' 'DirectMap1G: 51380224 kB' 00:03:29.225 22:49:01 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.225 22:49:01 -- setup/common.sh@32 -- # continue 00:03:29.225 22:49:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.225 22:49:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.225 22:49:01 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.225 22:49:01 -- setup/common.sh@32 -- # continue 00:03:29.225 22:49:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.225 22:49:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.225 22:49:01 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.225 22:49:01 -- setup/common.sh@32 -- # continue 00:03:29.225 22:49:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.225 22:49:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.225 22:49:01 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.225 22:49:01 -- setup/common.sh@32 -- # continue 00:03:29.225 22:49:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.225 22:49:01 -- setup/common.sh@31 -- # read -r var val _ 
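The long runs of `[[ $var == ... ]] / continue` lines in this log are xtrace output of a loop that scans a meminfo snapshot key by key until it finds the requested field, splitting each `Key: value kB` line on `IFS=': '`. A minimal sketch of that parsing pattern, under a hypothetical helper name (the real logic lives in setup/common.sh's `get_meminfo`):

```shell
# Sketch only: read one field from a meminfo-style stream on stdin.
# IFS=': ' splits "AnonHugePages: 0 kB" into var=AnonHugePages val=0 _=kB,
# matching the var/val/_ triple read in the traced loop.
get_meminfo_field() {
  local get=$1 var val _
  while IFS=': ' read -r var val _; do
    [[ $var == "$get" ]] && { echo "$val"; return 0; }   # found: emit value
  done
  return 1                                               # field not present
}

printf '%s\n' 'MemTotal: 60295192 kB' 'AnonHugePages: 0 kB' |
  get_meminfo_field AnonHugePages   # prints 0
```

Each non-matching key corresponds to one `continue` line in the trace, which is why the scan dominates the log for every snapshot.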
00:03:29.226 22:49:01 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:29.226 22:49:01 -- setup/common.sh@32 -- # continue
00:03:29.226 22:49:01 -- setup/common.sh@31 -- # IFS=': '
00:03:29.226 22:49:01 -- setup/common.sh@31 -- # read -r var val _
[... xtrace elided: the same "[[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue / IFS=': ' / read -r var val _" cycle repeats for each remaining /proc/meminfo field, SwapCached through HardwareCorrupted ...]
00:03:29.226 22:49:01 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:29.226 22:49:01 -- setup/common.sh@33 -- # echo 0
00:03:29.226 22:49:01 -- setup/common.sh@33 -- # return 0
00:03:29.226 22:49:01 -- setup/hugepages.sh@97 -- # anon=0
00:03:29.226 22:49:01 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:29.226 22:49:01 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:29.226 22:49:01 -- setup/common.sh@18 -- # local node=
00:03:29.226 22:49:01 -- setup/common.sh@19 -- # local var val
00:03:29.226 22:49:01 -- setup/common.sh@20 -- # local mem_f mem
00:03:29.226 22:49:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:29.226 22:49:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:29.226 22:49:01 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:29.226 22:49:01 -- setup/common.sh@28 -- # mapfile -t mem
00:03:29.227 22:49:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:29.227 22:49:01 -- setup/common.sh@31 -- # IFS=': '
00:03:29.227 22:49:01 -- setup/common.sh@31 -- # read -r var val _
00:03:29.227 22:49:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 41360588 kB' 'MemAvailable: 45280584 kB' 'Buffers: 2704 kB' 'Cached: 11820568 kB' 'SwapCached: 0 kB' 'Active: 8684044 kB' 'Inactive: 3676228 kB' 'Active(anon): 8294180 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676228 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540340 kB' 'Mapped: 173484 kB' 'Shmem: 7757180 kB' 'KReclaimable: 501396 kB' 'Slab: 1135312 kB' 'SReclaimable: 501396 kB' 'SUnreclaim: 633916 kB' 'KernelStack: 22304 kB' 'PageTables: 8596 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963336 kB' 'Committed_AS: 9709148 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216660 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3116404 kB' 'DirectMap2M: 14395392 kB' 'DirectMap1G: 51380224 kB'
[... xtrace elided: the same "[[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue / IFS=': ' / read -r var val _" cycle repeats for each /proc/meminfo field, MemTotal through HugePages_Rsvd ...]
00:03:29.228 22:49:01 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:29.228 22:49:01 -- setup/common.sh@33 -- # echo 0
00:03:29.228 22:49:01 -- setup/common.sh@33 -- # return 0
00:03:29.228 22:49:01 -- setup/hugepages.sh@99 -- # surp=0
00:03:29.228 22:49:01 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:29.228 22:49:01 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:29.228 22:49:01 -- setup/common.sh@18 -- # local node=
00:03:29.228 22:49:01 -- setup/common.sh@19 -- # local var val
00:03:29.228 22:49:01 -- setup/common.sh@20 -- # local mem_f mem
00:03:29.228 22:49:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:29.228 22:49:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:29.228 22:49:01 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:29.228 22:49:01 -- setup/common.sh@28 -- # mapfile -t mem
00:03:29.228 22:49:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:29.228 22:49:01 -- setup/common.sh@31 -- # IFS=': '
00:03:29.228 22:49:01 -- setup/common.sh@31 -- # read -r var val _
00:03:29.228 22:49:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 41360588 kB' 'MemAvailable: 45280584 kB' 'Buffers: 2704 kB' 'Cached: 11820568 kB' 'SwapCached: 0 kB' 'Active: 8684044 kB' 'Inactive: 3676228 kB' 'Active(anon): 8294180 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676228 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540340 kB' 'Mapped: 173484 kB' 'Shmem: 7757180 kB' 'KReclaimable: 501396 kB' 'Slab: 1135312 kB' 'SReclaimable: 501396 kB' 'SUnreclaim: 633916 kB' 'KernelStack: 22304 kB' 'PageTables: 8596 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963336 kB' 'Committed_AS: 9709164 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216660 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3116404 kB' 'DirectMap2M: 14395392 kB' 'DirectMap1G: 51380224 kB'
[... xtrace elided: the same "[[ <field> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] / continue / IFS=': ' / read -r var val _" cycle repeats for MemTotal through NFS_Unstable ...]
00:03:29.229 22:49:01 -- setup/common.sh@31
-- # read -r var val _ 00:03:29.229 22:49:01 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.229 22:49:01 -- setup/common.sh@32 -- # continue 00:03:29.229 22:49:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.229 22:49:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.229 22:49:01 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.229 22:49:01 -- setup/common.sh@32 -- # continue 00:03:29.229 22:49:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.229 22:49:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.229 22:49:01 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.229 22:49:01 -- setup/common.sh@32 -- # continue 00:03:29.229 22:49:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.229 22:49:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.229 22:49:01 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.229 22:49:01 -- setup/common.sh@32 -- # continue 00:03:29.229 22:49:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.229 22:49:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.229 22:49:01 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.229 22:49:01 -- setup/common.sh@32 -- # continue 00:03:29.229 22:49:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.229 22:49:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.229 22:49:01 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.229 22:49:01 -- setup/common.sh@32 -- # continue 00:03:29.229 22:49:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.229 22:49:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.229 22:49:01 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.229 22:49:01 -- setup/common.sh@32 -- # continue 00:03:29.229 22:49:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.229 22:49:01 -- setup/common.sh@31 -- # read -r var val _ 
00:03:29.229 22:49:01 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.229 22:49:01 -- setup/common.sh@32 -- # continue 00:03:29.229 22:49:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.229 22:49:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.229 22:49:01 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.229 22:49:01 -- setup/common.sh@32 -- # continue 00:03:29.229 22:49:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.229 22:49:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.229 22:49:01 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.229 22:49:01 -- setup/common.sh@32 -- # continue 00:03:29.229 22:49:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.229 22:49:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.229 22:49:01 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.229 22:49:01 -- setup/common.sh@32 -- # continue 00:03:29.229 22:49:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.229 22:49:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.229 22:49:01 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.229 22:49:01 -- setup/common.sh@32 -- # continue 00:03:29.229 22:49:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.229 22:49:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.229 22:49:01 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.229 22:49:01 -- setup/common.sh@32 -- # continue 00:03:29.229 22:49:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.229 22:49:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.229 22:49:01 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.229 22:49:01 -- setup/common.sh@32 -- # continue 00:03:29.229 22:49:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.229 22:49:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.229 
22:49:01 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.229 22:49:01 -- setup/common.sh@32 -- # continue 00:03:29.229 22:49:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.229 22:49:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.229 22:49:01 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.229 22:49:01 -- setup/common.sh@32 -- # continue 00:03:29.229 22:49:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.229 22:49:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.229 22:49:01 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.229 22:49:01 -- setup/common.sh@32 -- # continue 00:03:29.229 22:49:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.229 22:49:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.229 22:49:01 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.229 22:49:01 -- setup/common.sh@32 -- # continue 00:03:29.229 22:49:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.229 22:49:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.229 22:49:01 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.229 22:49:01 -- setup/common.sh@32 -- # continue 00:03:29.229 22:49:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.229 22:49:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.229 22:49:01 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.229 22:49:01 -- setup/common.sh@33 -- # echo 0 00:03:29.229 22:49:01 -- setup/common.sh@33 -- # return 0 00:03:29.229 22:49:01 -- setup/hugepages.sh@100 -- # resv=0 00:03:29.229 22:49:01 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:29.229 nr_hugepages=1536 00:03:29.229 22:49:01 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:29.229 resv_hugepages=0 00:03:29.229 22:49:01 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:29.229 surplus_hugepages=0 
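The xtrace above is setup/common.sh's key-matching loop over /proc/meminfo: split each line on `': '` into key and value, `continue` past every non-matching key, and echo the value once the requested key is found. The following is a minimal standalone sketch of that pattern, reconstructed from the trace rather than taken from the actual setup/common.sh source; the sample file contents are lines from the meminfo dump in this log.

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo lookup pattern traced above (reconstructed
# from the xtrace, not the real setup/common.sh): split each meminfo
# line on ': ', skip non-matching keys, echo the value of the match.
get_meminfo() {
    local get=$1 mem_f=$2
    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # the "continue" lines in the trace
        echo "$val"
        return 0
    done < "$mem_f"
    return 1
}

# Demo with two lines taken from the meminfo dump in this log.
tmp=$(mktemp)
printf '%s\n' 'MemTotal: 60295192 kB' 'HugePages_Total: 1536' > "$tmp"
get_meminfo HugePages_Total "$tmp"
```

When a node number is given, the real script swaps `mem_f` for `/sys/devices/system/node/nodeN/meminfo` (visible later in this trace) and strips the `Node N ` prefix from each line before matching; the lookup loop itself is the same.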
00:03:29.229 22:49:01 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:29.229 anon_hugepages=0 00:03:29.230 22:49:01 -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:29.230 22:49:01 -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:29.230 22:49:01 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:29.230 22:49:01 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:29.230 22:49:01 -- setup/common.sh@18 -- # local node= 00:03:29.230 22:49:01 -- setup/common.sh@19 -- # local var val 00:03:29.230 22:49:01 -- setup/common.sh@20 -- # local mem_f mem 00:03:29.230 22:49:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:29.230 22:49:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:29.230 22:49:01 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:29.230 22:49:01 -- setup/common.sh@28 -- # mapfile -t mem 00:03:29.230 22:49:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:29.230 22:49:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 41360912 kB' 'MemAvailable: 45280908 kB' 'Buffers: 2704 kB' 'Cached: 11820604 kB' 'SwapCached: 0 kB' 'Active: 8683732 kB' 'Inactive: 3676228 kB' 'Active(anon): 8293868 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676228 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 539980 kB' 'Mapped: 173484 kB' 'Shmem: 7757216 kB' 'KReclaimable: 501396 kB' 'Slab: 1135312 kB' 'SReclaimable: 501396 kB' 'SUnreclaim: 633916 kB' 'KernelStack: 22288 kB' 'PageTables: 8544 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963336 kB' 'Committed_AS: 9709180 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216660 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 
'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3116404 kB' 'DirectMap2M: 14395392 kB' 'DirectMap1G: 51380224 kB' 00:03:29.230 [... xtrace elided: setup/common.sh@32 tests each /proc/meminfo key (MemTotal through Unaccepted) against HugePages_Total and skips it via continue ...] 00:03:29.231 22:49:01 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:29.231 22:49:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.231 22:49:01 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.231 22:49:01 -- setup/common.sh@33 -- # echo 1536 00:03:29.231 22:49:01 -- setup/common.sh@33 -- # return 0 00:03:29.231 22:49:01 -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:29.231 22:49:01 -- setup/hugepages.sh@112 -- # get_nodes 00:03:29.231 22:49:01 -- setup/hugepages.sh@27 -- # local node 00:03:29.231 22:49:01 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:29.231 22:49:01 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:29.231 22:49:01 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:29.231 22:49:01 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:29.231 22:49:01 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:29.231 22:49:01 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:29.231 22:49:01 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:29.231 22:49:01 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:29.231 22:49:01 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:29.231 22:49:01 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:29.231 22:49:01 -- setup/common.sh@18 -- # local node=0 00:03:29.231 22:49:01 -- setup/common.sh@19 -- # local var val 00:03:29.231 22:49:01 -- setup/common.sh@20 -- # local mem_f mem 00:03:29.231 22:49:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:29.231 22:49:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:29.231 22:49:01 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:29.231 22:49:01 -- setup/common.sh@28 -- # mapfile -t mem 00:03:29.231 22:49:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:29.231 22:49:01 -- 
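The get_nodes step traced above fills a per-node array by globbing `/sys/devices/system/node/node+([0-9])` and stripping the path prefix with `${node##*node}`, ending with `no_nodes=2`. A self-contained sketch of that bookkeeping follows; the literal node paths and the computed 512/1024 counts stand in for the sysfs glob and the values the script reads, so it runs anywhere.

```shell
#!/usr/bin/env bash
# Sketch of the get_nodes bookkeeping traced above. The real script
# globs /sys/devices/system/node/node+([0-9]); literal paths stand in
# here so the sketch is runnable without that sysfs layout.
declare -A nodes_sys
for node in /sys/devices/system/node/node0 /sys/devices/system/node/node1; do
    # ${node##*node} drops everything up to and including the last
    # occurrence of "node", leaving just the numeric index ("0", "1").
    idx=${node##*node}
    # Stand-in for the per-node counts the script records (512, 1024
    # in this trace); the real values come from sysfs, not arithmetic.
    nodes_sys[$idx]=$(( 512 * (idx + 1) ))
done
no_nodes=${#nodes_sys[@]}
echo "no_nodes=$no_nodes node0=${nodes_sys[0]} node1=${nodes_sys[1]}"
```

The `${node##*node}` expansion is the whole trick: greedy prefix removal turns a sysfs directory path into a bare node index usable as an array key.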
setup/common.sh@31 -- # IFS=': ' 00:03:29.231 22:49:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.231 22:49:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 27423632 kB' 'MemUsed: 5168452 kB' 'SwapCached: 0 kB' 'Active: 2299384 kB' 'Inactive: 275656 kB' 'Active(anon): 2139532 kB' 'Inactive(anon): 0 kB' 'Active(file): 159852 kB' 'Inactive(file): 275656 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2420344 kB' 'Mapped: 78792 kB' 'AnonPages: 157472 kB' 'Shmem: 1984836 kB' 'KernelStack: 12904 kB' 'PageTables: 3732 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 161536 kB' 'Slab: 437772 kB' 'SReclaimable: 161536 kB' 'SUnreclaim: 276236 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:29.231 22:49:01 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.231 22:49:01 -- setup/common.sh@32 -- # continue 00:03:29.231 22:49:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.231 22:49:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.231 22:49:01 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.231 22:49:01 -- setup/common.sh@32 -- # continue 00:03:29.231 22:49:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.231 22:49:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.231 22:49:01 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.231 22:49:01 -- setup/common.sh@32 -- # continue 00:03:29.231 22:49:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.231 22:49:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.231 22:49:01 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.231 22:49:01 -- setup/common.sh@32 -- # continue 00:03:29.231 22:49:01 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:29.232 22:49:01 -- setup/common.sh@31 -- # read -r var val _ [... xtrace elided: setup/common.sh@32 tests each node0 meminfo key (Active through FileHugePages) against HugePages_Surp and skips it via continue ...] 00:03:29.232 22:49:01 
-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.232 22:49:01 -- setup/common.sh@32 -- # continue 00:03:29.232 22:49:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.232 22:49:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.232 22:49:01 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.232 22:49:01 -- setup/common.sh@32 -- # continue 00:03:29.232 22:49:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.232 22:49:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.232 22:49:01 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.232 22:49:01 -- setup/common.sh@32 -- # continue 00:03:29.232 22:49:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.232 22:49:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.232 22:49:01 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.232 22:49:01 -- setup/common.sh@32 -- # continue 00:03:29.232 22:49:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.232 22:49:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.232 22:49:01 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.232 22:49:01 -- setup/common.sh@33 -- # echo 0 00:03:29.232 22:49:01 -- setup/common.sh@33 -- # return 0 00:03:29.232 22:49:01 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:29.232 22:49:01 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:29.232 22:49:01 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:29.232 22:49:01 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:29.232 22:49:01 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:29.232 22:49:01 -- setup/common.sh@18 -- # local node=1 00:03:29.232 22:49:01 -- setup/common.sh@19 -- # local var val 00:03:29.232 22:49:01 -- setup/common.sh@20 -- # local mem_f mem 00:03:29.232 22:49:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 
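The xtrace above is setup/common.sh's `get_meminfo` helper at work: it picks `/proc/meminfo` (or the per-node `/sys/devices/system/node/nodeN/meminfo` when a node is given), strips the `Node N ` prefix, then reads `key: value` pairs with `IFS=': '` until the requested key matches, echoing its value. The following is a minimal self-contained re-sketch of that loop, not the real script: the function name is made up, it reads stdin instead of a meminfo file, and the sample data is inlined so it runs anywhere.

```shell
# Hedged sketch of the get_meminfo loop seen in the xtrace above.
# Assumptions: reads stdin rather than /proc/meminfo or the per-node
# /sys/devices/system/node/nodeN/meminfo file, and skips the real
# script's "Node N " prefix stripping (mem=("${mem[@]#Node +([0-9]) }")).
get_meminfo_sketch() {
    local get=$1
    local var val _
    # IFS=': ' splits "HugePages_Surp:    0" into var=HugePages_Surp val=0
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "${val%% *}"   # value only, without a trailing "kB"
            return 0
        fi
    done
    return 1                    # key not found
}

get_meminfo_sketch HugePages_Surp <<'EOF'
MemTotal: 27703108 kB
HugePages_Total: 1024
HugePages_Free: 1024
HugePages_Surp: 0
EOF
# prints: 0
```

This mirrors why the log shows one `continue` per meminfo key: every non-matching line loops back to the next `read` until `HugePages_Surp` is reached.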
00:03:29.232 22:49:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:29.232 22:49:01 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:29.232 22:49:01 -- setup/common.sh@28 -- # mapfile -t mem 00:03:29.232 22:49:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:29.232 22:49:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.232 22:49:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.232 22:49:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27703108 kB' 'MemFree: 13937280 kB' 'MemUsed: 13765828 kB' 'SwapCached: 0 kB' 'Active: 6384748 kB' 'Inactive: 3400572 kB' 'Active(anon): 6154736 kB' 'Inactive(anon): 0 kB' 'Active(file): 230012 kB' 'Inactive(file): 3400572 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9402964 kB' 'Mapped: 94692 kB' 'AnonPages: 382408 kB' 'Shmem: 5772380 kB' 'KernelStack: 9384 kB' 'PageTables: 4812 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 339860 kB' 'Slab: 697540 kB' 'SReclaimable: 339860 kB' 'SUnreclaim: 357680 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[... xtrace trimmed: the read loop `continue`s past every node1 meminfo key (MemTotal, MemFree, ..., HugePages_Total, HugePages_Free) until HugePages_Surp matches ...]
00:03:29.233 22:49:01 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.233 22:49:01 -- setup/common.sh@33 -- # echo 0 00:03:29.233 22:49:01 -- setup/common.sh@33 -- # return 0 00:03:29.233 22:49:01 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:29.233 22:49:01 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:29.233 22:49:01 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:29.233 22:49:01 -- setup/hugepages.sh@127 -- # 
sorted_s[nodes_sys[node]]=1 00:03:29.233 22:49:01 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:29.233 node0=512 expecting 512 00:03:29.233 22:49:01 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:29.233 22:49:01 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:29.233 22:49:01 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:29.233 22:49:01 -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:03:29.233 node1=1024 expecting 1024 00:03:29.233 22:49:01 -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:29.233 00:03:29.233 real 0m3.631s 00:03:29.233 user 0m1.375s 00:03:29.233 sys 0m2.327s 00:03:29.233 22:49:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:29.233 22:49:01 -- common/autotest_common.sh@10 -- # set +x 00:03:29.233 ************************************ 00:03:29.234 END TEST custom_alloc 00:03:29.234 ************************************ 00:03:29.234 22:49:01 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:29.234 22:49:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:29.234 22:49:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:29.234 22:49:01 -- common/autotest_common.sh@10 -- # set +x 00:03:29.234 ************************************ 00:03:29.234 START TEST no_shrink_alloc 00:03:29.234 ************************************ 00:03:29.234 22:49:01 -- common/autotest_common.sh@1104 -- # no_shrink_alloc 00:03:29.234 22:49:01 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:29.234 22:49:01 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:29.234 22:49:01 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:29.234 22:49:01 -- setup/hugepages.sh@51 -- # shift 00:03:29.493 22:49:01 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:29.493 22:49:01 -- setup/hugepages.sh@52 -- # local node_ids 00:03:29.493 22:49:01 -- setup/hugepages.sh@55 -- # (( size >= 
default_hugepages )) 00:03:29.493 22:49:01 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:29.493 22:49:01 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:29.493 22:49:01 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:29.493 22:49:01 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:29.493 22:49:01 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:29.493 22:49:01 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:29.493 22:49:01 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:29.493 22:49:01 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:29.493 22:49:01 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:29.493 22:49:01 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:29.493 22:49:01 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:29.493 22:49:01 -- setup/hugepages.sh@73 -- # return 0 00:03:29.493 22:49:01 -- setup/hugepages.sh@198 -- # setup output 00:03:29.493 22:49:01 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:29.493 22:49:01 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:32.787 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:32.787 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:32.787 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:32.787 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:32.787 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:32.787 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:32.787 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:32.787 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:32.787 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:32.787 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:32.787 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:32.787 0000:80:04.4 (8086 2021): Already using 
the vfio-pci driver 00:03:32.787 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:32.787 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:32.787 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:32.787 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:32.787 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:32.787 22:49:04 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:32.787 22:49:04 -- setup/hugepages.sh@89 -- # local node 00:03:32.787 22:49:04 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:32.787 22:49:04 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:32.787 22:49:04 -- setup/hugepages.sh@92 -- # local surp 00:03:32.787 22:49:04 -- setup/hugepages.sh@93 -- # local resv 00:03:32.787 22:49:04 -- setup/hugepages.sh@94 -- # local anon 00:03:32.787 22:49:04 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:32.787 22:49:04 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:32.787 22:49:04 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:32.787 22:49:04 -- setup/common.sh@18 -- # local node= 00:03:32.787 22:49:04 -- setup/common.sh@19 -- # local var val 00:03:32.787 22:49:04 -- setup/common.sh@20 -- # local mem_f mem 00:03:32.787 22:49:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.787 22:49:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:32.787 22:49:04 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:32.787 22:49:04 -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.787 22:49:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.787 22:49:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.787 22:49:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.787 22:49:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 42416200 kB' 'MemAvailable: 46336196 kB' 'Buffers: 2704 kB' 'Cached: 11820688 kB' 'SwapCached: 0 kB' 'Active: 8685928 
kB' 'Inactive: 3676228 kB' 'Active(anon): 8296064 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676228 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 541540 kB' 'Mapped: 173856 kB' 'Shmem: 7757300 kB' 'KReclaimable: 501396 kB' 'Slab: 1135444 kB' 'SReclaimable: 501396 kB' 'SUnreclaim: 634048 kB' 'KernelStack: 22208 kB' 'PageTables: 8316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 9709784 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216740 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3116404 kB' 'DirectMap2M: 14395392 kB' 'DirectMap1G: 51380224 kB'
[... xtrace trimmed: the read loop `continue`s past every meminfo key (MemTotal, MemFree, ..., Percpu, HardwareCorrupted) until AnonHugePages matches ...]
00:03:32.788 22:49:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.788 22:49:04 -- setup/common.sh@33 -- # echo 0 00:03:32.788 22:49:04 -- setup/common.sh@33 -- # return 0 00:03:32.788 22:49:04 -- setup/hugepages.sh@97 -- # anon=0 00:03:32.788 22:49:04 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:32.788 22:49:04 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:32.788 22:49:04 -- setup/common.sh@18 -- # local node= 00:03:32.788 22:49:04 -- setup/common.sh@19 -- # local var val 00:03:32.788 22:49:04 -- setup/common.sh@20 -- # local mem_f mem 00:03:32.788 22:49:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.788 22:49:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:32.788 22:49:04 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:32.788 22:49:04 -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.788 22:49:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.788 22:49:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.788 22:49:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.788 22:49:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 42417292 kB' 'MemAvailable: 46337288 kB' 'Buffers: 2704 kB' 'Cached: 11820692 kB' 'SwapCached: 0 kB' 'Active: 8685116 kB' 'Inactive: 3676228 kB' 'Active(anon): 8295252 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676228 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 541240 kB' 'Mapped: 173740 kB' 'Shmem: 7757304 kB' 'KReclaimable: 501396 kB' 'Slab: 1135440
kB' 'SReclaimable: 501396 kB' 'SUnreclaim: 634044 kB' 'KernelStack: 22208 kB' 'PageTables: 8308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 9709796 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216708 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3116404 kB' 'DirectMap2M: 14395392 kB' 'DirectMap1G: 51380224 kB' 00:03:32.788 22:49:04 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.788 22:49:04 -- setup/common.sh@32 -- # continue 00:03:32.788 22:49:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.788 22:49:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.788 22:49:04 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.788 22:49:04 -- setup/common.sh@32 -- # continue 00:03:32.788 22:49:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.788 22:49:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.788 22:49:04 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.788 22:49:04 -- setup/common.sh@32 -- # continue 00:03:32.788 22:49:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.788 22:49:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.788 22:49:04 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.788 22:49:04 -- setup/common.sh@32 -- # continue 00:03:32.788 22:49:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.788 22:49:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.788 22:49:04 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.788 22:49:04 -- setup/common.sh@32 -- # 
continue 00:03:32.788 22:49:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.788 22:49:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.788 22:49:04 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.788 22:49:04 -- setup/common.sh@32 -- # continue 00:03:32.788 22:49:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.788 22:49:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.788 22:49:04 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.788 22:49:04 -- setup/common.sh@32 -- # continue 00:03:32.788 22:49:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.788 22:49:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.788 22:49:04 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.788 22:49:04 -- setup/common.sh@32 -- # continue 00:03:32.788 22:49:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.788 22:49:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.788 22:49:04 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.788 22:49:04 -- setup/common.sh@32 -- # continue 00:03:32.788 22:49:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.788 22:49:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.788 22:49:04 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.788 22:49:04 -- setup/common.sh@32 -- # continue 00:03:32.788 22:49:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.788 22:49:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.788 22:49:04 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.788 22:49:04 -- setup/common.sh@32 -- # continue 00:03:32.788 22:49:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.789 22:49:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.789 22:49:04 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.789 22:49:04 -- setup/common.sh@32 -- # continue 00:03:32.789 
22:49:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.789 22:49:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.789 22:49:04 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.789 22:49:04 -- setup/common.sh@32 -- # continue 00:03:32.789 22:49:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.789 22:49:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.789 22:49:04 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.789 22:49:04 -- setup/common.sh@32 -- # continue 00:03:32.789 22:49:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.789 22:49:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.789 22:49:04 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.789 22:49:04 -- setup/common.sh@32 -- # continue 00:03:32.789 22:49:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.789 22:49:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.789 22:49:04 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.789 22:49:04 -- setup/common.sh@32 -- # continue 00:03:32.789 22:49:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.789 22:49:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.789 22:49:04 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.789 22:49:04 -- setup/common.sh@32 -- # continue 00:03:32.789 22:49:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.789 22:49:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.789 22:49:04 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.789 22:49:04 -- setup/common.sh@32 -- # continue 00:03:32.789 22:49:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.789 22:49:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.789 22:49:04 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.789 22:49:04 -- setup/common.sh@32 -- # continue 00:03:32.789 22:49:04 -- setup/common.sh@31 -- # IFS=': ' 
00:03:32.789 22:49:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.789 22:49:04 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.789 22:49:04 -- setup/common.sh@32 -- # continue 00:03:32.789 22:49:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.789 22:49:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.789 22:49:04 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.789 22:49:04 -- setup/common.sh@32 -- # continue 00:03:32.789 22:49:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.789 22:49:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.789 22:49:04 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.789 22:49:04 -- setup/common.sh@32 -- # continue 00:03:32.789 22:49:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.789 22:49:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.789 22:49:04 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.789 22:49:04 -- setup/common.sh@32 -- # continue 00:03:32.789 22:49:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.789 22:49:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.789 22:49:04 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.789 22:49:04 -- setup/common.sh@32 -- # continue 00:03:32.789 22:49:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.789 22:49:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.789 22:49:04 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.789 22:49:04 -- setup/common.sh@32 -- # continue 00:03:32.789 22:49:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.789 22:49:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.789 22:49:04 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.789 22:49:04 -- setup/common.sh@32 -- # continue 00:03:32.789 22:49:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.789 22:49:04 -- setup/common.sh@31 
-- # read -r var val _ 00:03:32.789 22:49:04 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.789 22:49:04 -- setup/common.sh@32 -- # continue 00:03:32.789 22:49:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.789 22:49:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.789 22:49:04 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.789 22:49:04 -- setup/common.sh@32 -- # continue 00:03:32.789 22:49:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.789 22:49:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.789 22:49:04 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.789 22:49:04 -- setup/common.sh@32 -- # continue 00:03:32.789 22:49:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.789 22:49:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.789 22:49:04 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.789 22:49:04 -- setup/common.sh@32 -- # continue 00:03:32.789 22:49:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.789 22:49:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.789 22:49:04 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.789 22:49:04 -- setup/common.sh@32 -- # continue 00:03:32.789 22:49:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.789 22:49:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.789 22:49:04 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.789 22:49:04 -- setup/common.sh@32 -- # continue 00:03:32.789 22:49:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.789 22:49:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.789 22:49:04 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.789 22:49:04 -- setup/common.sh@32 -- # continue 00:03:32.789 22:49:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.789 22:49:04 -- setup/common.sh@31 -- # read -r var val _ 
00:03:32.789 22:49:04 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.789 22:49:04 -- setup/common.sh@32 -- # continue 00:03:32.789 22:49:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.789 22:49:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.789 22:49:04 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.789 22:49:04 -- setup/common.sh@32 -- # continue 00:03:32.789 22:49:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.789 22:49:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.789 22:49:04 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.789 22:49:04 -- setup/common.sh@32 -- # continue 00:03:32.789 22:49:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.789 22:49:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.789 22:49:04 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.789 22:49:04 -- setup/common.sh@32 -- # continue 00:03:32.789 22:49:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.789 22:49:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.789 22:49:04 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.789 22:49:04 -- setup/common.sh@32 -- # continue 00:03:32.789 22:49:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.789 22:49:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.789 22:49:04 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.789 22:49:04 -- setup/common.sh@32 -- # continue 00:03:32.789 22:49:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.789 22:49:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.789 22:49:04 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.789 22:49:04 -- setup/common.sh@32 -- # continue 00:03:32.789 22:49:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.789 22:49:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.789 22:49:04 
-- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.789 22:49:04 -- setup/common.sh@32 -- # continue 00:03:32.789 22:49:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.789 22:49:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.789 22:49:04 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.789 22:49:04 -- setup/common.sh@32 -- # continue 00:03:32.789 22:49:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.789 22:49:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.789 22:49:04 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.789 22:49:04 -- setup/common.sh@32 -- # continue 00:03:32.789 22:49:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.789 22:49:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.789 22:49:04 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.789 22:49:04 -- setup/common.sh@32 -- # continue 00:03:32.789 22:49:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.789 22:49:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.789 22:49:04 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.789 22:49:04 -- setup/common.sh@32 -- # continue 00:03:32.789 22:49:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.789 22:49:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.789 22:49:04 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.789 22:49:04 -- setup/common.sh@32 -- # continue 00:03:32.789 22:49:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.789 22:49:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.789 22:49:04 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.789 22:49:04 -- setup/common.sh@32 -- # continue 00:03:32.789 22:49:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.789 22:49:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.789 22:49:04 -- 
setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.789 22:49:04 -- setup/common.sh@32 -- # continue 00:03:32.789 22:49:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.789 22:49:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.789 22:49:04 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.789 22:49:04 -- setup/common.sh@32 -- # continue 00:03:32.789 22:49:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.789 22:49:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.789 22:49:04 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.789 22:49:04 -- setup/common.sh@32 -- # continue 00:03:32.789 22:49:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.790 22:49:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.790 22:49:04 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.790 22:49:04 -- setup/common.sh@32 -- # continue 00:03:32.790 22:49:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.790 22:49:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.790 22:49:04 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.790 22:49:04 -- setup/common.sh@33 -- # echo 0 00:03:32.790 22:49:04 -- setup/common.sh@33 -- # return 0 00:03:32.790 22:49:04 -- setup/hugepages.sh@99 -- # surp=0 00:03:32.790 22:49:04 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:32.790 22:49:04 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:32.790 22:49:04 -- setup/common.sh@18 -- # local node= 00:03:32.790 22:49:04 -- setup/common.sh@19 -- # local var val 00:03:32.790 22:49:04 -- setup/common.sh@20 -- # local mem_f mem 00:03:32.790 22:49:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.790 22:49:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:32.790 22:49:04 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:32.790 22:49:04 -- setup/common.sh@28 
-- # mapfile -t mem 00:03:32.790 22:49:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.790 22:49:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.790 22:49:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.790 22:49:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 42417336 kB' 'MemAvailable: 46337080 kB' 'Buffers: 2704 kB' 'Cached: 11820692 kB' 'SwapCached: 0 kB' 'Active: 8685116 kB' 'Inactive: 3676228 kB' 'Active(anon): 8295252 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676228 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540924 kB' 'Mapped: 173740 kB' 'Shmem: 7757304 kB' 'KReclaimable: 501396 kB' 'Slab: 1135440 kB' 'SReclaimable: 501396 kB' 'SUnreclaim: 634044 kB' 'KernelStack: 22192 kB' 'PageTables: 8256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 9709812 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216708 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3116404 kB' 'DirectMap2M: 14395392 kB' 'DirectMap1G: 51380224 kB' 00:03:32.790 22:49:04 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.790 22:49:04 -- setup/common.sh@32 -- # continue 00:03:32.790 22:49:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.790 22:49:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.790 22:49:04 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.790 22:49:04 -- setup/common.sh@32 -- # continue 
00:03:32.790 22:49:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.790 22:49:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.790 22:49:04 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.790 22:49:04 -- setup/common.sh@32 -- # continue 00:03:32.790 22:49:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.790 22:49:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.790 22:49:04 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.790 22:49:04 -- setup/common.sh@32 -- # continue 00:03:32.790 22:49:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.790 22:49:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.790 22:49:04 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.790 22:49:04 -- setup/common.sh@32 -- # continue 00:03:32.790 22:49:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.790 22:49:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.790 22:49:04 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.790 22:49:04 -- setup/common.sh@32 -- # continue 00:03:32.790 22:49:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.790 22:49:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.790 22:49:04 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.790 22:49:04 -- setup/common.sh@32 -- # continue 00:03:32.790 22:49:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.790 22:49:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.790 22:49:04 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.790 22:49:04 -- setup/common.sh@32 -- # continue 00:03:32.790 22:49:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.790 22:49:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.790 22:49:04 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.790 22:49:04 -- setup/common.sh@32 -- # continue 00:03:32.790 22:49:04 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:32.790 22:49:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.790 22:49:04 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.790 22:49:04 -- setup/common.sh@32 -- # continue 00:03:32.790 22:49:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.790 22:49:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.790 22:49:04 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.790 22:49:04 -- setup/common.sh@32 -- # continue 00:03:32.790 22:49:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.790 22:49:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.790 22:49:04 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.790 22:49:04 -- setup/common.sh@32 -- # continue 00:03:32.790 22:49:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.790 22:49:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.790 22:49:04 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.790 22:49:04 -- setup/common.sh@32 -- # continue 00:03:32.790 22:49:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.790 22:49:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.790 22:49:04 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.790 22:49:04 -- setup/common.sh@32 -- # continue 00:03:32.790 22:49:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.790 22:49:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.790 22:49:04 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.790 22:49:04 -- setup/common.sh@32 -- # continue 00:03:32.790 22:49:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.790 22:49:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.790 22:49:04 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.790 22:49:04 -- setup/common.sh@32 -- # continue 00:03:32.790 22:49:04 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:32.790 22:49:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.790 22:49:04 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.790 22:49:04 -- setup/common.sh@32 -- # continue 00:03:32.790 22:49:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.790 22:49:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.790 22:49:04 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.790 22:49:04 -- setup/common.sh@32 -- # continue 00:03:32.790 22:49:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.790 22:49:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.790 22:49:04 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.790 22:49:04 -- setup/common.sh@32 -- # continue 00:03:32.790 22:49:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.790 22:49:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.790 22:49:04 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.790 22:49:04 -- setup/common.sh@32 -- # continue 00:03:32.790 22:49:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.790 22:49:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.790 22:49:04 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.790 22:49:04 -- setup/common.sh@32 -- # continue 00:03:32.790 22:49:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.790 22:49:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.790 22:49:04 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.790 22:49:04 -- setup/common.sh@32 -- # continue 00:03:32.790 22:49:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.790 22:49:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.790 22:49:04 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.790 22:49:04 -- setup/common.sh@32 -- # continue 00:03:32.790 22:49:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.790 22:49:04 -- setup/common.sh@31 -- 
# read -r var val _ 00:03:32.790 22:49:04 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.790 22:49:04 -- setup/common.sh@32 -- # continue 00:03:32.790 22:49:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.790 22:49:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.790 22:49:04 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.790 22:49:04 -- setup/common.sh@32 -- # continue 00:03:32.790 22:49:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.790 22:49:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.790 22:49:04 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.790 22:49:04 -- setup/common.sh@32 -- # continue 00:03:32.790 22:49:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.790 22:49:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.790 22:49:04 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.790 22:49:04 -- setup/common.sh@32 -- # continue 00:03:32.790 22:49:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.790 22:49:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.790 22:49:04 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.790 22:49:04 -- setup/common.sh@32 -- # continue 00:03:32.790 22:49:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.790 22:49:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.790 22:49:04 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.790 22:49:04 -- setup/common.sh@32 -- # continue 00:03:32.790 22:49:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.791 22:49:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.791 22:49:04 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.791 22:49:04 -- setup/common.sh@32 -- # continue 00:03:32.791 22:49:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.791 22:49:04 -- setup/common.sh@31 -- # read -r var val _ 
00:03:32.791 22:49:04 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.791 22:49:04 -- setup/common.sh@32 -- # continue 00:03:32.791 22:49:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.791 22:49:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.791 22:49:04 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.791 22:49:04 -- setup/common.sh@32 -- # continue 00:03:32.791 22:49:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.791 22:49:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.791 22:49:04 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.791 22:49:04 -- setup/common.sh@32 -- # continue 00:03:32.791 22:49:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.791 22:49:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.791 22:49:04 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.791 22:49:04 -- setup/common.sh@32 -- # continue 00:03:32.791 22:49:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.791 22:49:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.791 22:49:04 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.791 22:49:04 -- setup/common.sh@32 -- # continue 00:03:32.791 22:49:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.791 22:49:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.791 22:49:04 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.791 22:49:04 -- setup/common.sh@32 -- # continue 00:03:32.791 22:49:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.791 22:49:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.791 22:49:04 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.791 22:49:04 -- setup/common.sh@32 -- # continue 00:03:32.791 22:49:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.791 22:49:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.791 22:49:04 -- 
setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.791 22:49:04 -- setup/common.sh@32 -- # continue 00:03:32.791 22:49:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.791 22:49:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.791 22:49:04 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.791 22:49:04 -- setup/common.sh@32 -- # continue 00:03:32.791 22:49:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.791 22:49:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.791 22:49:04 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.791 22:49:04 -- setup/common.sh@32 -- # continue 00:03:32.791 22:49:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.791 22:49:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.791 22:49:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.791 22:49:04 -- setup/common.sh@32 -- # continue 00:03:32.791 22:49:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.791 22:49:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.791 22:49:04 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.791 22:49:04 -- setup/common.sh@32 -- # continue 00:03:32.791 22:49:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.791 22:49:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.791 22:49:04 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.791 22:49:04 -- setup/common.sh@32 -- # continue 00:03:32.791 22:49:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.791 22:49:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.791 22:49:04 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.791 22:49:04 -- setup/common.sh@32 -- # continue 00:03:32.791 22:49:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.791 22:49:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.791 22:49:04 -- 
setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.791 22:49:04 -- setup/common.sh@32 -- # continue 00:03:32.791 22:49:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.791 22:49:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.791 22:49:04 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.791 22:49:04 -- setup/common.sh@32 -- # continue 00:03:32.791 22:49:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.791 22:49:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.791 22:49:04 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.791 22:49:04 -- setup/common.sh@32 -- # continue 00:03:32.791 22:49:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.791 22:49:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.791 22:49:04 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.791 22:49:04 -- setup/common.sh@32 -- # continue 00:03:32.791 22:49:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.791 22:49:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.791 22:49:04 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.791 22:49:04 -- setup/common.sh@32 -- # continue 00:03:32.791 22:49:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.791 22:49:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.791 22:49:04 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.791 22:49:04 -- setup/common.sh@32 -- # continue 00:03:32.791 22:49:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.791 22:49:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.791 22:49:04 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.791 22:49:04 -- setup/common.sh@33 -- # echo 0 00:03:32.791 22:49:04 -- setup/common.sh@33 -- # return 0 00:03:32.791 22:49:04 -- setup/hugepages.sh@100 -- # resv=0 00:03:32.791 22:49:04 -- setup/hugepages.sh@102 -- # 
echo nr_hugepages=1024 00:03:32.791 nr_hugepages=1024 00:03:32.791 22:49:04 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:32.791 resv_hugepages=0 00:03:32.791 22:49:04 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:32.791 surplus_hugepages=0 00:03:32.791 22:49:04 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:32.791 anon_hugepages=0 00:03:32.791 22:49:04 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:32.791 22:49:04 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:32.791 22:49:04 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:32.791 22:49:04 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:32.791 22:49:04 -- setup/common.sh@18 -- # local node= 00:03:32.791 22:49:04 -- setup/common.sh@19 -- # local var val 00:03:32.791 22:49:04 -- setup/common.sh@20 -- # local mem_f mem 00:03:32.791 22:49:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.791 22:49:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:32.791 22:49:04 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:32.791 22:49:04 -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.791 22:49:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.791 22:49:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.791 22:49:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.791 22:49:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 42416380 kB' 'MemAvailable: 46336376 kB' 'Buffers: 2704 kB' 'Cached: 11820728 kB' 'SwapCached: 0 kB' 'Active: 8684848 kB' 'Inactive: 3676228 kB' 'Active(anon): 8294984 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676228 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540888 kB' 'Mapped: 173740 kB' 'Shmem: 7757340 kB' 'KReclaimable: 501396 kB' 'Slab: 1135440 kB' 
'SReclaimable: 501396 kB' 'SUnreclaim: 634044 kB' 'KernelStack: 22192 kB' 'PageTables: 8256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 9709828 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216708 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3116404 kB' 'DirectMap2M: 14395392 kB' 'DirectMap1G: 51380224 kB' 00:03:32.791 22:49:04 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.791 22:49:04 -- setup/common.sh@32 -- # continue 00:03:32.791 22:49:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.791 22:49:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.791 22:49:04 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.791 22:49:04 -- setup/common.sh@32 -- # continue 00:03:32.791 22:49:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.791 22:49:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.791 22:49:04 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.791 22:49:05 -- setup/common.sh@32 -- # continue 00:03:32.791 22:49:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.791 22:49:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.791 22:49:05 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.791 22:49:05 -- setup/common.sh@32 -- # continue 00:03:32.791 22:49:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.791 22:49:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.791 22:49:05 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.791 22:49:05 -- setup/common.sh@32 -- # 
continue 00:03:32.791 22:49:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.791 22:49:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.791 22:49:05 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.792 22:49:05 -- setup/common.sh@32 -- # continue 00:03:32.792 22:49:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.792 22:49:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.792 22:49:05 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.792 22:49:05 -- setup/common.sh@32 -- # continue 00:03:32.792 22:49:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.792 22:49:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.792 22:49:05 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.792 22:49:05 -- setup/common.sh@32 -- # continue 00:03:32.792 22:49:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.792 22:49:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.792 22:49:05 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.792 22:49:05 -- setup/common.sh@32 -- # continue 00:03:32.792 22:49:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.792 22:49:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.792 22:49:05 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.792 22:49:05 -- setup/common.sh@32 -- # continue 00:03:32.792 22:49:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.792 22:49:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.792 22:49:05 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.792 22:49:05 -- setup/common.sh@32 -- # continue 00:03:32.792 22:49:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.792 22:49:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.792 22:49:05 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.792 22:49:05 -- setup/common.sh@32 -- # continue 
00:03:32.792 22:49:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.792 22:49:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.792 22:49:05 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.792 22:49:05 -- setup/common.sh@32 -- # continue 00:03:32.792 22:49:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.792 22:49:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.792 22:49:05 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.792 22:49:05 -- setup/common.sh@32 -- # continue 00:03:32.792 22:49:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.792 22:49:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.792 22:49:05 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.792 22:49:05 -- setup/common.sh@32 -- # continue 00:03:32.792 22:49:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.792 22:49:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.792 22:49:05 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.792 22:49:05 -- setup/common.sh@32 -- # continue 00:03:32.792 22:49:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.792 22:49:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.792 22:49:05 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.792 22:49:05 -- setup/common.sh@32 -- # continue 00:03:32.792 22:49:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.792 22:49:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.792 22:49:05 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.792 22:49:05 -- setup/common.sh@32 -- # continue 00:03:32.792 22:49:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.792 22:49:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.792 22:49:05 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.792 22:49:05 -- setup/common.sh@32 -- # continue 00:03:32.792 22:49:05 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:32.792 22:49:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.792 22:49:05 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.792 22:49:05 -- setup/common.sh@32 -- # continue 00:03:32.792 22:49:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.792 22:49:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.792 22:49:05 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.792 22:49:05 -- setup/common.sh@32 -- # continue 00:03:32.792 22:49:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.792 22:49:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.792 22:49:05 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.792 22:49:05 -- setup/common.sh@32 -- # continue 00:03:32.792 22:49:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.792 22:49:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.792 22:49:05 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.792 22:49:05 -- setup/common.sh@32 -- # continue 00:03:32.792 22:49:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.792 22:49:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.792 22:49:05 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.792 22:49:05 -- setup/common.sh@32 -- # continue 00:03:32.792 22:49:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.792 22:49:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.792 22:49:05 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.792 22:49:05 -- setup/common.sh@32 -- # continue 00:03:32.792 22:49:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.792 22:49:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.792 22:49:05 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.792 22:49:05 -- setup/common.sh@32 -- # continue 00:03:32.792 22:49:05 -- setup/common.sh@31 -- # IFS=': 
' 00:03:32.792 22:49:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.792 22:49:05 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.792 22:49:05 -- setup/common.sh@32 -- # continue 00:03:32.792 22:49:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.792 22:49:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.792 22:49:05 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.792 22:49:05 -- setup/common.sh@32 -- # continue 00:03:32.792 22:49:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.792 22:49:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.792 22:49:05 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.792 22:49:05 -- setup/common.sh@32 -- # continue 00:03:32.792 22:49:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.792 22:49:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.792 22:49:05 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.792 22:49:05 -- setup/common.sh@32 -- # continue 00:03:32.792 22:49:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.792 22:49:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.792 22:49:05 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.792 22:49:05 -- setup/common.sh@32 -- # continue 00:03:32.792 22:49:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.792 22:49:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.792 22:49:05 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.792 22:49:05 -- setup/common.sh@32 -- # continue 00:03:32.792 22:49:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.792 22:49:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.792 22:49:05 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.792 22:49:05 -- setup/common.sh@32 -- # continue 00:03:32.792 22:49:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.792 
22:49:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.792 22:49:05 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.792 22:49:05 -- setup/common.sh@32 -- # continue 00:03:32.792 22:49:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.792 22:49:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.792 22:49:05 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.792 22:49:05 -- setup/common.sh@32 -- # continue 00:03:32.792 22:49:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.792 22:49:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.792 22:49:05 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.792 22:49:05 -- setup/common.sh@32 -- # continue 00:03:32.792 22:49:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.792 22:49:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.792 22:49:05 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.792 22:49:05 -- setup/common.sh@32 -- # continue 00:03:32.792 22:49:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.792 22:49:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.792 22:49:05 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.792 22:49:05 -- setup/common.sh@32 -- # continue 00:03:32.792 22:49:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.792 22:49:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.793 22:49:05 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.793 22:49:05 -- setup/common.sh@32 -- # continue 00:03:32.793 22:49:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.793 22:49:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.793 22:49:05 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.793 22:49:05 -- setup/common.sh@32 -- # continue 00:03:32.793 22:49:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.793 
22:49:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.793 22:49:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.793 22:49:05 -- setup/common.sh@32 -- # continue 00:03:32.793 22:49:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.793 22:49:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.793 22:49:05 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.793 22:49:05 -- setup/common.sh@32 -- # continue 00:03:32.793 22:49:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.793 22:49:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.793 22:49:05 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.793 22:49:05 -- setup/common.sh@32 -- # continue 00:03:32.793 22:49:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.793 22:49:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.793 22:49:05 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.793 22:49:05 -- setup/common.sh@32 -- # continue 00:03:32.793 22:49:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.793 22:49:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.793 22:49:05 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.793 22:49:05 -- setup/common.sh@32 -- # continue 00:03:32.793 22:49:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.793 22:49:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.793 22:49:05 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.793 22:49:05 -- setup/common.sh@32 -- # continue 00:03:32.793 22:49:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.793 22:49:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.793 22:49:05 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.793 22:49:05 -- setup/common.sh@32 -- # continue 00:03:32.793 22:49:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.793 
22:49:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.793 22:49:05 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.793 22:49:05 -- setup/common.sh@32 -- # continue 00:03:32.793 22:49:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.793 22:49:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.793 22:49:05 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.793 22:49:05 -- setup/common.sh@33 -- # echo 1024 00:03:32.793 22:49:05 -- setup/common.sh@33 -- # return 0 00:03:32.793 22:49:05 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:32.793 22:49:05 -- setup/hugepages.sh@112 -- # get_nodes 00:03:32.793 22:49:05 -- setup/hugepages.sh@27 -- # local node 00:03:32.793 22:49:05 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:32.793 22:49:05 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:32.793 22:49:05 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:32.793 22:49:05 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:32.793 22:49:05 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:32.793 22:49:05 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:32.793 22:49:05 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:32.793 22:49:05 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:32.793 22:49:05 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:32.793 22:49:05 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:32.793 22:49:05 -- setup/common.sh@18 -- # local node=0 00:03:32.793 22:49:05 -- setup/common.sh@19 -- # local var val 00:03:32.793 22:49:05 -- setup/common.sh@20 -- # local mem_f mem 00:03:32.793 22:49:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.793 22:49:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:32.793 22:49:05 -- 
setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:32.793 22:49:05 -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.793 22:49:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.793 22:49:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.793 22:49:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.793 22:49:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 26366044 kB' 'MemUsed: 6226040 kB' 'SwapCached: 0 kB' 'Active: 2299436 kB' 'Inactive: 275656 kB' 'Active(anon): 2139584 kB' 'Inactive(anon): 0 kB' 'Active(file): 159852 kB' 'Inactive(file): 275656 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2420432 kB' 'Mapped: 78796 kB' 'AnonPages: 157816 kB' 'Shmem: 1984924 kB' 'KernelStack: 12840 kB' 'PageTables: 3516 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 161536 kB' 'Slab: 437696 kB' 'SReclaimable: 161536 kB' 'SUnreclaim: 276160 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:32.793 22:49:05 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.793 22:49:05 -- setup/common.sh@32 -- # continue 00:03:32.793 22:49:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.793 22:49:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.793 22:49:05 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.793 22:49:05 -- setup/common.sh@32 -- # continue 00:03:32.793 22:49:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.793 22:49:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.793 22:49:05 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.793 22:49:05 -- setup/common.sh@32 -- # continue 00:03:32.793 22:49:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.793 
22:49:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.793 22:49:05 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.793 22:49:05 -- setup/common.sh@32 -- # continue 00:03:32.793 22:49:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.793 22:49:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.793 22:49:05 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.793 22:49:05 -- setup/common.sh@32 -- # continue 00:03:32.793 22:49:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.793 22:49:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.793 22:49:05 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.793 22:49:05 -- setup/common.sh@32 -- # continue 00:03:32.793 22:49:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.793 22:49:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.793 22:49:05 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.793 22:49:05 -- setup/common.sh@32 -- # continue 00:03:32.793 22:49:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.793 22:49:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.793 22:49:05 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.793 22:49:05 -- setup/common.sh@32 -- # continue 00:03:32.793 22:49:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.793 22:49:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.793 22:49:05 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.793 22:49:05 -- setup/common.sh@32 -- # continue 00:03:32.793 22:49:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.793 22:49:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.793 22:49:05 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.793 22:49:05 -- setup/common.sh@32 -- # continue 00:03:32.793 22:49:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.793 22:49:05 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:32.793 22:49:05 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.793 22:49:05 -- setup/common.sh@32 -- # continue 00:03:32.793 22:49:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.793 22:49:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.793 22:49:05 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.793 22:49:05 -- setup/common.sh@32 -- # continue 00:03:32.793 22:49:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.793 22:49:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.793 22:49:05 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.793 22:49:05 -- setup/common.sh@32 -- # continue 00:03:32.793 22:49:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.793 22:49:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.793 22:49:05 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.793 22:49:05 -- setup/common.sh@32 -- # continue 00:03:32.793 22:49:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.793 22:49:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.793 22:49:05 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.793 22:49:05 -- setup/common.sh@32 -- # continue 00:03:32.793 22:49:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.793 22:49:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.793 22:49:05 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.793 22:49:05 -- setup/common.sh@32 -- # continue 00:03:32.793 22:49:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.793 22:49:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.793 22:49:05 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.793 22:49:05 -- setup/common.sh@32 -- # continue 00:03:32.793 22:49:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.793 22:49:05 -- setup/common.sh@31 -- # read -r var val _ 
00:03:32.793 22:49:05 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.793 22:49:05 -- setup/common.sh@32 -- # continue 00:03:32.793 22:49:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.793 22:49:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.793 22:49:05 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.793 22:49:05 -- setup/common.sh@32 -- # continue 00:03:32.793 22:49:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.793 22:49:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.793 22:49:05 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.793 22:49:05 -- setup/common.sh@32 -- # continue 00:03:32.793 22:49:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.794 22:49:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.794 22:49:05 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.794 22:49:05 -- setup/common.sh@32 -- # continue 00:03:32.794 22:49:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.794 22:49:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.794 22:49:05 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.794 22:49:05 -- setup/common.sh@32 -- # continue 00:03:32.794 22:49:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.794 22:49:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.794 22:49:05 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.794 22:49:05 -- setup/common.sh@32 -- # continue 00:03:32.794 22:49:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.794 22:49:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.794 22:49:05 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.794 22:49:05 -- setup/common.sh@32 -- # continue 00:03:32.794 22:49:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.794 22:49:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.794 22:49:05 -- 
setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.794 22:49:05 -- setup/common.sh@32 -- # continue 00:03:32.794 22:49:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.794 22:49:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.794 22:49:05 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.794 22:49:05 -- setup/common.sh@32 -- # continue 00:03:32.794 22:49:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.794 22:49:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.794 22:49:05 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.794 22:49:05 -- setup/common.sh@32 -- # continue 00:03:32.794 22:49:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.794 22:49:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.794 22:49:05 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.794 22:49:05 -- setup/common.sh@32 -- # continue 00:03:32.794 22:49:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.794 22:49:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.794 22:49:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.794 22:49:05 -- setup/common.sh@32 -- # continue 00:03:32.794 22:49:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.794 22:49:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.794 22:49:05 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.794 22:49:05 -- setup/common.sh@32 -- # continue 00:03:32.794 22:49:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.794 22:49:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.794 22:49:05 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.794 22:49:05 -- setup/common.sh@32 -- # continue 00:03:32.794 22:49:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.794 22:49:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.794 22:49:05 -- setup/common.sh@32 -- # 
[[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.794 22:49:05 -- setup/common.sh@32 -- # continue 00:03:32.794 22:49:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.794 22:49:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.794 22:49:05 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.794 22:49:05 -- setup/common.sh@32 -- # continue 00:03:32.794 22:49:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.794 22:49:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.794 22:49:05 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.794 22:49:05 -- setup/common.sh@32 -- # continue 00:03:32.794 22:49:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.794 22:49:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.794 22:49:05 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.794 22:49:05 -- setup/common.sh@32 -- # continue 00:03:32.794 22:49:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.794 22:49:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.794 22:49:05 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.794 22:49:05 -- setup/common.sh@32 -- # continue 00:03:32.794 22:49:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.794 22:49:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.794 22:49:05 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.794 22:49:05 -- setup/common.sh@33 -- # echo 0 00:03:32.794 22:49:05 -- setup/common.sh@33 -- # return 0 00:03:32.794 22:49:05 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:32.794 22:49:05 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:32.794 22:49:05 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:32.794 22:49:05 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:32.794 22:49:05 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 
1024' 00:03:32.794 node0=1024 expecting 1024 00:03:32.794 22:49:05 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:32.794 22:49:05 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:32.794 22:49:05 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:32.794 22:49:05 -- setup/hugepages.sh@202 -- # setup output 00:03:32.794 22:49:05 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:32.794 22:49:05 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:36.083 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:36.083 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:36.083 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:36.083 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:36.083 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:36.083 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:36.083 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:36.083 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:36.083 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:36.083 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:36.083 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:36.083 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:36.083 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:36.083 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:36.083 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:36.083 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:36.083 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:36.083 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:36.083 22:49:08 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:03:36.083 22:49:08 -- setup/hugepages.sh@89 -- # local node 00:03:36.083 22:49:08 -- 
setup/hugepages.sh@90 -- # local sorted_t 00:03:36.083 22:49:08 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:36.083 22:49:08 -- setup/hugepages.sh@92 -- # local surp 00:03:36.083 22:49:08 -- setup/hugepages.sh@93 -- # local resv 00:03:36.083 22:49:08 -- setup/hugepages.sh@94 -- # local anon 00:03:36.083 22:49:08 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:36.083 22:49:08 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:36.083 22:49:08 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:36.083 22:49:08 -- setup/common.sh@18 -- # local node= 00:03:36.083 22:49:08 -- setup/common.sh@19 -- # local var val 00:03:36.083 22:49:08 -- setup/common.sh@20 -- # local mem_f mem 00:03:36.083 22:49:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.083 22:49:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:36.083 22:49:08 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:36.083 22:49:08 -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.083 22:49:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.083 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.083 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.083 22:49:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 42428764 kB' 'MemAvailable: 46348728 kB' 'Buffers: 2704 kB' 'Cached: 11820804 kB' 'SwapCached: 0 kB' 'Active: 8687252 kB' 'Inactive: 3676228 kB' 'Active(anon): 8297388 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676228 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 542900 kB' 'Mapped: 173832 kB' 'Shmem: 7757416 kB' 'KReclaimable: 501364 kB' 'Slab: 1135064 kB' 'SReclaimable: 501364 kB' 'SUnreclaim: 633700 kB' 'KernelStack: 22240 kB' 'PageTables: 8440 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 
'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 9713692 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216660 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3116404 kB' 'DirectMap2M: 14395392 kB' 'DirectMap1G: 51380224 kB' 00:03:36.083 22:49:08 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.083 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.083 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.083 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.083 22:49:08 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.083 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.083 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.083 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.083 22:49:08 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.083 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.083 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.083 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.083 22:49:08 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.083 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.083 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.083 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.083 22:49:08 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.083 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.083 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.083 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.083 22:49:08 -- 
setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.083 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.083 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.083 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.083 22:49:08 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.083 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.083 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.083 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.083 22:49:08 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.083 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.083 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.083 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.083 22:49:08 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.083 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.083 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.083 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.083 22:49:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.083 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.083 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.083 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.083 22:49:08 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.083 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.083 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.083 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.083 22:49:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.083 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.083 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.083 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.083 22:49:08 -- setup/common.sh@32 -- # [[ Unevictable 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.083 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.083 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.083 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.083 22:49:08 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.083 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.083 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.083 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.083 22:49:08 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.083 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.083 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.083 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.083 22:49:08 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.083 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.083 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.083 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.083 22:49:08 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.083 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.083 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.083 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.083 22:49:08 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.083 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.083 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.083 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.083 22:49:08 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.083 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.083 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.083 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.083 22:49:08 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.084 22:49:08 -- 
setup/common.sh@32 -- # continue 00:03:36.084 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.084 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.084 22:49:08 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.084 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.084 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.084 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.084 22:49:08 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.084 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.084 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.084 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.084 22:49:08 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.084 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.084 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.084 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.084 22:49:08 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.084 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.084 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.084 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.084 22:49:08 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.084 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.084 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.084 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.084 22:49:08 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.084 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.084 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.084 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.084 22:49:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.084 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.084 22:49:08 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:36.084 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.084 22:49:08 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.084 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.084 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.084 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.084 22:49:08 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.084 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.084 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.084 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.084 22:49:08 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.084 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.084 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.084 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.084 22:49:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.084 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.084 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.084 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.084 22:49:08 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.084 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.084 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.084 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.084 22:49:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.084 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.084 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.084 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.084 22:49:08 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.084 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.084 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 
00:03:36.084 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.084 22:49:08 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.084 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.084 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.084 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.084 22:49:08 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.084 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.084 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.084 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.084 22:49:08 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.084 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.084 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.084 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.084 22:49:08 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.084 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.084 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.084 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.084 22:49:08 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.084 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.084 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.084 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.084 22:49:08 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.084 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.084 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.084 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.084 22:49:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.084 22:49:08 -- setup/common.sh@33 -- # echo 0 00:03:36.084 22:49:08 -- setup/common.sh@33 -- # return 0 00:03:36.084 22:49:08 -- 
setup/hugepages.sh@97 -- # anon=0 00:03:36.084 22:49:08 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:36.084 22:49:08 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:36.084 22:49:08 -- setup/common.sh@18 -- # local node= 00:03:36.084 22:49:08 -- setup/common.sh@19 -- # local var val 00:03:36.084 22:49:08 -- setup/common.sh@20 -- # local mem_f mem 00:03:36.084 22:49:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.084 22:49:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:36.084 22:49:08 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:36.084 22:49:08 -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.084 22:49:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.084 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.084 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.084 22:49:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 42428656 kB' 'MemAvailable: 46348620 kB' 'Buffers: 2704 kB' 'Cached: 11820808 kB' 'SwapCached: 0 kB' 'Active: 8686840 kB' 'Inactive: 3676228 kB' 'Active(anon): 8296976 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676228 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 542440 kB' 'Mapped: 173820 kB' 'Shmem: 7757420 kB' 'KReclaimable: 501364 kB' 'Slab: 1135064 kB' 'SReclaimable: 501364 kB' 'SUnreclaim: 633700 kB' 'KernelStack: 22272 kB' 'PageTables: 8136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 9713464 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216692 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3116404 kB' 'DirectMap2M: 14395392 kB' 'DirectMap1G: 51380224 kB' 00:03:36.084 22:49:08 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.084 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.084 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.084 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.084 22:49:08 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.084 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.084 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.084 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.084 22:49:08 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.084 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.084 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.084 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.084 22:49:08 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.084 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.084 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.084 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.084 22:49:08 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.084 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.084 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.084 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.084 22:49:08 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.084 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.084 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.084 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.084 22:49:08 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.084 
22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.084 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.084 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.084 22:49:08 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.084 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.084 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.084 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.084 22:49:08 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.084 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.084 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.084 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.084 22:49:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.084 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.084 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.084 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.084 22:49:08 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.084 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.084 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.084 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.085 22:49:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.085 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.085 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.085 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.085 22:49:08 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.085 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.085 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.085 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.085 22:49:08 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.085 22:49:08 -- 
setup/common.sh@32 -- # continue 00:03:36.085 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.085 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.085 22:49:08 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.085 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.085 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.085 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.085 22:49:08 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.085 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.085 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.085 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.085 22:49:08 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.085 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.085 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.085 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.085 22:49:08 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.085 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.085 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.085 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.085 22:49:08 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.085 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.085 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.085 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.085 22:49:08 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.085 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.085 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.085 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.085 22:49:08 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.085 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.085 
22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.085 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.085 22:49:08 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.085 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.085 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.085 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.085 22:49:08 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.085 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.085 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.085 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.085 22:49:08 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.085 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.085 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.085 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.085 22:49:08 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.085 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.085 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.085 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.085 22:49:08 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.085 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.085 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.085 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.085 22:49:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.085 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.085 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.085 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.085 22:49:08 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.085 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.085 22:49:08 -- setup/common.sh@31 -- # IFS=': 
' 00:03:36.085 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.085 22:49:08 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.085 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.085 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.085 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.085 22:49:08 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.085 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.085 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.085 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.085 22:49:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.085 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.085 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.085 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.085 22:49:08 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.085 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.085 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.085 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.085 22:49:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.085 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.085 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.085 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.085 22:49:08 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.085 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.085 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.085 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.085 22:49:08 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.085 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.085 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.085 22:49:08 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:36.085 22:49:08 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.085 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.085 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.085 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.085 22:49:08 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.085 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.085 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.085 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.085 22:49:08 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.085 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.085 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.085 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.085 22:49:08 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.085 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.085 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.085 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.085 22:49:08 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.085 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.085 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.085 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.085 22:49:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.085 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.085 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.085 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.085 22:49:08 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.085 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.085 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.085 22:49:08 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:36.085 22:49:08 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.085 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.085 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.085 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.085 22:49:08 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.085 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.085 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.085 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.085 22:49:08 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.085 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.085 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.085 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.085 22:49:08 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.085 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.085 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.085 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.085 22:49:08 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.085 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.085 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.085 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.085 22:49:08 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.085 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.085 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.085 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.085 22:49:08 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.085 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.085 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.085 22:49:08 -- setup/common.sh@31 -- 
# read -r var val _ 00:03:36.085 22:49:08 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.085 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.085 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.085 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.085 22:49:08 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.085 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.085 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.085 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.085 22:49:08 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.086 22:49:08 -- setup/common.sh@33 -- # echo 0 00:03:36.086 22:49:08 -- setup/common.sh@33 -- # return 0 00:03:36.086 22:49:08 -- setup/hugepages.sh@99 -- # surp=0 00:03:36.086 22:49:08 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:36.086 22:49:08 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:36.086 22:49:08 -- setup/common.sh@18 -- # local node= 00:03:36.086 22:49:08 -- setup/common.sh@19 -- # local var val 00:03:36.086 22:49:08 -- setup/common.sh@20 -- # local mem_f mem 00:03:36.086 22:49:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.086 22:49:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:36.086 22:49:08 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:36.086 22:49:08 -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.086 22:49:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.086 22:49:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 42428776 kB' 'MemAvailable: 46348740 kB' 'Buffers: 2704 kB' 'Cached: 11820808 kB' 'SwapCached: 0 kB' 'Active: 8686700 kB' 'Inactive: 3676228 kB' 'Active(anon): 8296836 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676228 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 
'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 542320 kB' 'Mapped: 173820 kB' 'Shmem: 7757420 kB' 'KReclaimable: 501364 kB' 'Slab: 1135064 kB' 'SReclaimable: 501364 kB' 'SUnreclaim: 633700 kB' 'KernelStack: 22336 kB' 'PageTables: 8448 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 9714996 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216740 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3116404 kB' 'DirectMap2M: 14395392 kB' 'DirectMap1G: 51380224 kB' 00:03:36.086 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.086 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.086 22:49:08 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.086 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.086 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.086 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.086 22:49:08 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.086 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.086 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.086 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.086 22:49:08 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.086 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.086 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.086 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.086 22:49:08 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.086 
22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.086 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.086 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.086 22:49:08 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.086 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.086
[... identical IFS=': ' / read -r / [[ key == HugePages_Rsvd ]] / continue trace repeats for every remaining /proc/meminfo key (SwapCached through HugePages_Free) ...]
00:03:36.348 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.348 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.348 22:49:08 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.348 22:49:08 -- setup/common.sh@33 -- # echo 0 00:03:36.348 22:49:08 -- setup/common.sh@33 -- # return 0 00:03:36.348 22:49:08 -- setup/hugepages.sh@100 -- # resv=0 00:03:36.348 22:49:08 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:36.348 nr_hugepages=1024 00:03:36.348 22:49:08 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:36.348 resv_hugepages=0 00:03:36.348 22:49:08 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:36.348 surplus_hugepages=0 00:03:36.348 22:49:08 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:36.348 anon_hugepages=0 00:03:36.348 22:49:08 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:36.348 22:49:08 -- setup/hugepages.sh@109 -- # (( 1024
== nr_hugepages )) 00:03:36.348 22:49:08 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:36.348 22:49:08 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:36.348 22:49:08 -- setup/common.sh@18 -- # local node= 00:03:36.348 22:49:08 -- setup/common.sh@19 -- # local var val 00:03:36.348 22:49:08 -- setup/common.sh@20 -- # local mem_f mem 00:03:36.348 22:49:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.348 22:49:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:36.348 22:49:08 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:36.348 22:49:08 -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.348 22:49:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.348 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.348 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.348 22:49:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 42426920 kB' 'MemAvailable: 46346884 kB' 'Buffers: 2704 kB' 'Cached: 11820836 kB' 'SwapCached: 0 kB' 'Active: 8686368 kB' 'Inactive: 3676228 kB' 'Active(anon): 8296504 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676228 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 542344 kB' 'Mapped: 173736 kB' 'Shmem: 7757448 kB' 'KReclaimable: 501364 kB' 'Slab: 1135072 kB' 'SReclaimable: 501364 kB' 'SUnreclaim: 633708 kB' 'KernelStack: 22320 kB' 'PageTables: 8624 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 9715012 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216820 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 
1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3116404 kB' 'DirectMap2M: 14395392 kB' 'DirectMap1G: 51380224 kB' 00:03:36.348 22:49:08 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.348 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.348
[... identical IFS=': ' / read -r / [[ key == HugePages_Total ]] / continue trace repeats for every remaining /proc/meminfo key (MemFree through Unaccepted) ...]
00:03:36.350 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.350 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.350 22:49:08 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.350 22:49:08 -- setup/common.sh@33 -- # echo 1024 00:03:36.350 22:49:08 -- setup/common.sh@33 -- # return 0 00:03:36.350 22:49:08 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:36.350 22:49:08 -- setup/hugepages.sh@112 -- # get_nodes 00:03:36.350 22:49:08 -- setup/hugepages.sh@27 -- # local node 00:03:36.350 22:49:08 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:36.350 22:49:08 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:36.350 22:49:08 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:36.350 22:49:08 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:36.350 22:49:08 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:36.350 22:49:08 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:36.350 22:49:08 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:36.350 22:49:08 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:36.350 22:49:08 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:36.350 22:49:08 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:36.350 22:49:08 -- setup/common.sh@18 -- # local node=0 00:03:36.350 22:49:08 -- setup/common.sh@19 -- # local var val 00:03:36.350 22:49:08 -- setup/common.sh@20 -- # local mem_f mem 00:03:36.350 22:49:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.350 22:49:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:36.350 22:49:08 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:36.350 22:49:08 -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.350 22:49:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.350 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.350 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.350 22:49:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 26369700 kB' 'MemUsed: 6222384 kB' 'SwapCached: 0 kB' 'Active: 2299644 kB' 'Inactive: 275656 kB'
'Active(anon): 2139792 kB' 'Inactive(anon): 0 kB' 'Active(file): 159852 kB' 'Inactive(file): 275656 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2420500 kB' 'Mapped: 78804 kB' 'AnonPages: 157916 kB' 'Shmem: 1984992 kB' 'KernelStack: 12840 kB' 'PageTables: 3568 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 161504 kB' 'Slab: 437668 kB' 'SReclaimable: 161504 kB' 'SUnreclaim: 276164 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:36.350 22:49:08 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.350 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.350
[... identical IFS=': ' / read -r / [[ key == HugePages_Surp ]] / continue trace repeats for each node0 meminfo key (MemFree through KernelStack), log truncated here ...]
00:03:36.350 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.350 22:49:08 -- setup/common.sh@31 -- # read -r var
val _ 00:03:36.350 22:49:08 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.350 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.350 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.350 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.350 22:49:08 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.350 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.350 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.350 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.350 22:49:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.350 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.350 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.350 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.350 22:49:08 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.350 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.350 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.350 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.350 22:49:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.350 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.350 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.350 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.350 22:49:08 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.350 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.350 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.350 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.350 22:49:08 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.350 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.350 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.350 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.350 22:49:08 -- 
setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.350 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.350 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.350 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.350 22:49:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.350 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.350 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.350 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.350 22:49:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.350 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.350 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.350 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.350 22:49:08 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.350 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.350 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.350 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.351 22:49:08 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.351 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.351 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.351 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.351 22:49:08 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.351 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.351 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.351 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.351 22:49:08 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.351 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.351 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.351 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.351 22:49:08 -- 
setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.351 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.351 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.351 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.351 22:49:08 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.351 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.351 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.351 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.351 22:49:08 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.351 22:49:08 -- setup/common.sh@32 -- # continue 00:03:36.351 22:49:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.351 22:49:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.351 22:49:08 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.351 22:49:08 -- setup/common.sh@33 -- # echo 0 00:03:36.351 22:49:08 -- setup/common.sh@33 -- # return 0 00:03:36.351 22:49:08 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:36.351 22:49:08 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:36.351 22:49:08 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:36.351 22:49:08 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:36.351 22:49:08 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:36.351 node0=1024 expecting 1024 00:03:36.351 22:49:08 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:36.351 00:03:36.351 real 0m6.959s 00:03:36.351 user 0m2.574s 00:03:36.351 sys 0m4.525s 00:03:36.351 22:49:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:36.351 22:49:08 -- common/autotest_common.sh@10 -- # set +x 00:03:36.351 ************************************ 00:03:36.351 END TEST no_shrink_alloc 00:03:36.351 ************************************ 00:03:36.351 22:49:08 -- setup/hugepages.sh@217 -- # 
clear_hp 00:03:36.351 22:49:08 -- setup/hugepages.sh@37 -- # local node hp 00:03:36.351 22:49:08 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:36.351 22:49:08 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:36.351 22:49:08 -- setup/hugepages.sh@41 -- # echo 0 00:03:36.351 22:49:08 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:36.351 22:49:08 -- setup/hugepages.sh@41 -- # echo 0 00:03:36.351 22:49:08 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:36.351 22:49:08 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:36.351 22:49:08 -- setup/hugepages.sh@41 -- # echo 0 00:03:36.351 22:49:08 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:36.351 22:49:08 -- setup/hugepages.sh@41 -- # echo 0 00:03:36.351 22:49:08 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:36.351 22:49:08 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:36.351 00:03:36.351 real 0m27.202s 00:03:36.351 user 0m9.718s 00:03:36.351 sys 0m16.540s 00:03:36.351 22:49:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:36.351 22:49:08 -- common/autotest_common.sh@10 -- # set +x 00:03:36.351 ************************************ 00:03:36.351 END TEST hugepages 00:03:36.351 ************************************ 00:03:36.351 22:49:08 -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:36.351 22:49:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:36.351 22:49:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:36.351 22:49:08 -- common/autotest_common.sh@10 -- # set +x 00:03:36.351 ************************************ 00:03:36.351 START TEST driver 00:03:36.351 ************************************ 00:03:36.351 22:49:08 -- 
common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:36.610 * Looking for test storage... 00:03:36.610 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:36.610 22:49:08 -- setup/driver.sh@68 -- # setup reset 00:03:36.610 22:49:08 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:36.610 22:49:08 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:40.799 22:49:13 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:40.799 22:49:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:40.799 22:49:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:40.799 22:49:13 -- common/autotest_common.sh@10 -- # set +x 00:03:40.799 ************************************ 00:03:40.799 START TEST guess_driver 00:03:40.799 ************************************ 00:03:40.799 22:49:13 -- common/autotest_common.sh@1104 -- # guess_driver 00:03:40.799 22:49:13 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:40.799 22:49:13 -- setup/driver.sh@47 -- # local fail=0 00:03:40.799 22:49:13 -- setup/driver.sh@49 -- # pick_driver 00:03:40.799 22:49:13 -- setup/driver.sh@36 -- # vfio 00:03:41.058 22:49:13 -- setup/driver.sh@21 -- # local iommu_grups 00:03:41.058 22:49:13 -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:41.058 22:49:13 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:41.058 22:49:13 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:41.058 22:49:13 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:41.058 22:49:13 -- setup/driver.sh@29 -- # (( 176 > 0 )) 00:03:41.058 22:49:13 -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:41.058 22:49:13 -- setup/driver.sh@14 -- # mod vfio_pci 00:03:41.058 22:49:13 -- setup/driver.sh@12 -- # dep vfio_pci 00:03:41.058 22:49:13 -- setup/driver.sh@11 -- # modprobe 
--show-depends vfio_pci 00:03:41.058 22:49:13 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:41.058 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:41.058 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:41.058 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:41.058 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:41.058 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:41.058 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:41.058 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:41.058 22:49:13 -- setup/driver.sh@30 -- # return 0 00:03:41.058 22:49:13 -- setup/driver.sh@37 -- # echo vfio-pci 00:03:41.058 22:49:13 -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:41.058 22:49:13 -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:41.058 22:49:13 -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:41.058 Looking for driver=vfio-pci 00:03:41.058 22:49:13 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:41.058 22:49:13 -- setup/driver.sh@45 -- # setup output config 00:03:41.058 22:49:13 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:41.058 22:49:13 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:44.343 22:49:16 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:44.343 22:49:16 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:44.343 22:49:16 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:44.343 22:49:16 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:44.343 22:49:16 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:44.343 22:49:16 -- setup/driver.sh@57 
-- # read -r _ _ _ _ marker setup_driver 00:03:44.343 22:49:16 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:44.343 22:49:16 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:44.343 22:49:16 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:44.343 22:49:16 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:44.343 22:49:16 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:44.343 22:49:16 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:44.343 22:49:16 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:44.343 22:49:16 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:44.343 22:49:16 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:44.343 22:49:16 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:44.343 22:49:16 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:44.343 22:49:16 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:44.343 22:49:16 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:44.343 22:49:16 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:44.343 22:49:16 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:44.343 22:49:16 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:44.343 22:49:16 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:44.343 22:49:16 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:44.343 22:49:16 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:44.343 22:49:16 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:44.343 22:49:16 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:44.343 22:49:16 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:44.343 22:49:16 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:44.343 22:49:16 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:44.343 22:49:16 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:44.343 22:49:16 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 
00:03:44.343 22:49:16 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:44.343 22:49:16 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:44.343 22:49:16 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:44.343 22:49:16 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:44.343 22:49:16 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:44.343 22:49:16 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:44.343 22:49:16 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:44.343 22:49:16 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:44.343 22:49:16 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:44.343 22:49:16 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:44.343 22:49:16 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:44.343 22:49:16 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:44.343 22:49:16 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:44.343 22:49:16 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:44.343 22:49:16 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:44.343 22:49:16 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:46.246 22:49:18 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:46.246 22:49:18 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:46.246 22:49:18 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:46.246 22:49:18 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:46.246 22:49:18 -- setup/driver.sh@65 -- # setup reset 00:03:46.246 22:49:18 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:46.246 22:49:18 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:51.517 00:03:51.517 real 0m9.683s 00:03:51.517 user 0m2.439s 00:03:51.517 sys 0m4.918s 00:03:51.517 22:49:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:51.517 22:49:22 -- common/autotest_common.sh@10 -- # set +x 
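The hugepages traces earlier in this log (the long `[[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] ... continue` runs) step through `/proc/meminfo` one field at a time with `read -r var val _` under `IFS=': '`, skipping every key until the wanted one matches. A minimal standalone sketch of that parsing pattern; the sample input below is illustrative stand-in data, not values from this run:

```shell
#!/usr/bin/env bash
# Sketch of the setup/common.sh loop seen in the trace: split each
# "Key:   value kB" line on ':' plus spaces and keep only the field
# we are looking for.
get_meminfo_field() {
    local wanted=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$wanted" ]] && { echo "$val"; return 0; }
    done
    return 1
}

# Illustrative stand-in for /proc/meminfo (values are made up).
sample='MemTotal: 65536000 kB
HugePages_Total: 1024
HugePages_Free: 1024
HugePages_Surp: 0'

surp=$(get_meminfo_field HugePages_Surp <<<"$sample")
echo "HugePages_Surp=$surp"
```

The `_` catch-all in `read -r var val _` soaks up the trailing `kB` unit so `val` stays a bare number, which is why the trace compares and accumulates values without unit stripping.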
00:03:51.517 ************************************ 00:03:51.517 END TEST guess_driver 00:03:51.517 ************************************ 00:03:51.517 00:03:51.517 real 0m14.231s 00:03:51.517 user 0m3.581s 00:03:51.517 sys 0m7.510s 00:03:51.517 22:49:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:51.517 22:49:22 -- common/autotest_common.sh@10 -- # set +x 00:03:51.517 ************************************ 00:03:51.517 END TEST driver 00:03:51.517 ************************************ 00:03:51.517 22:49:22 -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:51.517 22:49:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:51.517 22:49:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:51.517 22:49:22 -- common/autotest_common.sh@10 -- # set +x 00:03:51.517 ************************************ 00:03:51.517 START TEST devices 00:03:51.517 ************************************ 00:03:51.517 22:49:22 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:51.517 * Looking for test storage... 
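The `guess_driver` test above decides vfio-pci is usable by checking whether the `modprobe --show-depends vfio_pci` output mentions any `.ko` module (the `== *\.\k\o*` match at setup/driver.sh@12). A sketch of just that decision logic, fed canned output so it runs without root or real kernel modules; the insmod paths below are illustrative:

```shell
#!/usr/bin/env bash
# Mirror of the setup/driver.sh check: a driver counts as available
# when its resolved dependency list contains at least one kernel
# module file (*.ko*); otherwise it is built in or missing.
is_driver() {
    local deps=$1
    [[ $deps == *.ko* ]]
}

# Canned stand-in for `modprobe --show-depends vfio_pci` output.
deps='insmod /lib/modules/6.7.0/kernel/drivers/vfio/vfio.ko.xz
insmod /lib/modules/6.7.0/kernel/drivers/vfio/pci/vfio-pci.ko.xz'

if is_driver "$deps"; then
    driver=vfio-pci
else
    driver='No valid driver found'
fi
echo "Looking for driver=$driver"
```

This is why the log prints `Looking for driver=vfio-pci` only after the dependency probe succeeds: a built-in or absent module would yield no `insmod ... .ko` lines and the fallback branch would run instead.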
00:03:51.517 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:51.517 22:49:23 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:51.517 22:49:23 -- setup/devices.sh@192 -- # setup reset 00:03:51.517 22:49:23 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:51.517 22:49:23 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:54.139 22:49:26 -- setup/devices.sh@194 -- # get_zoned_devs 00:03:54.139 22:49:26 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:03:54.139 22:49:26 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:03:54.139 22:49:26 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:03:54.139 22:49:26 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:54.139 22:49:26 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:03:54.139 22:49:26 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:03:54.139 22:49:26 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:54.139 22:49:26 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:54.139 22:49:26 -- setup/devices.sh@196 -- # blocks=() 00:03:54.139 22:49:26 -- setup/devices.sh@196 -- # declare -a blocks 00:03:54.139 22:49:26 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:54.139 22:49:26 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:54.139 22:49:26 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:54.139 22:49:26 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:54.139 22:49:26 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:54.139 22:49:26 -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:54.139 22:49:26 -- setup/devices.sh@202 -- # pci=0000:d8:00.0 00:03:54.139 22:49:26 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:03:54.139 22:49:26 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:54.139 22:49:26 -- scripts/common.sh@380 
-- # local block=nvme0n1 pt 00:03:54.139 22:49:26 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:03:54.139 No valid GPT data, bailing 00:03:54.139 22:49:26 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:54.139 22:49:26 -- scripts/common.sh@393 -- # pt= 00:03:54.139 22:49:26 -- scripts/common.sh@394 -- # return 1 00:03:54.139 22:49:26 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:54.139 22:49:26 -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:54.139 22:49:26 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:54.139 22:49:26 -- setup/common.sh@80 -- # echo 1600321314816 00:03:54.139 22:49:26 -- setup/devices.sh@204 -- # (( 1600321314816 >= min_disk_size )) 00:03:54.139 22:49:26 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:54.139 22:49:26 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:d8:00.0 00:03:54.139 22:49:26 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:03:54.139 22:49:26 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:54.139 22:49:26 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:54.139 22:49:26 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:54.139 22:49:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:54.139 22:49:26 -- common/autotest_common.sh@10 -- # set +x 00:03:54.399 ************************************ 00:03:54.399 START TEST nvme_mount 00:03:54.399 ************************************ 00:03:54.399 22:49:26 -- common/autotest_common.sh@1104 -- # nvme_mount 00:03:54.399 22:49:26 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:54.399 22:49:26 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:54.399 22:49:26 -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:54.399 22:49:26 -- setup/devices.sh@98 -- # 
nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:54.399 22:49:26 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:54.399 22:49:26 -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:54.399 22:49:26 -- setup/common.sh@40 -- # local part_no=1 00:03:54.399 22:49:26 -- setup/common.sh@41 -- # local size=1073741824 00:03:54.399 22:49:26 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:54.399 22:49:26 -- setup/common.sh@44 -- # parts=() 00:03:54.399 22:49:26 -- setup/common.sh@44 -- # local parts 00:03:54.399 22:49:26 -- setup/common.sh@46 -- # (( part = 1 )) 00:03:54.399 22:49:26 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:54.399 22:49:26 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:54.399 22:49:26 -- setup/common.sh@46 -- # (( part++ )) 00:03:54.399 22:49:26 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:54.399 22:49:26 -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:54.399 22:49:26 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:54.399 22:49:26 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:55.338 Creating new GPT entries in memory. 00:03:55.338 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:55.338 other utilities. 00:03:55.338 22:49:27 -- setup/common.sh@57 -- # (( part = 1 )) 00:03:55.338 22:49:27 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:55.338 22:49:27 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:55.338 22:49:27 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:55.338 22:49:27 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:56.275 Creating new GPT entries in memory. 00:03:56.275 The operation has completed successfully. 
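The `sgdisk /dev/nvme0n1 --new=1:2048:2099199` call in the trace above comes from the arithmetic in setup/common.sh: the 1 GiB partition size is converted from bytes to 512-byte sectors and the end sector is `start + size - 1`, with the first partition starting at sector 2048. A sketch of that arithmetic (variable names follow the trace; no disk is touched):

```shell
#!/usr/bin/env bash
# Reproduce the partition-boundary math from setup/common.sh@51-60:
# size starts in bytes, is divided down to 512-byte sectors, and the
# first partition begins at the conventional 2048-sector offset.
size=1073741824            # 1 GiB in bytes
(( size /= 512 ))          # bytes -> sectors (2097152)
(( part_start = 2048 ))
(( part_end = part_start + size - 1 ))

# The range handed to sgdisk in the log:
echo "--new=1:${part_start}:${part_end}"
# → --new=1:2048:2099199
```

Starting at sector 2048 keeps the partition 1 MiB-aligned, and the `- 1` makes the range inclusive, so the partition spans exactly 2097152 sectors (1 GiB).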
00:03:56.275 22:49:28 -- setup/common.sh@57 -- # (( part++ )) 00:03:56.275 22:49:28 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:56.275 22:49:28 -- setup/common.sh@62 -- # wait 3004900 00:03:56.275 22:49:28 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:56.275 22:49:28 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:03:56.275 22:49:28 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:56.275 22:49:28 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:56.275 22:49:28 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:56.275 22:49:28 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:56.534 22:49:28 -- setup/devices.sh@105 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:56.534 22:49:28 -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:03:56.534 22:49:28 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:56.534 22:49:28 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:56.534 22:49:28 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:56.534 22:49:28 -- setup/devices.sh@53 -- # local found=0 00:03:56.534 22:49:28 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:56.535 22:49:28 -- setup/devices.sh@56 -- # : 00:03:56.535 22:49:28 -- setup/devices.sh@59 -- # local pci status 00:03:56.535 22:49:28 -- setup/devices.sh@60 -- # read -r pci _ _ 
status 00:03:56.535 22:49:28 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:03:56.535 22:49:28 -- setup/devices.sh@47 -- # setup output config 00:03:56.535 22:49:28 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:56.535 22:49:28 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:59.071 22:49:31 -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:59.071 22:49:31 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:59.071 22:49:31 -- setup/devices.sh@63 -- # found=1 00:03:59.071 22:49:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.071 22:49:31 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:59.071 22:49:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.071 22:49:31 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:59.071 22:49:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.071 22:49:31 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:59.071 22:49:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.071 22:49:31 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:59.071 22:49:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.071 22:49:31 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:59.071 22:49:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.071 22:49:31 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:59.071 22:49:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.071 22:49:31 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:59.071 22:49:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.071 22:49:31 -- 
setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:59.071 22:49:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.071 22:49:31 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:59.071 22:49:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.071 22:49:31 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:59.071 22:49:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.071 22:49:31 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:59.071 22:49:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.071 22:49:31 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:59.071 22:49:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.071 22:49:31 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:59.071 22:49:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.071 22:49:31 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:59.071 22:49:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.071 22:49:31 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:59.071 22:49:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.071 22:49:31 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:59.071 22:49:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.071 22:49:31 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:59.071 22:49:31 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:59.071 22:49:31 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:59.071 22:49:31 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:59.071 
22:49:31 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:59.071 22:49:31 -- setup/devices.sh@110 -- # cleanup_nvme 00:03:59.071 22:49:31 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:59.071 22:49:31 -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:59.071 22:49:31 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:59.071 22:49:31 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:59.071 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:59.071 22:49:31 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:59.071 22:49:31 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:59.331 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:59.332 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 00:03:59.332 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:59.332 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:59.332 22:49:31 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:03:59.332 22:49:31 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:03:59.332 22:49:31 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:59.332 22:49:31 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:59.332 22:49:31 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:59.332 22:49:31 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:59.332 22:49:31 -- setup/devices.sh@116 -- # verify 0000:d8:00.0 
nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:59.332 22:49:31 -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:03:59.332 22:49:31 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:59.332 22:49:31 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:59.332 22:49:31 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:59.332 22:49:31 -- setup/devices.sh@53 -- # local found=0 00:03:59.332 22:49:31 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:59.332 22:49:31 -- setup/devices.sh@56 -- # : 00:03:59.332 22:49:31 -- setup/devices.sh@59 -- # local pci status 00:03:59.332 22:49:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.332 22:49:31 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:03:59.332 22:49:31 -- setup/devices.sh@47 -- # setup output config 00:03:59.332 22:49:31 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:59.332 22:49:31 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:02.623 22:49:34 -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:02.623 22:49:34 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:02.623 22:49:34 -- setup/devices.sh@63 -- # found=1 00:04:02.623 22:49:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.624 22:49:34 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:02.624 22:49:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.624 22:49:34 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == 
\0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:02.624 22:49:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.624 22:49:34 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:02.624 22:49:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.624 22:49:34 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:02.624 22:49:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.624 22:49:34 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:02.624 22:49:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.624 22:49:34 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:02.624 22:49:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.624 22:49:34 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:02.624 22:49:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.624 22:49:34 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:02.624 22:49:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.624 22:49:34 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:02.624 22:49:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.624 22:49:34 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:02.624 22:49:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.624 22:49:34 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:02.624 22:49:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.624 22:49:34 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:02.624 22:49:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.624 22:49:34 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:02.624 22:49:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.624 22:49:34 -- 
setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:02.624 22:49:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.624 22:49:34 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:02.624 22:49:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.624 22:49:34 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:02.624 22:49:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.624 22:49:34 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:02.624 22:49:34 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:02.624 22:49:34 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:02.624 22:49:34 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:02.624 22:49:34 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:02.624 22:49:34 -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:02.624 22:49:34 -- setup/devices.sh@125 -- # verify 0000:d8:00.0 data@nvme0n1 '' '' 00:04:02.624 22:49:34 -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:02.624 22:49:34 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:02.624 22:49:34 -- setup/devices.sh@50 -- # local mount_point= 00:04:02.624 22:49:34 -- setup/devices.sh@51 -- # local test_file= 00:04:02.624 22:49:34 -- setup/devices.sh@53 -- # local found=0 00:04:02.624 22:49:34 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:02.624 22:49:34 -- setup/devices.sh@59 -- # local pci status 00:04:02.624 22:49:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.624 22:49:34 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:02.624 22:49:34 -- setup/devices.sh@47 -- # setup 
output config 00:04:02.624 22:49:34 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:02.624 22:49:34 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:05.915 22:49:38 -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:05.915 22:49:38 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:05.915 22:49:38 -- setup/devices.sh@63 -- # found=1 00:04:05.915 22:49:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.915 22:49:38 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:05.915 22:49:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.915 22:49:38 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:05.915 22:49:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.915 22:49:38 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:05.915 22:49:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.915 22:49:38 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:05.915 22:49:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.915 22:49:38 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:05.915 22:49:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.915 22:49:38 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:05.915 22:49:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.915 22:49:38 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:05.915 22:49:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.915 22:49:38 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:05.915 22:49:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.915 22:49:38 -- 
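The long runs of `[[ 0000:XX:04.N == \0\0\0\0\:\d\8\:\0\0\.\0 ]]` above are the `verify` loop in `setup/devices.sh`: it `read`s `pci _ _ status` lines from `setup.sh status` output and literal-matches each BDF against the one device in `PCI_ALLOWED` (the backslashes are just bash's escaped-literal pattern form). A standalone sketch with canned status lines (the sample input is invented for illustration):

```shell
#!/usr/bin/env bash
# Match each PCI BDF from status-style output against the allowed device,
# mirroring the read/match loop in setup/devices.sh.
allowed=0000:d8:00.0
found=0

# Canned stand-in for `setup.sh status` output: "BDF vendor device status...".
status_lines='0000:00:04.0 8086 2021 ioatdma
0000:d8:00.0 8086 0a54 Active devices: data@nvme0n1, so not binding PCI dev
0000:80:04.7 8086 2021 ioatdma'

while read -r pci _ _ status; do
    # Quoting the RHS forces a literal match, like the escaped pattern in the log.
    if [[ $pci == "$allowed" ]]; then
        found=1
        echo "matched $pci (status: $status)"
    fi
done <<< "$status_lines"

echo "found=$found"
```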
setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:05.915 22:49:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.915 22:49:38 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:05.915 22:49:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.915 22:49:38 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:05.915 22:49:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.915 22:49:38 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:05.915 22:49:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.915 22:49:38 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:05.915 22:49:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.915 22:49:38 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:05.915 22:49:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.915 22:49:38 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:05.915 22:49:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.915 22:49:38 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:05.915 22:49:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.174 22:49:38 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:06.174 22:49:38 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:06.174 22:49:38 -- setup/devices.sh@68 -- # return 0 00:04:06.174 22:49:38 -- setup/devices.sh@128 -- # cleanup_nvme 00:04:06.174 22:49:38 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:06.174 22:49:38 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:06.174 22:49:38 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:06.174 22:49:38 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:06.174 /dev/nvme0n1: 2 bytes were erased at 
offset 0x00000438 (ext4): 53 ef 00:04:06.174 00:04:06.174 real 0m11.795s 00:04:06.174 user 0m3.233s 00:04:06.174 sys 0m6.250s 00:04:06.174 22:49:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:06.174 22:49:38 -- common/autotest_common.sh@10 -- # set +x 00:04:06.174 ************************************ 00:04:06.174 END TEST nvme_mount 00:04:06.174 ************************************ 00:04:06.174 22:49:38 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:06.174 22:49:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:06.174 22:49:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:06.174 22:49:38 -- common/autotest_common.sh@10 -- # set +x 00:04:06.174 ************************************ 00:04:06.174 START TEST dm_mount 00:04:06.174 ************************************ 00:04:06.174 22:49:38 -- common/autotest_common.sh@1104 -- # dm_mount 00:04:06.174 22:49:38 -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:06.174 22:49:38 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:06.174 22:49:38 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:06.174 22:49:38 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:06.174 22:49:38 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:06.174 22:49:38 -- setup/common.sh@40 -- # local part_no=2 00:04:06.174 22:49:38 -- setup/common.sh@41 -- # local size=1073741824 00:04:06.174 22:49:38 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:06.174 22:49:38 -- setup/common.sh@44 -- # parts=() 00:04:06.174 22:49:38 -- setup/common.sh@44 -- # local parts 00:04:06.174 22:49:38 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:06.174 22:49:38 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:06.174 22:49:38 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:06.174 22:49:38 -- setup/common.sh@46 -- # (( part++ )) 00:04:06.174 22:49:38 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:06.174 22:49:38 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 
00:04:06.174 22:49:38 -- setup/common.sh@46 -- # (( part++ )) 00:04:06.174 22:49:38 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:06.174 22:49:38 -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:06.174 22:49:38 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:06.174 22:49:38 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:07.111 Creating new GPT entries in memory. 00:04:07.111 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:07.111 other utilities. 00:04:07.111 22:49:39 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:07.111 22:49:39 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:07.111 22:49:39 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:07.111 22:49:39 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:07.111 22:49:39 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:08.048 Creating new GPT entries in memory. 00:04:08.048 The operation has completed successfully. 00:04:08.048 22:49:40 -- setup/common.sh@57 -- # (( part++ )) 00:04:08.048 22:49:40 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:08.049 22:49:40 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:08.049 22:49:40 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:08.049 22:49:40 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:09.426 The operation has completed successfully. 
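The two `sgdisk --new` calls above come from `partition_drive` in `setup/common.sh`: a 1 GiB size is converted to 512-byte sectors (`size /= 512`), the first partition starts at sector 2048, and each subsequent partition starts right after the previous one ends. The arithmetic can be checked standalone and reproduces the exact boundaries in the log (`1:2048:2099199`, `2:2099200:4196351`):

```shell
#!/usr/bin/env bash
# Reproduce the partition boundaries computed in setup/common.sh.
size=1073741824          # 1 GiB in bytes
(( size /= 512 ))        # -> 2097152 sectors of 512 bytes
part_no=2
part_start=0 part_end=0

for (( part = 1; part <= part_no; part++ )); do
    # First partition starts at sector 2048; later ones right after the previous.
    (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
    (( part_end = part_start + size - 1 ))
    echo "sgdisk /dev/nvme0n1 --new=${part}:${part_start}:${part_end}"
done
```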
00:04:09.426 22:49:41 -- setup/common.sh@57 -- # (( part++ )) 00:04:09.426 22:49:41 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:09.426 22:49:41 -- setup/common.sh@62 -- # wait 3009141 00:04:09.426 22:49:41 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:09.426 22:49:41 -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:09.426 22:49:41 -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:09.426 22:49:41 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:09.426 22:49:41 -- setup/devices.sh@160 -- # for t in {1..5} 00:04:09.426 22:49:41 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:09.426 22:49:41 -- setup/devices.sh@161 -- # break 00:04:09.426 22:49:41 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:09.426 22:49:41 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:09.426 22:49:41 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:09.426 22:49:41 -- setup/devices.sh@166 -- # dm=dm-0 00:04:09.426 22:49:41 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:09.426 22:49:41 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:09.426 22:49:41 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:09.426 22:49:41 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:09.426 22:49:41 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:09.426 22:49:41 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:09.426 22:49:41 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:09.426 22:49:41 -- setup/common.sh@72 -- # mount 
/dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:09.426 22:49:41 -- setup/devices.sh@174 -- # verify 0000:d8:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:09.426 22:49:41 -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:09.426 22:49:41 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:09.426 22:49:41 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:09.426 22:49:41 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:09.426 22:49:41 -- setup/devices.sh@53 -- # local found=0 00:04:09.426 22:49:41 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:09.426 22:49:41 -- setup/devices.sh@56 -- # : 00:04:09.426 22:49:41 -- setup/devices.sh@59 -- # local pci status 00:04:09.426 22:49:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.426 22:49:41 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:09.426 22:49:41 -- setup/devices.sh@47 -- # setup output config 00:04:09.426 22:49:41 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:09.426 22:49:41 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:12.773 22:49:44 -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:12.773 22:49:44 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:12.773 22:49:44 -- setup/devices.sh@63 -- # found=1 00:04:12.773 22:49:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.773 
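After `dmsetup create nvme_dm_test`, the script polls up to five times (`for t in {1..5}` with `break` at `@161`) for `/dev/mapper/nvme_dm_test` to appear before resolving it with `readlink -f`, since udev creates the node asynchronously. A generic sketch of that retry idiom — the helper name and the 0.2 s delay are illustrative:

```shell
#!/usr/bin/env bash
# Poll for a path to appear, as devices.sh does for /dev/mapper/nvme_dm_test.
wait_for_path() {
    local path=$1 t
    for t in {1..5}; do
        [[ -e $path ]] && return 0
        sleep 0.2
    done
    return 1
}

demo=$(mktemp -u)                 # a path that does not exist yet
( sleep 0.3; touch "$demo" ) &    # simulate asynchronous node creation
if wait_for_path "$demo"; then
    echo "appeared: yes"
fi
rm -f "$demo"
```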
22:49:44 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:12.773 22:49:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.773 22:49:44 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:12.773 22:49:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.773 22:49:44 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:12.773 22:49:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.773 22:49:44 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:12.773 22:49:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.773 22:49:44 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:12.773 22:49:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.773 22:49:44 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:12.773 22:49:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.773 22:49:44 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:12.773 22:49:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.773 22:49:44 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:12.773 22:49:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.773 22:49:44 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:12.773 22:49:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.773 22:49:44 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:12.773 22:49:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.773 22:49:44 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:12.773 22:49:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.773 22:49:44 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:12.773 22:49:44 -- setup/devices.sh@60 
-- # read -r pci _ _ status 00:04:12.773 22:49:44 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:12.773 22:49:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.773 22:49:44 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:12.773 22:49:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.773 22:49:44 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:12.773 22:49:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.773 22:49:44 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:12.773 22:49:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.773 22:49:44 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:12.773 22:49:44 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:12.773 22:49:44 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:12.773 22:49:44 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:12.773 22:49:44 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:12.773 22:49:44 -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:12.773 22:49:44 -- setup/devices.sh@184 -- # verify 0000:d8:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:12.773 22:49:44 -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:12.773 22:49:44 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:12.773 22:49:44 -- setup/devices.sh@50 -- # local mount_point= 00:04:12.773 22:49:44 -- setup/devices.sh@51 -- # local test_file= 00:04:12.773 22:49:44 -- setup/devices.sh@53 -- # local found=0 00:04:12.773 22:49:44 -- setup/devices.sh@55 -- # [[ -n '' ]] 
00:04:12.773 22:49:44 -- setup/devices.sh@59 -- # local pci status 00:04:12.773 22:49:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.773 22:49:44 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:12.773 22:49:44 -- setup/devices.sh@47 -- # setup output config 00:04:12.773 22:49:44 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:12.773 22:49:44 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:15.310 22:49:47 -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:15.310 22:49:47 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:15.310 22:49:47 -- setup/devices.sh@63 -- # found=1 00:04:15.310 22:49:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.310 22:49:47 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:15.310 22:49:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.310 22:49:47 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:15.310 22:49:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.310 22:49:47 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:15.310 22:49:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.310 22:49:47 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:15.310 22:49:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.310 22:49:47 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:15.310 22:49:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.310 22:49:47 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:15.310 22:49:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 
00:04:15.310 22:49:47 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:15.310 22:49:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.310 22:49:47 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:15.310 22:49:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.310 22:49:47 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:15.310 22:49:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.570 22:49:47 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:15.570 22:49:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.570 22:49:47 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:15.570 22:49:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.570 22:49:47 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:15.570 22:49:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.570 22:49:47 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:15.570 22:49:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.570 22:49:47 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:15.570 22:49:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.570 22:49:47 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:15.570 22:49:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.570 22:49:47 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:15.570 22:49:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.570 22:49:47 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:15.570 22:49:47 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:15.570 22:49:47 -- setup/devices.sh@68 -- # return 0 00:04:15.570 22:49:47 -- setup/devices.sh@187 -- # cleanup_dm 00:04:15.570 22:49:47 -- setup/devices.sh@33 -- # 
mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:15.570 22:49:47 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:15.570 22:49:47 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:15.570 22:49:47 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:15.570 22:49:47 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:15.570 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:15.570 22:49:47 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:15.570 22:49:47 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:15.570 00:04:15.570 real 0m9.567s 00:04:15.570 user 0m2.353s 00:04:15.570 sys 0m4.323s 00:04:15.570 22:49:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:15.570 22:49:47 -- common/autotest_common.sh@10 -- # set +x 00:04:15.570 ************************************ 00:04:15.570 END TEST dm_mount 00:04:15.570 ************************************ 00:04:15.829 22:49:48 -- setup/devices.sh@1 -- # cleanup 00:04:15.829 22:49:48 -- setup/devices.sh@11 -- # cleanup_nvme 00:04:15.829 22:49:48 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:15.829 22:49:48 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:15.829 22:49:48 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:15.829 22:49:48 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:15.829 22:49:48 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:16.089 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:16.089 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 00:04:16.089 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:16.089 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:16.089 22:49:48 -- setup/devices.sh@12 -- # cleanup_dm 00:04:16.089 
22:49:48 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:16.089 22:49:48 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:16.089 22:49:48 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:16.089 22:49:48 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:16.089 22:49:48 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:16.089 22:49:48 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:16.089 00:04:16.089 real 0m25.316s 00:04:16.089 user 0m6.795s 00:04:16.089 sys 0m13.175s 00:04:16.089 22:49:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:16.089 22:49:48 -- common/autotest_common.sh@10 -- # set +x 00:04:16.089 ************************************ 00:04:16.089 END TEST devices 00:04:16.089 ************************************ 00:04:16.089 00:04:16.089 real 1m30.396s 00:04:16.089 user 0m27.232s 00:04:16.089 sys 0m51.766s 00:04:16.089 22:49:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:16.089 22:49:48 -- common/autotest_common.sh@10 -- # set +x 00:04:16.089 ************************************ 00:04:16.089 END TEST setup.sh 00:04:16.089 ************************************ 00:04:16.089 22:49:48 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:19.382 Hugepages 00:04:19.382 node hugesize free / total 00:04:19.383 node0 1048576kB 0 / 0 00:04:19.383 node0 2048kB 2048 / 2048 00:04:19.383 node1 1048576kB 0 / 0 00:04:19.383 node1 2048kB 0 / 0 00:04:19.383 00:04:19.383 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:19.383 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:19.383 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:19.383 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:19.383 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:19.383 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:19.383 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:19.383 
I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:19.383 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:19.383 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:19.383 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:19.383 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:19.383 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:19.383 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:19.383 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:19.383 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:19.383 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:19.383 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:04:19.383 22:49:51 -- spdk/autotest.sh@141 -- # uname -s 00:04:19.383 22:49:51 -- spdk/autotest.sh@141 -- # [[ Linux == Linux ]] 00:04:19.383 22:49:51 -- spdk/autotest.sh@143 -- # nvme_namespace_revert 00:04:19.383 22:49:51 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:22.676 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:22.676 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:22.676 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:22.676 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:22.676 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:22.676 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:22.676 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:22.676 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:22.676 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:22.676 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:22.676 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:22.676 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:22.676 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:22.676 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:22.676 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:22.676 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:24.055 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:24.055 22:49:56 -- common/autotest_common.sh@1517 
-- # sleep 1 00:04:24.993 22:49:57 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:24.993 22:49:57 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:24.993 22:49:57 -- common/autotest_common.sh@1519 -- # bdfs=($(get_nvme_bdfs)) 00:04:24.993 22:49:57 -- common/autotest_common.sh@1519 -- # get_nvme_bdfs 00:04:24.993 22:49:57 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:24.993 22:49:57 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:24.993 22:49:57 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:24.993 22:49:57 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:24.993 22:49:57 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:24.993 22:49:57 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:24.993 22:49:57 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:d8:00.0 00:04:24.993 22:49:57 -- common/autotest_common.sh@1521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:28.285 Waiting for block devices as requested 00:04:28.285 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:28.544 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:28.544 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:28.544 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:28.544 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:28.804 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:28.804 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:28.804 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:29.063 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:29.063 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:29.063 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:29.323 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:29.323 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:29.323 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 
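`get_nvme_bdfs` above builds the controller list by piping `gen_nvme.sh` JSON through `jq -r '.config[].params.traddr'`. The sketch below uses a canned JSON document and a plain `sed` extraction only so it runs without `jq`; the sample JSON shape is a guess at the script's output, and `jq` is what the real helper uses:

```shell
#!/usr/bin/env bash
# Extract NVMe traddr values from gen_nvme.sh-style JSON.
# Real pipeline: gen_nvme.sh | jq -r '.config[].params.traddr'
json='{"config":[{"params":{"traddr":"0000:d8:00.0","trtype":"PCIe"}}]}'

# sed stand-in for the jq query (sample JSON is invented for illustration).
bdfs=($(printf '%s\n' "$json" | sed -n 's/.*"traddr":"\([^"]*\)".*/\1/p'))

echo "found ${#bdfs[@]} controller(s)"
printf '%s\n' "${bdfs[@]}"
```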
00:04:29.323 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:29.583 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:29.583 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:04:29.842 22:50:02 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:04:29.842 22:50:02 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:d8:00.0 00:04:29.842 22:50:02 -- common/autotest_common.sh@1487 -- # grep 0000:d8:00.0/nvme/nvme 00:04:29.842 22:50:02 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:04:29.842 22:50:02 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:04:29.842 22:50:02 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 ]] 00:04:29.842 22:50:02 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:04:29.842 22:50:02 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:29.842 22:50:02 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme0 00:04:29.842 22:50:02 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme0 ]] 00:04:29.842 22:50:02 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme0 00:04:29.842 22:50:02 -- common/autotest_common.sh@1530 -- # grep oacs 00:04:29.842 22:50:02 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:04:29.842 22:50:02 -- common/autotest_common.sh@1530 -- # oacs=' 0xe' 00:04:29.842 22:50:02 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:04:29.842 22:50:02 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:04:29.842 22:50:02 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme0 00:04:29.842 22:50:02 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:04:29.842 22:50:02 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:04:29.842 22:50:02 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:04:29.842 22:50:02 -- 
common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:04:29.842 22:50:02 -- common/autotest_common.sh@1542 -- # continue 00:04:29.842 22:50:02 -- spdk/autotest.sh@146 -- # timing_exit pre_cleanup 00:04:29.842 22:50:02 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:29.842 22:50:02 -- common/autotest_common.sh@10 -- # set +x 00:04:29.842 22:50:02 -- spdk/autotest.sh@149 -- # timing_enter afterboot 00:04:29.842 22:50:02 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:29.842 22:50:02 -- common/autotest_common.sh@10 -- # set +x 00:04:29.842 22:50:02 -- spdk/autotest.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:33.133 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:33.133 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:33.133 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:33.133 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:33.133 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:33.133 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:33.133 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:33.133 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:33.133 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:33.133 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:33.133 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:33.133 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:33.133 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:33.133 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:33.133 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:33.133 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:35.110 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:35.110 22:50:07 -- spdk/autotest.sh@151 -- # timing_exit afterboot 00:04:35.110 22:50:07 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:35.110 22:50:07 -- common/autotest_common.sh@10 -- # set +x 00:04:35.110 22:50:07 -- spdk/autotest.sh@155 -- # opal_revert_cleanup 00:04:35.110 22:50:07 -- 
common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:04:35.110 22:50:07 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:04:35.110 22:50:07 -- common/autotest_common.sh@1562 -- # bdfs=() 00:04:35.110 22:50:07 -- common/autotest_common.sh@1562 -- # local bdfs 00:04:35.110 22:50:07 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:35.110 22:50:07 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:35.110 22:50:07 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:35.110 22:50:07 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:35.110 22:50:07 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:35.110 22:50:07 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:35.110 22:50:07 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:35.110 22:50:07 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:d8:00.0 00:04:35.110 22:50:07 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:04:35.110 22:50:07 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:d8:00.0/device 00:04:35.110 22:50:07 -- common/autotest_common.sh@1565 -- # device=0x0a54 00:04:35.110 22:50:07 -- common/autotest_common.sh@1566 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:35.110 22:50:07 -- common/autotest_common.sh@1567 -- # bdfs+=($bdf) 00:04:35.110 22:50:07 -- common/autotest_common.sh@1571 -- # printf '%s\n' 0000:d8:00.0 00:04:35.110 22:50:07 -- common/autotest_common.sh@1577 -- # [[ -z 0000:d8:00.0 ]] 00:04:35.110 22:50:07 -- common/autotest_common.sh@1582 -- # spdk_tgt_pid=3018985 00:04:35.110 22:50:07 -- common/autotest_common.sh@1583 -- # waitforlisten 3018985 00:04:35.110 22:50:07 -- common/autotest_common.sh@1581 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:35.110 22:50:07 -- common/autotest_common.sh@819 -- # '[' -z 3018985 ']' 
00:04:35.110 22:50:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:35.110 22:50:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:35.110 22:50:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:35.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:35.110 22:50:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:35.110 22:50:07 -- common/autotest_common.sh@10 -- # set +x 00:04:35.110 [2024-07-24 22:50:07.342608] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:04:35.110 [2024-07-24 22:50:07.342663] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3018985 ] 00:04:35.110 EAL: No free 2048 kB hugepages reported on node 1 00:04:35.110 [2024-07-24 22:50:07.412276] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:35.110 [2024-07-24 22:50:07.450214] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:35.110 [2024-07-24 22:50:07.450335] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.046 22:50:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:36.046 22:50:08 -- common/autotest_common.sh@852 -- # return 0 00:04:36.046 22:50:08 -- common/autotest_common.sh@1585 -- # bdf_id=0 00:04:36.046 22:50:08 -- common/autotest_common.sh@1586 -- # for bdf in "${bdfs[@]}" 00:04:36.046 22:50:08 -- common/autotest_common.sh@1587 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0 00:04:39.337 nvme0n1 00:04:39.337 22:50:11 -- common/autotest_common.sh@1589 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:39.337 [2024-07-24 22:50:11.269297] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:04:39.337 request: 00:04:39.337 { 00:04:39.337 "nvme_ctrlr_name": "nvme0", 00:04:39.337 "password": "test", 00:04:39.337 "method": "bdev_nvme_opal_revert", 00:04:39.337 "req_id": 1 00:04:39.337 } 00:04:39.337 Got JSON-RPC error response 00:04:39.337 response: 00:04:39.337 { 00:04:39.337 "code": -32602, 00:04:39.337 "message": "Invalid parameters" 00:04:39.337 } 00:04:39.337 22:50:11 -- common/autotest_common.sh@1589 -- # true 00:04:39.337 22:50:11 -- common/autotest_common.sh@1590 -- # (( ++bdf_id )) 00:04:39.337 22:50:11 -- common/autotest_common.sh@1593 -- # killprocess 3018985 00:04:39.337 22:50:11 -- common/autotest_common.sh@926 -- # '[' -z 3018985 ']' 00:04:39.337 22:50:11 -- common/autotest_common.sh@930 -- # kill -0 3018985 00:04:39.337 22:50:11 -- common/autotest_common.sh@931 -- # uname 00:04:39.337 22:50:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:39.337 22:50:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3018985 00:04:39.337 22:50:11 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:39.337 22:50:11 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:39.337 22:50:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3018985' 00:04:39.337 killing process with pid 3018985 00:04:39.337 22:50:11 -- common/autotest_common.sh@945 -- # kill 3018985 00:04:39.337 22:50:11 -- common/autotest_common.sh@950 -- # wait 3018985 00:04:41.254 22:50:13 -- spdk/autotest.sh@161 -- # '[' 0 -eq 1 ']' 00:04:41.254 22:50:13 -- spdk/autotest.sh@165 -- # '[' 1 -eq 1 ']' 00:04:41.254 22:50:13 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:04:41.254 22:50:13 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:04:41.254 22:50:13 -- spdk/autotest.sh@173 -- # 
timing_enter lib 00:04:41.254 22:50:13 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:41.254 22:50:13 -- common/autotest_common.sh@10 -- # set +x 00:04:41.254 22:50:13 -- spdk/autotest.sh@175 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:41.254 22:50:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:41.254 22:50:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:41.254 22:50:13 -- common/autotest_common.sh@10 -- # set +x 00:04:41.254 ************************************ 00:04:41.254 START TEST env 00:04:41.254 ************************************ 00:04:41.254 22:50:13 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:41.254 * Looking for test storage... 00:04:41.254 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:41.254 22:50:13 -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:41.254 22:50:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:41.254 22:50:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:41.254 22:50:13 -- common/autotest_common.sh@10 -- # set +x 00:04:41.254 ************************************ 00:04:41.254 START TEST env_memory 00:04:41.254 ************************************ 00:04:41.254 22:50:13 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:41.254 00:04:41.254 00:04:41.254 CUnit - A unit testing framework for C - Version 2.1-3 00:04:41.254 http://cunit.sourceforge.net/ 00:04:41.254 00:04:41.254 00:04:41.254 Suite: memory 00:04:41.254 Test: alloc and free memory map ...[2024-07-24 22:50:13.677142] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:41.513 passed 00:04:41.513 Test: mem map translation 
...[2024-07-24 22:50:13.696094] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:41.513 [2024-07-24 22:50:13.696110] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:41.513 [2024-07-24 22:50:13.696146] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:41.513 [2024-07-24 22:50:13.696156] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:41.513 passed 00:04:41.513 Test: mem map registration ...[2024-07-24 22:50:13.733105] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:41.513 [2024-07-24 22:50:13.733123] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:41.513 passed 00:04:41.513 Test: mem map adjacent registrations ...passed 00:04:41.513 00:04:41.513 Run Summary: Type Total Ran Passed Failed Inactive 00:04:41.513 suites 1 1 n/a 0 0 00:04:41.513 tests 4 4 4 0 0 00:04:41.513 asserts 152 152 152 0 n/a 00:04:41.513 00:04:41.513 Elapsed time = 0.136 seconds 00:04:41.513 00:04:41.513 real 0m0.151s 00:04:41.513 user 0m0.140s 00:04:41.513 sys 0m0.010s 00:04:41.513 22:50:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:41.513 22:50:13 -- common/autotest_common.sh@10 -- # set +x 00:04:41.513 ************************************ 00:04:41.513 END TEST env_memory 00:04:41.514 ************************************ 
00:04:41.514 22:50:13 -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:41.514 22:50:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:41.514 22:50:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:41.514 22:50:13 -- common/autotest_common.sh@10 -- # set +x 00:04:41.514 ************************************ 00:04:41.514 START TEST env_vtophys 00:04:41.514 ************************************ 00:04:41.514 22:50:13 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:41.514 EAL: lib.eal log level changed from notice to debug 00:04:41.514 EAL: Detected lcore 0 as core 0 on socket 0 00:04:41.514 EAL: Detected lcore 1 as core 1 on socket 0 00:04:41.514 EAL: Detected lcore 2 as core 2 on socket 0 00:04:41.514 EAL: Detected lcore 3 as core 3 on socket 0 00:04:41.514 EAL: Detected lcore 4 as core 4 on socket 0 00:04:41.514 EAL: Detected lcore 5 as core 5 on socket 0 00:04:41.514 EAL: Detected lcore 6 as core 6 on socket 0 00:04:41.514 EAL: Detected lcore 7 as core 8 on socket 0 00:04:41.514 EAL: Detected lcore 8 as core 9 on socket 0 00:04:41.514 EAL: Detected lcore 9 as core 10 on socket 0 00:04:41.514 EAL: Detected lcore 10 as core 11 on socket 0 00:04:41.514 EAL: Detected lcore 11 as core 12 on socket 0 00:04:41.514 EAL: Detected lcore 12 as core 13 on socket 0 00:04:41.514 EAL: Detected lcore 13 as core 14 on socket 0 00:04:41.514 EAL: Detected lcore 14 as core 16 on socket 0 00:04:41.514 EAL: Detected lcore 15 as core 17 on socket 0 00:04:41.514 EAL: Detected lcore 16 as core 18 on socket 0 00:04:41.514 EAL: Detected lcore 17 as core 19 on socket 0 00:04:41.514 EAL: Detected lcore 18 as core 20 on socket 0 00:04:41.514 EAL: Detected lcore 19 as core 21 on socket 0 00:04:41.514 EAL: Detected lcore 20 as core 22 on socket 0 00:04:41.514 EAL: Detected lcore 21 as core 24 on socket 0 00:04:41.514 EAL: 
Detected lcore 22 as core 25 on socket 0 00:04:41.514 EAL: Detected lcore 23 as core 26 on socket 0 00:04:41.514 EAL: Detected lcore 24 as core 27 on socket 0 00:04:41.514 EAL: Detected lcore 25 as core 28 on socket 0 00:04:41.514 EAL: Detected lcore 26 as core 29 on socket 0 00:04:41.514 EAL: Detected lcore 27 as core 30 on socket 0 00:04:41.514 EAL: Detected lcore 28 as core 0 on socket 1 00:04:41.514 EAL: Detected lcore 29 as core 1 on socket 1 00:04:41.514 EAL: Detected lcore 30 as core 2 on socket 1 00:04:41.514 EAL: Detected lcore 31 as core 3 on socket 1 00:04:41.514 EAL: Detected lcore 32 as core 4 on socket 1 00:04:41.514 EAL: Detected lcore 33 as core 5 on socket 1 00:04:41.514 EAL: Detected lcore 34 as core 6 on socket 1 00:04:41.514 EAL: Detected lcore 35 as core 8 on socket 1 00:04:41.514 EAL: Detected lcore 36 as core 9 on socket 1 00:04:41.514 EAL: Detected lcore 37 as core 10 on socket 1 00:04:41.514 EAL: Detected lcore 38 as core 11 on socket 1 00:04:41.514 EAL: Detected lcore 39 as core 12 on socket 1 00:04:41.514 EAL: Detected lcore 40 as core 13 on socket 1 00:04:41.514 EAL: Detected lcore 41 as core 14 on socket 1 00:04:41.514 EAL: Detected lcore 42 as core 16 on socket 1 00:04:41.514 EAL: Detected lcore 43 as core 17 on socket 1 00:04:41.514 EAL: Detected lcore 44 as core 18 on socket 1 00:04:41.514 EAL: Detected lcore 45 as core 19 on socket 1 00:04:41.514 EAL: Detected lcore 46 as core 20 on socket 1 00:04:41.514 EAL: Detected lcore 47 as core 21 on socket 1 00:04:41.514 EAL: Detected lcore 48 as core 22 on socket 1 00:04:41.514 EAL: Detected lcore 49 as core 24 on socket 1 00:04:41.514 EAL: Detected lcore 50 as core 25 on socket 1 00:04:41.514 EAL: Detected lcore 51 as core 26 on socket 1 00:04:41.514 EAL: Detected lcore 52 as core 27 on socket 1 00:04:41.514 EAL: Detected lcore 53 as core 28 on socket 1 00:04:41.514 EAL: Detected lcore 54 as core 29 on socket 1 00:04:41.514 EAL: Detected lcore 55 as core 30 on socket 1 00:04:41.514 EAL: 
Detected lcore 56 as core 0 on socket 0 00:04:41.514 EAL: Detected lcore 57 as core 1 on socket 0 00:04:41.514 EAL: Detected lcore 58 as core 2 on socket 0 00:04:41.514 EAL: Detected lcore 59 as core 3 on socket 0 00:04:41.514 EAL: Detected lcore 60 as core 4 on socket 0 00:04:41.514 EAL: Detected lcore 61 as core 5 on socket 0 00:04:41.514 EAL: Detected lcore 62 as core 6 on socket 0 00:04:41.514 EAL: Detected lcore 63 as core 8 on socket 0 00:04:41.514 EAL: Detected lcore 64 as core 9 on socket 0 00:04:41.514 EAL: Detected lcore 65 as core 10 on socket 0 00:04:41.514 EAL: Detected lcore 66 as core 11 on socket 0 00:04:41.514 EAL: Detected lcore 67 as core 12 on socket 0 00:04:41.514 EAL: Detected lcore 68 as core 13 on socket 0 00:04:41.514 EAL: Detected lcore 69 as core 14 on socket 0 00:04:41.514 EAL: Detected lcore 70 as core 16 on socket 0 00:04:41.514 EAL: Detected lcore 71 as core 17 on socket 0 00:04:41.514 EAL: Detected lcore 72 as core 18 on socket 0 00:04:41.514 EAL: Detected lcore 73 as core 19 on socket 0 00:04:41.514 EAL: Detected lcore 74 as core 20 on socket 0 00:04:41.514 EAL: Detected lcore 75 as core 21 on socket 0 00:04:41.514 EAL: Detected lcore 76 as core 22 on socket 0 00:04:41.514 EAL: Detected lcore 77 as core 24 on socket 0 00:04:41.514 EAL: Detected lcore 78 as core 25 on socket 0 00:04:41.514 EAL: Detected lcore 79 as core 26 on socket 0 00:04:41.514 EAL: Detected lcore 80 as core 27 on socket 0 00:04:41.514 EAL: Detected lcore 81 as core 28 on socket 0 00:04:41.514 EAL: Detected lcore 82 as core 29 on socket 0 00:04:41.514 EAL: Detected lcore 83 as core 30 on socket 0 00:04:41.514 EAL: Detected lcore 84 as core 0 on socket 1 00:04:41.514 EAL: Detected lcore 85 as core 1 on socket 1 00:04:41.514 EAL: Detected lcore 86 as core 2 on socket 1 00:04:41.514 EAL: Detected lcore 87 as core 3 on socket 1 00:04:41.514 EAL: Detected lcore 88 as core 4 on socket 1 00:04:41.514 EAL: Detected lcore 89 as core 5 on socket 1 00:04:41.514 EAL: Detected 
lcore 90 as core 6 on socket 1 00:04:41.514 EAL: Detected lcore 91 as core 8 on socket 1 00:04:41.514 EAL: Detected lcore 92 as core 9 on socket 1 00:04:41.514 EAL: Detected lcore 93 as core 10 on socket 1 00:04:41.514 EAL: Detected lcore 94 as core 11 on socket 1 00:04:41.514 EAL: Detected lcore 95 as core 12 on socket 1 00:04:41.514 EAL: Detected lcore 96 as core 13 on socket 1 00:04:41.514 EAL: Detected lcore 97 as core 14 on socket 1 00:04:41.514 EAL: Detected lcore 98 as core 16 on socket 1 00:04:41.514 EAL: Detected lcore 99 as core 17 on socket 1 00:04:41.514 EAL: Detected lcore 100 as core 18 on socket 1 00:04:41.514 EAL: Detected lcore 101 as core 19 on socket 1 00:04:41.514 EAL: Detected lcore 102 as core 20 on socket 1 00:04:41.514 EAL: Detected lcore 103 as core 21 on socket 1 00:04:41.514 EAL: Detected lcore 104 as core 22 on socket 1 00:04:41.514 EAL: Detected lcore 105 as core 24 on socket 1 00:04:41.514 EAL: Detected lcore 106 as core 25 on socket 1 00:04:41.514 EAL: Detected lcore 107 as core 26 on socket 1 00:04:41.514 EAL: Detected lcore 108 as core 27 on socket 1 00:04:41.514 EAL: Detected lcore 109 as core 28 on socket 1 00:04:41.514 EAL: Detected lcore 110 as core 29 on socket 1 00:04:41.514 EAL: Detected lcore 111 as core 30 on socket 1 00:04:41.514 EAL: Maximum logical cores by configuration: 128 00:04:41.514 EAL: Detected CPU lcores: 112 00:04:41.514 EAL: Detected NUMA nodes: 2 00:04:41.514 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:04:41.514 EAL: Detected shared linkage of DPDK 00:04:41.514 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:04:41.514 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:04:41.514 EAL: Registered [vdev] bus. 
00:04:41.514 EAL: bus.vdev log level changed from disabled to notice 00:04:41.514 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:04:41.514 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:04:41.514 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:04:41.514 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:04:41.514 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:04:41.514 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:04:41.514 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:04:41.514 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:04:41.514 EAL: No shared files mode enabled, IPC will be disabled 00:04:41.514 EAL: No shared files mode enabled, IPC is disabled 00:04:41.514 EAL: Bus pci wants IOVA as 'DC' 00:04:41.514 EAL: Bus vdev wants IOVA as 'DC' 00:04:41.515 EAL: Buses did not request a specific IOVA mode. 00:04:41.515 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:41.515 EAL: Selected IOVA mode 'VA' 00:04:41.515 EAL: No free 2048 kB hugepages reported on node 1 00:04:41.515 EAL: Probing VFIO support... 00:04:41.515 EAL: IOMMU type 1 (Type 1) is supported 00:04:41.515 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:41.515 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:41.515 EAL: VFIO support initialized 00:04:41.515 EAL: Ask a virtual area of 0x2e000 bytes 00:04:41.515 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:41.515 EAL: Setting up physically contiguous memory... 
00:04:41.515 EAL: Setting maximum number of open files to 524288 00:04:41.515 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:41.515 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:41.515 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:41.515 EAL: Ask a virtual area of 0x61000 bytes 00:04:41.515 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:41.515 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:41.515 EAL: Ask a virtual area of 0x400000000 bytes 00:04:41.515 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:41.515 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:41.515 EAL: Ask a virtual area of 0x61000 bytes 00:04:41.515 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:41.515 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:41.515 EAL: Ask a virtual area of 0x400000000 bytes 00:04:41.515 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:41.515 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:41.515 EAL: Ask a virtual area of 0x61000 bytes 00:04:41.515 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:41.515 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:41.515 EAL: Ask a virtual area of 0x400000000 bytes 00:04:41.515 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:41.515 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:41.515 EAL: Ask a virtual area of 0x61000 bytes 00:04:41.515 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:41.515 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:41.515 EAL: Ask a virtual area of 0x400000000 bytes 00:04:41.515 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:41.515 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:41.515 EAL: Creating 4 segment lists: n_segs:8192 
socket_id:1 hugepage_sz:2097152 00:04:41.515 EAL: Ask a virtual area of 0x61000 bytes 00:04:41.515 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:41.515 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:41.515 EAL: Ask a virtual area of 0x400000000 bytes 00:04:41.515 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:41.515 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:41.515 EAL: Ask a virtual area of 0x61000 bytes 00:04:41.515 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:41.515 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:41.515 EAL: Ask a virtual area of 0x400000000 bytes 00:04:41.515 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:41.515 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:41.515 EAL: Ask a virtual area of 0x61000 bytes 00:04:41.515 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:41.515 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:41.515 EAL: Ask a virtual area of 0x400000000 bytes 00:04:41.515 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:41.515 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:41.515 EAL: Ask a virtual area of 0x61000 bytes 00:04:41.515 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:41.515 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:41.515 EAL: Ask a virtual area of 0x400000000 bytes 00:04:41.515 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:04:41.515 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:41.515 EAL: Hugepages will be freed exactly as allocated. 
00:04:41.515 EAL: No shared files mode enabled, IPC is disabled 00:04:41.515 EAL: No shared files mode enabled, IPC is disabled 00:04:41.515 EAL: TSC frequency is ~2500000 KHz 00:04:41.515 EAL: Main lcore 0 is ready (tid=7f41fa405a00;cpuset=[0]) 00:04:41.515 EAL: Trying to obtain current memory policy. 00:04:41.515 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:41.515 EAL: Restoring previous memory policy: 0 00:04:41.515 EAL: request: mp_malloc_sync 00:04:41.515 EAL: No shared files mode enabled, IPC is disabled 00:04:41.515 EAL: Heap on socket 0 was expanded by 2MB 00:04:41.515 EAL: PCI device 0000:41:00.0 on NUMA socket 0 00:04:41.515 EAL: probe driver: 8086:37d2 net_i40e 00:04:41.515 EAL: Not managed by a supported kernel driver, skipped 00:04:41.515 EAL: PCI device 0000:41:00.1 on NUMA socket 0 00:04:41.515 EAL: probe driver: 8086:37d2 net_i40e 00:04:41.515 EAL: Not managed by a supported kernel driver, skipped 00:04:41.515 EAL: No shared files mode enabled, IPC is disabled 00:04:41.515 EAL: No shared files mode enabled, IPC is disabled 00:04:41.515 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:41.515 EAL: Mem event callback 'spdk:(nil)' registered 00:04:41.515 00:04:41.515 00:04:41.515 CUnit - A unit testing framework for C - Version 2.1-3 00:04:41.515 http://cunit.sourceforge.net/ 00:04:41.515 00:04:41.515 00:04:41.515 Suite: components_suite 00:04:41.515 Test: vtophys_malloc_test ...passed 00:04:41.515 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:04:41.515 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:41.515 EAL: Restoring previous memory policy: 4 00:04:41.515 EAL: Calling mem event callback 'spdk:(nil)' 00:04:41.515 EAL: request: mp_malloc_sync 00:04:41.515 EAL: No shared files mode enabled, IPC is disabled 00:04:41.515 EAL: Heap on socket 0 was expanded by 4MB 00:04:41.515 EAL: Calling mem event callback 'spdk:(nil)' 00:04:41.515 EAL: request: mp_malloc_sync 00:04:41.515 EAL: No shared files mode enabled, IPC is disabled 00:04:41.515 EAL: Heap on socket 0 was shrunk by 4MB 00:04:41.515 EAL: Trying to obtain current memory policy. 00:04:41.515 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:41.515 EAL: Restoring previous memory policy: 4 00:04:41.515 EAL: Calling mem event callback 'spdk:(nil)' 00:04:41.515 EAL: request: mp_malloc_sync 00:04:41.515 EAL: No shared files mode enabled, IPC is disabled 00:04:41.515 EAL: Heap on socket 0 was expanded by 6MB 00:04:41.515 EAL: Calling mem event callback 'spdk:(nil)' 00:04:41.515 EAL: request: mp_malloc_sync 00:04:41.515 EAL: No shared files mode enabled, IPC is disabled 00:04:41.515 EAL: Heap on socket 0 was shrunk by 6MB 00:04:41.515 EAL: Trying to obtain current memory policy. 00:04:41.515 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:41.515 EAL: Restoring previous memory policy: 4 00:04:41.515 EAL: Calling mem event callback 'spdk:(nil)' 00:04:41.515 EAL: request: mp_malloc_sync 00:04:41.515 EAL: No shared files mode enabled, IPC is disabled 00:04:41.515 EAL: Heap on socket 0 was expanded by 10MB 00:04:41.515 EAL: Calling mem event callback 'spdk:(nil)' 00:04:41.515 EAL: request: mp_malloc_sync 00:04:41.515 EAL: No shared files mode enabled, IPC is disabled 00:04:41.515 EAL: Heap on socket 0 was shrunk by 10MB 00:04:41.515 EAL: Trying to obtain current memory policy. 
00:04:41.515 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:41.515 EAL: Restoring previous memory policy: 4 00:04:41.515 EAL: Calling mem event callback 'spdk:(nil)' 00:04:41.515 EAL: request: mp_malloc_sync 00:04:41.515 EAL: No shared files mode enabled, IPC is disabled 00:04:41.515 EAL: Heap on socket 0 was expanded by 18MB 00:04:41.515 EAL: Calling mem event callback 'spdk:(nil)' 00:04:41.515 EAL: request: mp_malloc_sync 00:04:41.515 EAL: No shared files mode enabled, IPC is disabled 00:04:41.515 EAL: Heap on socket 0 was shrunk by 18MB 00:04:41.515 EAL: Trying to obtain current memory policy. 00:04:41.515 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:41.515 EAL: Restoring previous memory policy: 4 00:04:41.515 EAL: Calling mem event callback 'spdk:(nil)' 00:04:41.515 EAL: request: mp_malloc_sync 00:04:41.515 EAL: No shared files mode enabled, IPC is disabled 00:04:41.515 EAL: Heap on socket 0 was expanded by 34MB 00:04:41.515 EAL: Calling mem event callback 'spdk:(nil)' 00:04:41.515 EAL: request: mp_malloc_sync 00:04:41.515 EAL: No shared files mode enabled, IPC is disabled 00:04:41.515 EAL: Heap on socket 0 was shrunk by 34MB 00:04:41.515 EAL: Trying to obtain current memory policy. 00:04:41.515 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:41.774 EAL: Restoring previous memory policy: 4 00:04:41.774 EAL: Calling mem event callback 'spdk:(nil)' 00:04:41.774 EAL: request: mp_malloc_sync 00:04:41.774 EAL: No shared files mode enabled, IPC is disabled 00:04:41.774 EAL: Heap on socket 0 was expanded by 66MB 00:04:41.774 EAL: Calling mem event callback 'spdk:(nil)' 00:04:41.774 EAL: request: mp_malloc_sync 00:04:41.774 EAL: No shared files mode enabled, IPC is disabled 00:04:41.774 EAL: Heap on socket 0 was shrunk by 66MB 00:04:41.774 EAL: Trying to obtain current memory policy. 
00:04:41.774 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:41.774 EAL: Restoring previous memory policy: 4 00:04:41.774 EAL: Calling mem event callback 'spdk:(nil)' 00:04:41.774 EAL: request: mp_malloc_sync 00:04:41.774 EAL: No shared files mode enabled, IPC is disabled 00:04:41.774 EAL: Heap on socket 0 was expanded by 130MB 00:04:41.774 EAL: Calling mem event callback 'spdk:(nil)' 00:04:41.774 EAL: request: mp_malloc_sync 00:04:41.774 EAL: No shared files mode enabled, IPC is disabled 00:04:41.774 EAL: Heap on socket 0 was shrunk by 130MB 00:04:41.774 EAL: Trying to obtain current memory policy. 00:04:41.774 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:41.774 EAL: Restoring previous memory policy: 4 00:04:41.774 EAL: Calling mem event callback 'spdk:(nil)' 00:04:41.774 EAL: request: mp_malloc_sync 00:04:41.774 EAL: No shared files mode enabled, IPC is disabled 00:04:41.774 EAL: Heap on socket 0 was expanded by 258MB 00:04:41.774 EAL: Calling mem event callback 'spdk:(nil)' 00:04:41.774 EAL: request: mp_malloc_sync 00:04:41.774 EAL: No shared files mode enabled, IPC is disabled 00:04:41.774 EAL: Heap on socket 0 was shrunk by 258MB 00:04:41.774 EAL: Trying to obtain current memory policy. 00:04:41.774 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.034 EAL: Restoring previous memory policy: 4 00:04:42.034 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.034 EAL: request: mp_malloc_sync 00:04:42.034 EAL: No shared files mode enabled, IPC is disabled 00:04:42.034 EAL: Heap on socket 0 was expanded by 514MB 00:04:42.034 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.034 EAL: request: mp_malloc_sync 00:04:42.034 EAL: No shared files mode enabled, IPC is disabled 00:04:42.034 EAL: Heap on socket 0 was shrunk by 514MB 00:04:42.034 EAL: Trying to obtain current memory policy. 
00:04:42.034 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.293 EAL: Restoring previous memory policy: 4 00:04:42.293 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.293 EAL: request: mp_malloc_sync 00:04:42.293 EAL: No shared files mode enabled, IPC is disabled 00:04:42.293 EAL: Heap on socket 0 was expanded by 1026MB 00:04:42.553 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.553 EAL: request: mp_malloc_sync 00:04:42.553 EAL: No shared files mode enabled, IPC is disabled 00:04:42.553 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:42.553 passed 00:04:42.553 00:04:42.553 Run Summary: Type Total Ran Passed Failed Inactive 00:04:42.553 suites 1 1 n/a 0 0 00:04:42.553 tests 2 2 2 0 0 00:04:42.553 asserts 497 497 497 0 n/a 00:04:42.553 00:04:42.553 Elapsed time = 0.955 seconds 00:04:42.553 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.553 EAL: request: mp_malloc_sync 00:04:42.553 EAL: No shared files mode enabled, IPC is disabled 00:04:42.553 EAL: Heap on socket 0 was shrunk by 2MB 00:04:42.553 EAL: No shared files mode enabled, IPC is disabled 00:04:42.553 EAL: No shared files mode enabled, IPC is disabled 00:04:42.553 EAL: No shared files mode enabled, IPC is disabled 00:04:42.553 00:04:42.553 real 0m1.077s 00:04:42.553 user 0m0.625s 00:04:42.553 sys 0m0.429s 00:04:42.553 22:50:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:42.553 22:50:14 -- common/autotest_common.sh@10 -- # set +x 00:04:42.553 ************************************ 00:04:42.553 END TEST env_vtophys 00:04:42.553 ************************************ 00:04:42.553 22:50:14 -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:42.553 22:50:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:42.553 22:50:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:42.553 22:50:14 -- common/autotest_common.sh@10 -- # set +x 00:04:42.553 ************************************ 00:04:42.553 
START TEST env_pci 00:04:42.553 ************************************ 00:04:42.553 22:50:14 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:42.553 00:04:42.553 00:04:42.553 CUnit - A unit testing framework for C - Version 2.1-3 00:04:42.553 http://cunit.sourceforge.net/ 00:04:42.553 00:04:42.553 00:04:42.553 Suite: pci 00:04:42.553 Test: pci_hook ...[2024-07-24 22:50:14.965330] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3020376 has claimed it 00:04:42.812 EAL: Cannot find device (10000:00:01.0) 00:04:42.812 EAL: Failed to attach device on primary process 00:04:42.812 passed 00:04:42.812 00:04:42.812 Run Summary: Type Total Ran Passed Failed Inactive 00:04:42.812 suites 1 1 n/a 0 0 00:04:42.812 tests 1 1 1 0 0 00:04:42.812 asserts 25 25 25 0 n/a 00:04:42.812 00:04:42.812 Elapsed time = 0.027 seconds 00:04:42.812 00:04:42.812 real 0m0.040s 00:04:42.812 user 0m0.010s 00:04:42.812 sys 0m0.030s 00:04:42.812 22:50:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:42.812 22:50:14 -- common/autotest_common.sh@10 -- # set +x 00:04:42.812 ************************************ 00:04:42.812 END TEST env_pci 00:04:42.812 ************************************ 00:04:42.812 22:50:15 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:42.812 22:50:15 -- env/env.sh@15 -- # uname 00:04:42.812 22:50:15 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:42.812 22:50:15 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:42.812 22:50:15 -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:42.812 22:50:15 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:04:42.812 22:50:15 -- common/autotest_common.sh@1083 -- # 
xtrace_disable 00:04:42.812 22:50:15 -- common/autotest_common.sh@10 -- # set +x 00:04:42.812 ************************************ 00:04:42.812 START TEST env_dpdk_post_init 00:04:42.812 ************************************ 00:04:42.812 22:50:15 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:42.812 EAL: Detected CPU lcores: 112 00:04:42.812 EAL: Detected NUMA nodes: 2 00:04:42.812 EAL: Detected shared linkage of DPDK 00:04:42.812 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:42.812 EAL: Selected IOVA mode 'VA' 00:04:42.812 EAL: No free 2048 kB hugepages reported on node 1 00:04:42.812 EAL: VFIO support initialized 00:04:42.813 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:42.813 EAL: Using IOMMU type 1 (Type 1) 00:04:42.813 EAL: Ignore mapping IO port bar(1) 00:04:42.813 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:04:42.813 EAL: Ignore mapping IO port bar(1) 00:04:42.813 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:04:42.813 EAL: Ignore mapping IO port bar(1) 00:04:42.813 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:04:42.813 EAL: Ignore mapping IO port bar(1) 00:04:42.813 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:04:42.813 EAL: Ignore mapping IO port bar(1) 00:04:42.813 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:04:43.072 EAL: Ignore mapping IO port bar(1) 00:04:43.072 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:04:43.072 EAL: Ignore mapping IO port bar(1) 00:04:43.072 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:04:43.072 EAL: Ignore mapping IO port bar(1) 00:04:43.072 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 
00:04:43.072 EAL: Ignore mapping IO port bar(1) 00:04:43.072 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:04:43.072 EAL: Ignore mapping IO port bar(1) 00:04:43.072 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:04:43.072 EAL: Ignore mapping IO port bar(1) 00:04:43.072 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:04:43.072 EAL: Ignore mapping IO port bar(1) 00:04:43.072 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:04:43.072 EAL: Ignore mapping IO port bar(1) 00:04:43.072 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:04:43.072 EAL: Ignore mapping IO port bar(1) 00:04:43.072 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:04:43.072 EAL: Ignore mapping IO port bar(1) 00:04:43.072 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:04:43.072 EAL: Ignore mapping IO port bar(1) 00:04:43.072 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:04:44.011 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:d8:00.0 (socket 1) 00:04:47.301 EAL: Releasing PCI mapped resource for 0000:d8:00.0 00:04:47.301 EAL: Calling pci_unmap_resource for 0000:d8:00.0 at 0x202001040000 00:04:47.869 Starting DPDK initialization... 00:04:47.869 Starting SPDK post initialization... 00:04:47.869 SPDK NVMe probe 00:04:47.869 Attaching to 0000:d8:00.0 00:04:47.869 Attached to 0000:d8:00.0 00:04:47.869 Cleaning up... 
00:04:47.869 00:04:47.869 real 0m4.964s 00:04:47.869 user 0m3.637s 00:04:47.869 sys 0m0.389s 00:04:47.869 22:50:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:47.869 22:50:20 -- common/autotest_common.sh@10 -- # set +x 00:04:47.869 ************************************ 00:04:47.869 END TEST env_dpdk_post_init 00:04:47.869 ************************************ 00:04:47.869 22:50:20 -- env/env.sh@26 -- # uname 00:04:47.869 22:50:20 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:47.869 22:50:20 -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:47.869 22:50:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:47.870 22:50:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:47.870 22:50:20 -- common/autotest_common.sh@10 -- # set +x 00:04:47.870 ************************************ 00:04:47.870 START TEST env_mem_callbacks 00:04:47.870 ************************************ 00:04:47.870 22:50:20 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:47.870 EAL: Detected CPU lcores: 112 00:04:47.870 EAL: Detected NUMA nodes: 2 00:04:47.870 EAL: Detected shared linkage of DPDK 00:04:47.870 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:47.870 EAL: Selected IOVA mode 'VA' 00:04:47.870 EAL: No free 2048 kB hugepages reported on node 1 00:04:47.870 EAL: VFIO support initialized 00:04:47.870 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:47.870 00:04:47.870 00:04:47.870 CUnit - A unit testing framework for C - Version 2.1-3 00:04:47.870 http://cunit.sourceforge.net/ 00:04:47.870 00:04:47.870 00:04:47.870 Suite: memory 00:04:47.870 Test: test ... 
00:04:47.870 register 0x200000200000 2097152 00:04:47.870 malloc 3145728 00:04:47.870 register 0x200000400000 4194304 00:04:47.870 buf 0x200000500000 len 3145728 PASSED 00:04:47.870 malloc 64 00:04:47.870 buf 0x2000004fff40 len 64 PASSED 00:04:47.870 malloc 4194304 00:04:47.870 register 0x200000800000 6291456 00:04:47.870 buf 0x200000a00000 len 4194304 PASSED 00:04:47.870 free 0x200000500000 3145728 00:04:47.870 free 0x2000004fff40 64 00:04:47.870 unregister 0x200000400000 4194304 PASSED 00:04:47.870 free 0x200000a00000 4194304 00:04:47.870 unregister 0x200000800000 6291456 PASSED 00:04:47.870 malloc 8388608 00:04:47.870 register 0x200000400000 10485760 00:04:47.870 buf 0x200000600000 len 8388608 PASSED 00:04:47.870 free 0x200000600000 8388608 00:04:47.870 unregister 0x200000400000 10485760 PASSED 00:04:47.870 passed 00:04:47.870 00:04:47.870 Run Summary: Type Total Ran Passed Failed Inactive 00:04:47.870 suites 1 1 n/a 0 0 00:04:47.870 tests 1 1 1 0 0 00:04:47.870 asserts 15 15 15 0 n/a 00:04:47.870 00:04:47.870 Elapsed time = 0.005 seconds 00:04:47.870 00:04:47.870 real 0m0.064s 00:04:47.870 user 0m0.018s 00:04:47.870 sys 0m0.046s 00:04:47.870 22:50:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:47.870 22:50:20 -- common/autotest_common.sh@10 -- # set +x 00:04:47.870 ************************************ 00:04:47.870 END TEST env_mem_callbacks 00:04:47.870 ************************************ 00:04:47.870 00:04:47.870 real 0m6.649s 00:04:47.870 user 0m4.552s 00:04:47.870 sys 0m1.180s 00:04:47.870 22:50:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:47.870 22:50:20 -- common/autotest_common.sh@10 -- # set +x 00:04:47.870 ************************************ 00:04:47.870 END TEST env 00:04:47.870 ************************************ 00:04:47.870 22:50:20 -- spdk/autotest.sh@176 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:47.870 22:50:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 
']' 00:04:47.870 22:50:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:47.870 22:50:20 -- common/autotest_common.sh@10 -- # set +x 00:04:47.870 ************************************ 00:04:47.870 START TEST rpc 00:04:47.870 ************************************ 00:04:47.870 22:50:20 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:48.129 * Looking for test storage... 00:04:48.129 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:48.129 22:50:20 -- rpc/rpc.sh@65 -- # spdk_pid=3021458 00:04:48.129 22:50:20 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:48.129 22:50:20 -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:48.129 22:50:20 -- rpc/rpc.sh@67 -- # waitforlisten 3021458 00:04:48.130 22:50:20 -- common/autotest_common.sh@819 -- # '[' -z 3021458 ']' 00:04:48.130 22:50:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:48.130 22:50:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:48.130 22:50:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:48.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:48.130 22:50:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:48.130 22:50:20 -- common/autotest_common.sh@10 -- # set +x 00:04:48.130 [2024-07-24 22:50:20.370891] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:04:48.130 [2024-07-24 22:50:20.370940] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3021458 ] 00:04:48.130 EAL: No free 2048 kB hugepages reported on node 1 00:04:48.130 [2024-07-24 22:50:20.440799] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.130 [2024-07-24 22:50:20.477447] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:48.130 [2024-07-24 22:50:20.477553] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:48.130 [2024-07-24 22:50:20.477564] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3021458' to capture a snapshot of events at runtime. 00:04:48.130 [2024-07-24 22:50:20.477573] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3021458 for offline analysis/debug. 
00:04:48.130 [2024-07-24 22:50:20.477600] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.068 22:50:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:49.068 22:50:21 -- common/autotest_common.sh@852 -- # return 0 00:04:49.068 22:50:21 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:49.068 22:50:21 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:49.068 22:50:21 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:49.068 22:50:21 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:49.068 22:50:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:49.068 22:50:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:49.068 22:50:21 -- common/autotest_common.sh@10 -- # set +x 00:04:49.068 ************************************ 00:04:49.068 START TEST rpc_integrity 00:04:49.068 ************************************ 00:04:49.068 22:50:21 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:04:49.068 22:50:21 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:49.068 22:50:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:49.068 22:50:21 -- common/autotest_common.sh@10 -- # set +x 00:04:49.068 22:50:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:49.068 22:50:21 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:49.068 22:50:21 -- rpc/rpc.sh@13 -- # jq length 00:04:49.068 22:50:21 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 
00:04:49.068 22:50:21 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:49.068 22:50:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:49.068 22:50:21 -- common/autotest_common.sh@10 -- # set +x 00:04:49.068 22:50:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:49.068 22:50:21 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:49.068 22:50:21 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:49.068 22:50:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:49.068 22:50:21 -- common/autotest_common.sh@10 -- # set +x 00:04:49.068 22:50:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:49.068 22:50:21 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:49.068 { 00:04:49.068 "name": "Malloc0", 00:04:49.068 "aliases": [ 00:04:49.068 "e6dc024c-39ba-4e05-9b8a-204434f74d2c" 00:04:49.068 ], 00:04:49.068 "product_name": "Malloc disk", 00:04:49.068 "block_size": 512, 00:04:49.068 "num_blocks": 16384, 00:04:49.068 "uuid": "e6dc024c-39ba-4e05-9b8a-204434f74d2c", 00:04:49.068 "assigned_rate_limits": { 00:04:49.068 "rw_ios_per_sec": 0, 00:04:49.068 "rw_mbytes_per_sec": 0, 00:04:49.068 "r_mbytes_per_sec": 0, 00:04:49.068 "w_mbytes_per_sec": 0 00:04:49.068 }, 00:04:49.068 "claimed": false, 00:04:49.068 "zoned": false, 00:04:49.068 "supported_io_types": { 00:04:49.068 "read": true, 00:04:49.068 "write": true, 00:04:49.068 "unmap": true, 00:04:49.068 "write_zeroes": true, 00:04:49.068 "flush": true, 00:04:49.068 "reset": true, 00:04:49.068 "compare": false, 00:04:49.068 "compare_and_write": false, 00:04:49.068 "abort": true, 00:04:49.068 "nvme_admin": false, 00:04:49.068 "nvme_io": false 00:04:49.068 }, 00:04:49.068 "memory_domains": [ 00:04:49.068 { 00:04:49.068 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:49.068 "dma_device_type": 2 00:04:49.068 } 00:04:49.068 ], 00:04:49.068 "driver_specific": {} 00:04:49.068 } 00:04:49.068 ]' 00:04:49.068 22:50:21 -- rpc/rpc.sh@17 -- # jq length 00:04:49.068 22:50:21 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 
00:04:49.068 22:50:21 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:49.068 22:50:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:49.068 22:50:21 -- common/autotest_common.sh@10 -- # set +x 00:04:49.068 [2024-07-24 22:50:21.293631] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:49.068 [2024-07-24 22:50:21.293666] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:49.068 [2024-07-24 22:50:21.293679] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1a11710 00:04:49.068 [2024-07-24 22:50:21.293687] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:49.068 [2024-07-24 22:50:21.294746] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:49.068 [2024-07-24 22:50:21.294769] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:49.068 Passthru0 00:04:49.068 22:50:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:49.068 22:50:21 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:49.068 22:50:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:49.068 22:50:21 -- common/autotest_common.sh@10 -- # set +x 00:04:49.068 22:50:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:49.068 22:50:21 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:49.068 { 00:04:49.068 "name": "Malloc0", 00:04:49.068 "aliases": [ 00:04:49.068 "e6dc024c-39ba-4e05-9b8a-204434f74d2c" 00:04:49.068 ], 00:04:49.068 "product_name": "Malloc disk", 00:04:49.068 "block_size": 512, 00:04:49.068 "num_blocks": 16384, 00:04:49.068 "uuid": "e6dc024c-39ba-4e05-9b8a-204434f74d2c", 00:04:49.068 "assigned_rate_limits": { 00:04:49.068 "rw_ios_per_sec": 0, 00:04:49.068 "rw_mbytes_per_sec": 0, 00:04:49.068 "r_mbytes_per_sec": 0, 00:04:49.068 "w_mbytes_per_sec": 0 00:04:49.068 }, 00:04:49.068 "claimed": true, 00:04:49.068 "claim_type": "exclusive_write", 00:04:49.068 "zoned": 
false, 00:04:49.068 "supported_io_types": { 00:04:49.068 "read": true, 00:04:49.068 "write": true, 00:04:49.068 "unmap": true, 00:04:49.068 "write_zeroes": true, 00:04:49.068 "flush": true, 00:04:49.068 "reset": true, 00:04:49.068 "compare": false, 00:04:49.068 "compare_and_write": false, 00:04:49.068 "abort": true, 00:04:49.068 "nvme_admin": false, 00:04:49.068 "nvme_io": false 00:04:49.069 }, 00:04:49.069 "memory_domains": [ 00:04:49.069 { 00:04:49.069 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:49.069 "dma_device_type": 2 00:04:49.069 } 00:04:49.069 ], 00:04:49.069 "driver_specific": {} 00:04:49.069 }, 00:04:49.069 { 00:04:49.069 "name": "Passthru0", 00:04:49.069 "aliases": [ 00:04:49.069 "fefcb564-3c00-5c5f-a784-61ae246e556d" 00:04:49.069 ], 00:04:49.069 "product_name": "passthru", 00:04:49.069 "block_size": 512, 00:04:49.069 "num_blocks": 16384, 00:04:49.069 "uuid": "fefcb564-3c00-5c5f-a784-61ae246e556d", 00:04:49.069 "assigned_rate_limits": { 00:04:49.069 "rw_ios_per_sec": 0, 00:04:49.069 "rw_mbytes_per_sec": 0, 00:04:49.069 "r_mbytes_per_sec": 0, 00:04:49.069 "w_mbytes_per_sec": 0 00:04:49.069 }, 00:04:49.069 "claimed": false, 00:04:49.069 "zoned": false, 00:04:49.069 "supported_io_types": { 00:04:49.069 "read": true, 00:04:49.069 "write": true, 00:04:49.069 "unmap": true, 00:04:49.069 "write_zeroes": true, 00:04:49.069 "flush": true, 00:04:49.069 "reset": true, 00:04:49.069 "compare": false, 00:04:49.069 "compare_and_write": false, 00:04:49.069 "abort": true, 00:04:49.069 "nvme_admin": false, 00:04:49.069 "nvme_io": false 00:04:49.069 }, 00:04:49.069 "memory_domains": [ 00:04:49.069 { 00:04:49.069 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:49.069 "dma_device_type": 2 00:04:49.069 } 00:04:49.069 ], 00:04:49.069 "driver_specific": { 00:04:49.069 "passthru": { 00:04:49.069 "name": "Passthru0", 00:04:49.069 "base_bdev_name": "Malloc0" 00:04:49.069 } 00:04:49.069 } 00:04:49.069 } 00:04:49.069 ]' 00:04:49.069 22:50:21 -- rpc/rpc.sh@21 -- # jq length 
00:04:49.069 22:50:21 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:49.069 22:50:21 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:49.069 22:50:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:49.069 22:50:21 -- common/autotest_common.sh@10 -- # set +x 00:04:49.069 22:50:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:49.069 22:50:21 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:49.069 22:50:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:49.069 22:50:21 -- common/autotest_common.sh@10 -- # set +x 00:04:49.069 22:50:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:49.069 22:50:21 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:49.069 22:50:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:49.069 22:50:21 -- common/autotest_common.sh@10 -- # set +x 00:04:49.069 22:50:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:49.069 22:50:21 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:49.069 22:50:21 -- rpc/rpc.sh@26 -- # jq length 00:04:49.069 22:50:21 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:49.069 00:04:49.069 real 0m0.283s 00:04:49.069 user 0m0.168s 00:04:49.069 sys 0m0.056s 00:04:49.069 22:50:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:49.069 22:50:21 -- common/autotest_common.sh@10 -- # set +x 00:04:49.069 ************************************ 00:04:49.069 END TEST rpc_integrity 00:04:49.069 ************************************ 00:04:49.069 22:50:21 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:49.069 22:50:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:49.069 22:50:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:49.069 22:50:21 -- common/autotest_common.sh@10 -- # set +x 00:04:49.069 ************************************ 00:04:49.069 START TEST rpc_plugins 00:04:49.069 ************************************ 00:04:49.069 22:50:21 -- common/autotest_common.sh@1104 -- # rpc_plugins 00:04:49.069 22:50:21 -- 
rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:49.069 22:50:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:49.069 22:50:21 -- common/autotest_common.sh@10 -- # set +x 00:04:49.069 22:50:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:49.069 22:50:21 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:49.069 22:50:21 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:49.069 22:50:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:49.069 22:50:21 -- common/autotest_common.sh@10 -- # set +x 00:04:49.329 22:50:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:49.329 22:50:21 -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:49.329 { 00:04:49.329 "name": "Malloc1", 00:04:49.329 "aliases": [ 00:04:49.329 "c9235df9-c0e8-407a-a2d9-f05dc63a1fd2" 00:04:49.329 ], 00:04:49.329 "product_name": "Malloc disk", 00:04:49.329 "block_size": 4096, 00:04:49.329 "num_blocks": 256, 00:04:49.329 "uuid": "c9235df9-c0e8-407a-a2d9-f05dc63a1fd2", 00:04:49.329 "assigned_rate_limits": { 00:04:49.329 "rw_ios_per_sec": 0, 00:04:49.329 "rw_mbytes_per_sec": 0, 00:04:49.329 "r_mbytes_per_sec": 0, 00:04:49.329 "w_mbytes_per_sec": 0 00:04:49.329 }, 00:04:49.329 "claimed": false, 00:04:49.329 "zoned": false, 00:04:49.329 "supported_io_types": { 00:04:49.329 "read": true, 00:04:49.329 "write": true, 00:04:49.329 "unmap": true, 00:04:49.329 "write_zeroes": true, 00:04:49.329 "flush": true, 00:04:49.329 "reset": true, 00:04:49.329 "compare": false, 00:04:49.329 "compare_and_write": false, 00:04:49.329 "abort": true, 00:04:49.329 "nvme_admin": false, 00:04:49.329 "nvme_io": false 00:04:49.329 }, 00:04:49.329 "memory_domains": [ 00:04:49.329 { 00:04:49.329 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:49.329 "dma_device_type": 2 00:04:49.329 } 00:04:49.329 ], 00:04:49.329 "driver_specific": {} 00:04:49.329 } 00:04:49.329 ]' 00:04:49.329 22:50:21 -- rpc/rpc.sh@32 -- # jq length 00:04:49.329 22:50:21 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:49.329 22:50:21 -- 
rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:49.329 22:50:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:49.329 22:50:21 -- common/autotest_common.sh@10 -- # set +x 00:04:49.329 22:50:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:49.329 22:50:21 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:49.329 22:50:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:49.329 22:50:21 -- common/autotest_common.sh@10 -- # set +x 00:04:49.329 22:50:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:49.329 22:50:21 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:49.329 22:50:21 -- rpc/rpc.sh@36 -- # jq length 00:04:49.329 22:50:21 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:49.329 00:04:49.329 real 0m0.142s 00:04:49.329 user 0m0.089s 00:04:49.329 sys 0m0.018s 00:04:49.329 22:50:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:49.329 22:50:21 -- common/autotest_common.sh@10 -- # set +x 00:04:49.329 ************************************ 00:04:49.329 END TEST rpc_plugins 00:04:49.329 ************************************ 00:04:49.329 22:50:21 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:49.329 22:50:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:49.329 22:50:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:49.329 22:50:21 -- common/autotest_common.sh@10 -- # set +x 00:04:49.329 ************************************ 00:04:49.329 START TEST rpc_trace_cmd_test 00:04:49.329 ************************************ 00:04:49.329 22:50:21 -- common/autotest_common.sh@1104 -- # rpc_trace_cmd_test 00:04:49.329 22:50:21 -- rpc/rpc.sh@40 -- # local info 00:04:49.329 22:50:21 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:49.329 22:50:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:49.329 22:50:21 -- common/autotest_common.sh@10 -- # set +x 00:04:49.329 22:50:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:49.329 22:50:21 -- 
rpc/rpc.sh@42 -- # info='{ 00:04:49.329 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3021458", 00:04:49.329 "tpoint_group_mask": "0x8", 00:04:49.329 "iscsi_conn": { 00:04:49.329 "mask": "0x2", 00:04:49.329 "tpoint_mask": "0x0" 00:04:49.329 }, 00:04:49.329 "scsi": { 00:04:49.329 "mask": "0x4", 00:04:49.329 "tpoint_mask": "0x0" 00:04:49.329 }, 00:04:49.329 "bdev": { 00:04:49.329 "mask": "0x8", 00:04:49.329 "tpoint_mask": "0xffffffffffffffff" 00:04:49.329 }, 00:04:49.329 "nvmf_rdma": { 00:04:49.329 "mask": "0x10", 00:04:49.329 "tpoint_mask": "0x0" 00:04:49.329 }, 00:04:49.329 "nvmf_tcp": { 00:04:49.329 "mask": "0x20", 00:04:49.329 "tpoint_mask": "0x0" 00:04:49.329 }, 00:04:49.329 "ftl": { 00:04:49.329 "mask": "0x40", 00:04:49.329 "tpoint_mask": "0x0" 00:04:49.329 }, 00:04:49.329 "blobfs": { 00:04:49.329 "mask": "0x80", 00:04:49.329 "tpoint_mask": "0x0" 00:04:49.329 }, 00:04:49.329 "dsa": { 00:04:49.329 "mask": "0x200", 00:04:49.329 "tpoint_mask": "0x0" 00:04:49.329 }, 00:04:49.329 "thread": { 00:04:49.329 "mask": "0x400", 00:04:49.329 "tpoint_mask": "0x0" 00:04:49.329 }, 00:04:49.329 "nvme_pcie": { 00:04:49.329 "mask": "0x800", 00:04:49.329 "tpoint_mask": "0x0" 00:04:49.329 }, 00:04:49.329 "iaa": { 00:04:49.329 "mask": "0x1000", 00:04:49.329 "tpoint_mask": "0x0" 00:04:49.329 }, 00:04:49.329 "nvme_tcp": { 00:04:49.329 "mask": "0x2000", 00:04:49.329 "tpoint_mask": "0x0" 00:04:49.329 }, 00:04:49.329 "bdev_nvme": { 00:04:49.329 "mask": "0x4000", 00:04:49.329 "tpoint_mask": "0x0" 00:04:49.329 } 00:04:49.329 }' 00:04:49.329 22:50:21 -- rpc/rpc.sh@43 -- # jq length 00:04:49.329 22:50:21 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:04:49.329 22:50:21 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:49.329 22:50:21 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:49.589 22:50:21 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:49.589 22:50:21 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:49.589 22:50:21 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:49.589 
22:50:21 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:49.589 22:50:21 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:49.589 22:50:21 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:49.589 00:04:49.589 real 0m0.215s 00:04:49.589 user 0m0.172s 00:04:49.589 sys 0m0.034s 00:04:49.589 22:50:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:49.589 22:50:21 -- common/autotest_common.sh@10 -- # set +x 00:04:49.589 ************************************ 00:04:49.589 END TEST rpc_trace_cmd_test 00:04:49.589 ************************************ 00:04:49.589 22:50:21 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:49.589 22:50:21 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:49.589 22:50:21 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:49.589 22:50:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:49.589 22:50:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:49.589 22:50:21 -- common/autotest_common.sh@10 -- # set +x 00:04:49.589 ************************************ 00:04:49.589 START TEST rpc_daemon_integrity 00:04:49.589 ************************************ 00:04:49.589 22:50:21 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:04:49.589 22:50:21 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:49.589 22:50:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:49.589 22:50:21 -- common/autotest_common.sh@10 -- # set +x 00:04:49.589 22:50:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:49.589 22:50:21 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:49.589 22:50:21 -- rpc/rpc.sh@13 -- # jq length 00:04:49.589 22:50:21 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:49.589 22:50:21 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:49.589 22:50:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:49.589 22:50:21 -- common/autotest_common.sh@10 -- # set +x 00:04:49.589 22:50:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:49.589 22:50:21 -- rpc/rpc.sh@15 -- # 
malloc=Malloc2 00:04:49.589 22:50:21 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:49.589 22:50:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:49.589 22:50:21 -- common/autotest_common.sh@10 -- # set +x 00:04:49.589 22:50:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:49.589 22:50:22 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:49.589 { 00:04:49.589 "name": "Malloc2", 00:04:49.589 "aliases": [ 00:04:49.589 "9c11da3e-b144-406a-babe-549dab2e4ff4" 00:04:49.589 ], 00:04:49.589 "product_name": "Malloc disk", 00:04:49.589 "block_size": 512, 00:04:49.589 "num_blocks": 16384, 00:04:49.589 "uuid": "9c11da3e-b144-406a-babe-549dab2e4ff4", 00:04:49.589 "assigned_rate_limits": { 00:04:49.589 "rw_ios_per_sec": 0, 00:04:49.589 "rw_mbytes_per_sec": 0, 00:04:49.589 "r_mbytes_per_sec": 0, 00:04:49.589 "w_mbytes_per_sec": 0 00:04:49.589 }, 00:04:49.589 "claimed": false, 00:04:49.589 "zoned": false, 00:04:49.589 "supported_io_types": { 00:04:49.589 "read": true, 00:04:49.589 "write": true, 00:04:49.589 "unmap": true, 00:04:49.589 "write_zeroes": true, 00:04:49.589 "flush": true, 00:04:49.589 "reset": true, 00:04:49.589 "compare": false, 00:04:49.589 "compare_and_write": false, 00:04:49.589 "abort": true, 00:04:49.589 "nvme_admin": false, 00:04:49.589 "nvme_io": false 00:04:49.589 }, 00:04:49.589 "memory_domains": [ 00:04:49.589 { 00:04:49.589 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:49.589 "dma_device_type": 2 00:04:49.589 } 00:04:49.589 ], 00:04:49.589 "driver_specific": {} 00:04:49.589 } 00:04:49.589 ]' 00:04:49.590 22:50:22 -- rpc/rpc.sh@17 -- # jq length 00:04:49.849 22:50:22 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:49.849 22:50:22 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:49.849 22:50:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:49.849 22:50:22 -- common/autotest_common.sh@10 -- # set +x 00:04:49.849 [2024-07-24 22:50:22.063719] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match 
on Malloc2 00:04:49.849 [2024-07-24 22:50:22.063748] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:49.849 [2024-07-24 22:50:22.063766] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1a110b0 00:04:49.849 [2024-07-24 22:50:22.063775] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:49.849 [2024-07-24 22:50:22.064740] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:49.849 [2024-07-24 22:50:22.064762] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:49.849 Passthru0 00:04:49.849 22:50:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:49.849 22:50:22 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:49.849 22:50:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:49.849 22:50:22 -- common/autotest_common.sh@10 -- # set +x 00:04:49.849 22:50:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:49.849 22:50:22 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:49.849 { 00:04:49.849 "name": "Malloc2", 00:04:49.849 "aliases": [ 00:04:49.849 "9c11da3e-b144-406a-babe-549dab2e4ff4" 00:04:49.849 ], 00:04:49.849 "product_name": "Malloc disk", 00:04:49.849 "block_size": 512, 00:04:49.849 "num_blocks": 16384, 00:04:49.849 "uuid": "9c11da3e-b144-406a-babe-549dab2e4ff4", 00:04:49.849 "assigned_rate_limits": { 00:04:49.849 "rw_ios_per_sec": 0, 00:04:49.849 "rw_mbytes_per_sec": 0, 00:04:49.849 "r_mbytes_per_sec": 0, 00:04:49.849 "w_mbytes_per_sec": 0 00:04:49.849 }, 00:04:49.849 "claimed": true, 00:04:49.849 "claim_type": "exclusive_write", 00:04:49.849 "zoned": false, 00:04:49.849 "supported_io_types": { 00:04:49.849 "read": true, 00:04:49.849 "write": true, 00:04:49.849 "unmap": true, 00:04:49.849 "write_zeroes": true, 00:04:49.849 "flush": true, 00:04:49.849 "reset": true, 00:04:49.849 "compare": false, 00:04:49.849 "compare_and_write": false, 00:04:49.849 "abort": true, 00:04:49.849 
"nvme_admin": false, 00:04:49.849 "nvme_io": false 00:04:49.849 }, 00:04:49.849 "memory_domains": [ 00:04:49.849 { 00:04:49.849 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:49.849 "dma_device_type": 2 00:04:49.849 } 00:04:49.849 ], 00:04:49.849 "driver_specific": {} 00:04:49.849 }, 00:04:49.849 { 00:04:49.849 "name": "Passthru0", 00:04:49.849 "aliases": [ 00:04:49.849 "fba21965-a5b3-5b90-b798-743906b52fa5" 00:04:49.849 ], 00:04:49.849 "product_name": "passthru", 00:04:49.849 "block_size": 512, 00:04:49.849 "num_blocks": 16384, 00:04:49.849 "uuid": "fba21965-a5b3-5b90-b798-743906b52fa5", 00:04:49.849 "assigned_rate_limits": { 00:04:49.849 "rw_ios_per_sec": 0, 00:04:49.849 "rw_mbytes_per_sec": 0, 00:04:49.849 "r_mbytes_per_sec": 0, 00:04:49.849 "w_mbytes_per_sec": 0 00:04:49.849 }, 00:04:49.849 "claimed": false, 00:04:49.849 "zoned": false, 00:04:49.849 "supported_io_types": { 00:04:49.849 "read": true, 00:04:49.849 "write": true, 00:04:49.849 "unmap": true, 00:04:49.849 "write_zeroes": true, 00:04:49.849 "flush": true, 00:04:49.849 "reset": true, 00:04:49.849 "compare": false, 00:04:49.849 "compare_and_write": false, 00:04:49.849 "abort": true, 00:04:49.849 "nvme_admin": false, 00:04:49.849 "nvme_io": false 00:04:49.849 }, 00:04:49.849 "memory_domains": [ 00:04:49.849 { 00:04:49.849 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:49.849 "dma_device_type": 2 00:04:49.849 } 00:04:49.849 ], 00:04:49.849 "driver_specific": { 00:04:49.849 "passthru": { 00:04:49.849 "name": "Passthru0", 00:04:49.849 "base_bdev_name": "Malloc2" 00:04:49.849 } 00:04:49.849 } 00:04:49.849 } 00:04:49.849 ]' 00:04:49.849 22:50:22 -- rpc/rpc.sh@21 -- # jq length 00:04:49.849 22:50:22 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:49.849 22:50:22 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:49.849 22:50:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:49.849 22:50:22 -- common/autotest_common.sh@10 -- # set +x 00:04:49.849 22:50:22 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:49.849 22:50:22 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:49.849 22:50:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:49.849 22:50:22 -- common/autotest_common.sh@10 -- # set +x 00:04:49.849 22:50:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:49.849 22:50:22 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:49.849 22:50:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:49.849 22:50:22 -- common/autotest_common.sh@10 -- # set +x 00:04:49.849 22:50:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:49.849 22:50:22 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:49.849 22:50:22 -- rpc/rpc.sh@26 -- # jq length 00:04:49.849 22:50:22 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:49.849 00:04:49.849 real 0m0.286s 00:04:49.849 user 0m0.166s 00:04:49.849 sys 0m0.058s 00:04:49.849 22:50:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:49.849 22:50:22 -- common/autotest_common.sh@10 -- # set +x 00:04:49.849 ************************************ 00:04:49.849 END TEST rpc_daemon_integrity 00:04:49.849 ************************************ 00:04:49.849 22:50:22 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:49.849 22:50:22 -- rpc/rpc.sh@84 -- # killprocess 3021458 00:04:49.849 22:50:22 -- common/autotest_common.sh@926 -- # '[' -z 3021458 ']' 00:04:49.849 22:50:22 -- common/autotest_common.sh@930 -- # kill -0 3021458 00:04:49.849 22:50:22 -- common/autotest_common.sh@931 -- # uname 00:04:49.850 22:50:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:49.850 22:50:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3021458 00:04:50.108 22:50:22 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:50.108 22:50:22 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:50.108 22:50:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3021458' 00:04:50.108 killing process 
with pid 3021458 00:04:50.108 22:50:22 -- common/autotest_common.sh@945 -- # kill 3021458 00:04:50.108 22:50:22 -- common/autotest_common.sh@950 -- # wait 3021458 00:04:50.368 00:04:50.368 real 0m2.382s 00:04:50.368 user 0m2.987s 00:04:50.368 sys 0m0.731s 00:04:50.368 22:50:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:50.368 22:50:22 -- common/autotest_common.sh@10 -- # set +x 00:04:50.368 ************************************ 00:04:50.368 END TEST rpc 00:04:50.368 ************************************ 00:04:50.368 22:50:22 -- spdk/autotest.sh@177 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:50.368 22:50:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:50.368 22:50:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:50.368 22:50:22 -- common/autotest_common.sh@10 -- # set +x 00:04:50.368 ************************************ 00:04:50.368 START TEST rpc_client 00:04:50.368 ************************************ 00:04:50.368 22:50:22 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:50.368 * Looking for test storage... 
00:04:50.368 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:50.368 22:50:22 -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:50.368 OK 00:04:50.368 22:50:22 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:50.368 00:04:50.368 real 0m0.127s 00:04:50.368 user 0m0.053s 00:04:50.368 sys 0m0.084s 00:04:50.368 22:50:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:50.368 22:50:22 -- common/autotest_common.sh@10 -- # set +x 00:04:50.368 ************************************ 00:04:50.368 END TEST rpc_client 00:04:50.368 ************************************ 00:04:50.628 22:50:22 -- spdk/autotest.sh@178 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:50.628 22:50:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:50.628 22:50:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:50.628 22:50:22 -- common/autotest_common.sh@10 -- # set +x 00:04:50.628 ************************************ 00:04:50.628 START TEST json_config 00:04:50.628 ************************************ 00:04:50.628 22:50:22 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:50.628 22:50:22 -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:50.628 22:50:22 -- nvmf/common.sh@7 -- # uname -s 00:04:50.628 22:50:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:50.628 22:50:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:50.628 22:50:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:50.628 22:50:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:50.628 22:50:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:50.628 22:50:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:50.628 22:50:22 -- 
nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:50.628 22:50:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:50.628 22:50:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:50.628 22:50:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:50.628 22:50:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:04:50.628 22:50:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:04:50.628 22:50:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:50.628 22:50:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:50.628 22:50:22 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:50.628 22:50:22 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:50.628 22:50:22 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:50.628 22:50:22 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:50.628 22:50:22 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:50.628 22:50:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.628 22:50:22 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.628 22:50:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.628 22:50:22 -- paths/export.sh@5 -- # export PATH 00:04:50.628 22:50:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.628 22:50:22 -- nvmf/common.sh@46 -- # : 0 00:04:50.628 22:50:22 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:50.628 22:50:22 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:50.628 22:50:22 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:50.628 22:50:22 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:50.628 22:50:22 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:50.628 22:50:22 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:50.628 22:50:22 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:50.628 22:50:22 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:50.628 
22:50:22 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:04:50.628 22:50:22 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:04:50.628 22:50:22 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:04:50.628 22:50:22 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:50.628 22:50:22 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:04:50.628 22:50:22 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:04:50.628 22:50:22 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:50.628 22:50:22 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:04:50.628 22:50:22 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:50.628 22:50:22 -- json_config/json_config.sh@32 -- # declare -A app_params 00:04:50.628 22:50:22 -- json_config/json_config.sh@33 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:50.628 22:50:22 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:04:50.628 22:50:22 -- json_config/json_config.sh@43 -- # last_event_id=0 00:04:50.628 22:50:22 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:50.628 22:50:22 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:04:50.628 INFO: JSON configuration test init 00:04:50.628 22:50:22 -- json_config/json_config.sh@420 -- # json_config_test_init 00:04:50.628 22:50:22 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:04:50.628 22:50:22 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:50.628 22:50:22 -- 
common/autotest_common.sh@10 -- # set +x 00:04:50.628 22:50:22 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:04:50.628 22:50:22 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:50.628 22:50:22 -- common/autotest_common.sh@10 -- # set +x 00:04:50.628 22:50:22 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:04:50.628 22:50:22 -- json_config/json_config.sh@98 -- # local app=target 00:04:50.628 22:50:22 -- json_config/json_config.sh@99 -- # shift 00:04:50.628 22:50:22 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:04:50.628 22:50:22 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:04:50.628 22:50:22 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:04:50.628 22:50:22 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:50.628 22:50:22 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:50.628 22:50:22 -- json_config/json_config.sh@111 -- # app_pid[$app]=3022195 00:04:50.628 22:50:22 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:04:50.628 Waiting for target to run... 00:04:50.628 22:50:22 -- json_config/json_config.sh@114 -- # waitforlisten 3022195 /var/tmp/spdk_tgt.sock 00:04:50.628 22:50:22 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:50.628 22:50:22 -- common/autotest_common.sh@819 -- # '[' -z 3022195 ']' 00:04:50.628 22:50:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:50.628 22:50:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:50.628 22:50:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:50.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:04:50.628 22:50:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:50.628 22:50:22 -- common/autotest_common.sh@10 -- # set +x 00:04:50.628 [2024-07-24 22:50:23.002135] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:04:50.628 [2024-07-24 22:50:23.002190] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3022195 ] 00:04:50.628 EAL: No free 2048 kB hugepages reported on node 1 00:04:51.197 [2024-07-24 22:50:23.449191] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.197 [2024-07-24 22:50:23.478020] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:51.197 [2024-07-24 22:50:23.478121] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.456 22:50:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:51.456 22:50:23 -- common/autotest_common.sh@852 -- # return 0 00:04:51.456 22:50:23 -- json_config/json_config.sh@115 -- # echo '' 00:04:51.456 00:04:51.456 22:50:23 -- json_config/json_config.sh@322 -- # create_accel_config 00:04:51.456 22:50:23 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:04:51.456 22:50:23 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:51.456 22:50:23 -- common/autotest_common.sh@10 -- # set +x 00:04:51.456 22:50:23 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:04:51.456 22:50:23 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:04:51.456 22:50:23 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:51.456 22:50:23 -- common/autotest_common.sh@10 -- # set +x 00:04:51.456 22:50:23 -- json_config/json_config.sh@326 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:51.456 22:50:23 -- 
json_config/json_config.sh@327 -- # tgt_rpc load_config 00:04:51.456 22:50:23 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:54.778 22:50:26 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:04:54.778 22:50:26 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:04:54.778 22:50:26 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:54.778 22:50:26 -- common/autotest_common.sh@10 -- # set +x 00:04:54.778 22:50:26 -- json_config/json_config.sh@48 -- # local ret=0 00:04:54.778 22:50:26 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:54.778 22:50:26 -- json_config/json_config.sh@49 -- # local enabled_types 00:04:54.778 22:50:26 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:54.778 22:50:26 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:54.778 22:50:26 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:54.778 22:50:27 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:54.778 22:50:27 -- json_config/json_config.sh@51 -- # local get_types 00:04:54.778 22:50:27 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:04:54.778 22:50:27 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:04:54.778 22:50:27 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:54.778 22:50:27 -- common/autotest_common.sh@10 -- # set +x 00:04:54.778 22:50:27 -- json_config/json_config.sh@58 -- # return 0 00:04:54.778 22:50:27 -- json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:04:54.778 22:50:27 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:04:54.778 22:50:27 -- json_config/json_config.sh@339 -- 
# [[ 0 -eq 1 ]] 00:04:54.778 22:50:27 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:04:54.778 22:50:27 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:04:54.778 22:50:27 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:04:54.778 22:50:27 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:54.778 22:50:27 -- common/autotest_common.sh@10 -- # set +x 00:04:54.778 22:50:27 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:54.778 22:50:27 -- json_config/json_config.sh@286 -- # [[ tcp == \r\d\m\a ]] 00:04:54.778 22:50:27 -- json_config/json_config.sh@290 -- # [[ -z 127.0.0.1 ]] 00:04:54.778 22:50:27 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:54.778 22:50:27 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:55.037 MallocForNvmf0 00:04:55.037 22:50:27 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:55.037 22:50:27 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:55.037 MallocForNvmf1 00:04:55.037 22:50:27 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:55.037 22:50:27 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:55.295 [2024-07-24 22:50:27.566328] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:55.296 22:50:27 -- json_config/json_config.sh@299 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:55.296 22:50:27 -- json_config/json_config.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:55.554 22:50:27 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:55.554 22:50:27 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:55.554 22:50:27 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:55.554 22:50:27 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:55.814 22:50:28 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:55.814 22:50:28 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:55.814 [2024-07-24 22:50:28.180317] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:55.814 22:50:28 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:04:55.814 22:50:28 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:55.814 22:50:28 -- common/autotest_common.sh@10 -- # set +x 00:04:55.814 22:50:28 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:04:55.814 22:50:28 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:55.814 22:50:28 -- common/autotest_common.sh@10 -- # set +x 00:04:56.073 22:50:28 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:04:56.073 22:50:28 -- json_config/json_config.sh@353 -- 
# tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:56.073 22:50:28 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:56.073 MallocBdevForConfigChangeCheck 00:04:56.073 22:50:28 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:04:56.073 22:50:28 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:56.073 22:50:28 -- common/autotest_common.sh@10 -- # set +x 00:04:56.073 22:50:28 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:04:56.073 22:50:28 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:56.641 22:50:28 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:04:56.641 INFO: shutting down applications... 00:04:56.641 22:50:28 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:04:56.641 22:50:28 -- json_config/json_config.sh@431 -- # json_config_clear target 00:04:56.641 22:50:28 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:04:56.641 22:50:28 -- json_config/json_config.sh@386 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:58.546 Calling clear_iscsi_subsystem 00:04:58.546 Calling clear_nvmf_subsystem 00:04:58.546 Calling clear_nbd_subsystem 00:04:58.546 Calling clear_ublk_subsystem 00:04:58.546 Calling clear_vhost_blk_subsystem 00:04:58.546 Calling clear_vhost_scsi_subsystem 00:04:58.546 Calling clear_scheduler_subsystem 00:04:58.546 Calling clear_bdev_subsystem 00:04:58.546 Calling clear_accel_subsystem 00:04:58.546 Calling clear_vmd_subsystem 00:04:58.546 Calling clear_sock_subsystem 00:04:58.546 Calling clear_iobuf_subsystem 00:04:58.546 22:50:30 -- json_config/json_config.sh@390 -- # local 
config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:58.546 22:50:30 -- json_config/json_config.sh@396 -- # count=100 00:04:58.546 22:50:30 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:04:58.546 22:50:30 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:58.546 22:50:30 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:58.546 22:50:30 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:58.806 22:50:31 -- json_config/json_config.sh@398 -- # break 00:04:58.806 22:50:31 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:04:58.806 22:50:31 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:04:58.806 22:50:31 -- json_config/json_config.sh@120 -- # local app=target 00:04:58.806 22:50:31 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:04:58.806 22:50:31 -- json_config/json_config.sh@124 -- # [[ -n 3022195 ]] 00:04:58.806 22:50:31 -- json_config/json_config.sh@127 -- # kill -SIGINT 3022195 00:04:58.806 22:50:31 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:04:58.806 22:50:31 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:04:58.806 22:50:31 -- json_config/json_config.sh@130 -- # kill -0 3022195 00:04:58.806 22:50:31 -- json_config/json_config.sh@134 -- # sleep 0.5 00:04:59.374 22:50:31 -- json_config/json_config.sh@129 -- # (( i++ )) 00:04:59.374 22:50:31 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:04:59.374 22:50:31 -- json_config/json_config.sh@130 -- # kill -0 3022195 00:04:59.374 22:50:31 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:04:59.374 22:50:31 -- json_config/json_config.sh@132 -- # break 00:04:59.374 22:50:31 -- 
json_config/json_config.sh@137 -- # [[ -n '' ]] 00:04:59.374 22:50:31 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:04:59.374 SPDK target shutdown done 00:04:59.374 22:50:31 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:04:59.374 INFO: relaunching applications... 00:04:59.374 22:50:31 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:59.374 22:50:31 -- json_config/json_config.sh@98 -- # local app=target 00:04:59.374 22:50:31 -- json_config/json_config.sh@99 -- # shift 00:04:59.374 22:50:31 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:04:59.374 22:50:31 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:04:59.374 22:50:31 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:04:59.374 22:50:31 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:59.374 22:50:31 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:59.374 22:50:31 -- json_config/json_config.sh@111 -- # app_pid[$app]=3023686 00:04:59.374 22:50:31 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:04:59.374 Waiting for target to run... 
00:04:59.374 22:50:31 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:59.374 22:50:31 -- json_config/json_config.sh@114 -- # waitforlisten 3023686 /var/tmp/spdk_tgt.sock 00:04:59.374 22:50:31 -- common/autotest_common.sh@819 -- # '[' -z 3023686 ']' 00:04:59.374 22:50:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:59.374 22:50:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:59.374 22:50:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:59.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:59.374 22:50:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:59.374 22:50:31 -- common/autotest_common.sh@10 -- # set +x 00:04:59.374 [2024-07-24 22:50:31.683581] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:04:59.374 [2024-07-24 22:50:31.683645] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3023686 ] 00:04:59.374 EAL: No free 2048 kB hugepages reported on node 1 00:04:59.634 [2024-07-24 22:50:31.970055] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.634 [2024-07-24 22:50:31.988642] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:59.634 [2024-07-24 22:50:31.988748] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.925 [2024-07-24 22:50:34.992642] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:02.925 [2024-07-24 22:50:35.025008] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:02.925 22:50:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:02.925 22:50:35 -- common/autotest_common.sh@852 -- # return 0 00:05:02.925 22:50:35 -- json_config/json_config.sh@115 -- # echo '' 00:05:02.925 00:05:02.925 22:50:35 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:05:02.925 22:50:35 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:02.925 INFO: Checking if target configuration is the same... 
00:05:02.925 22:50:35 -- json_config/json_config.sh@441 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:02.925 22:50:35 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:05:02.925 22:50:35 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:02.925 + '[' 2 -ne 2 ']' 00:05:02.925 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:02.925 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:02.925 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:02.925 +++ basename /dev/fd/62 00:05:02.925 ++ mktemp /tmp/62.XXX 00:05:02.925 + tmp_file_1=/tmp/62.bbg 00:05:02.925 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:02.925 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:02.925 + tmp_file_2=/tmp/spdk_tgt_config.json.bUw 00:05:02.925 + ret=0 00:05:02.925 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:03.184 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:03.184 + diff -u /tmp/62.bbg /tmp/spdk_tgt_config.json.bUw 00:05:03.184 + echo 'INFO: JSON config files are the same' 00:05:03.184 INFO: JSON config files are the same 00:05:03.184 + rm /tmp/62.bbg /tmp/spdk_tgt_config.json.bUw 00:05:03.184 + exit 0 00:05:03.184 22:50:35 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:05:03.184 22:50:35 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:03.184 INFO: changing configuration and checking if this can be detected... 
00:05:03.184 22:50:35 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:03.184 22:50:35 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:03.184 22:50:35 -- json_config/json_config.sh@450 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:03.184 22:50:35 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:05:03.184 22:50:35 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:03.184 + '[' 2 -ne 2 ']' 00:05:03.184 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:03.444 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:05:03.444 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:03.444 +++ basename /dev/fd/62 00:05:03.444 ++ mktemp /tmp/62.XXX 00:05:03.444 + tmp_file_1=/tmp/62.1C0 00:05:03.444 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:03.444 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:03.444 + tmp_file_2=/tmp/spdk_tgt_config.json.FOL 00:05:03.444 + ret=0 00:05:03.444 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:03.703 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:03.703 + diff -u /tmp/62.1C0 /tmp/spdk_tgt_config.json.FOL 00:05:03.703 + ret=1 00:05:03.703 + echo '=== Start of file: /tmp/62.1C0 ===' 00:05:03.703 + cat /tmp/62.1C0 00:05:03.703 + echo '=== End of file: /tmp/62.1C0 ===' 00:05:03.703 + echo '' 00:05:03.703 + echo '=== Start of file: /tmp/spdk_tgt_config.json.FOL ===' 00:05:03.703 + cat /tmp/spdk_tgt_config.json.FOL 00:05:03.703 + echo '=== End of file: /tmp/spdk_tgt_config.json.FOL ===' 00:05:03.703 + echo '' 00:05:03.703 + rm /tmp/62.1C0 /tmp/spdk_tgt_config.json.FOL 00:05:03.703 + exit 1 00:05:03.703 22:50:35 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 00:05:03.703 INFO: configuration change detected. 
00:05:03.703 22:50:35 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:05:03.703 22:50:35 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:05:03.703 22:50:35 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:03.703 22:50:35 -- common/autotest_common.sh@10 -- # set +x 00:05:03.703 22:50:35 -- json_config/json_config.sh@360 -- # local ret=0 00:05:03.703 22:50:35 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:05:03.703 22:50:35 -- json_config/json_config.sh@370 -- # [[ -n 3023686 ]] 00:05:03.703 22:50:35 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:05:03.703 22:50:35 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:05:03.703 22:50:35 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:03.703 22:50:35 -- common/autotest_common.sh@10 -- # set +x 00:05:03.703 22:50:35 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:05:03.703 22:50:35 -- json_config/json_config.sh@246 -- # uname -s 00:05:03.703 22:50:35 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:05:03.703 22:50:35 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:05:03.703 22:50:35 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:05:03.703 22:50:35 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:05:03.703 22:50:35 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:03.703 22:50:35 -- common/autotest_common.sh@10 -- # set +x 00:05:03.703 22:50:35 -- json_config/json_config.sh@376 -- # killprocess 3023686 00:05:03.703 22:50:35 -- common/autotest_common.sh@926 -- # '[' -z 3023686 ']' 00:05:03.703 22:50:35 -- common/autotest_common.sh@930 -- # kill -0 3023686 00:05:03.703 22:50:35 -- common/autotest_common.sh@931 -- # uname 00:05:03.703 22:50:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:03.703 22:50:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3023686 00:05:03.703 
22:50:36 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:03.703 22:50:36 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:03.703 22:50:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3023686' 00:05:03.703 killing process with pid 3023686 00:05:03.703 22:50:36 -- common/autotest_common.sh@945 -- # kill 3023686 00:05:03.703 22:50:36 -- common/autotest_common.sh@950 -- # wait 3023686 00:05:05.609 22:50:38 -- json_config/json_config.sh@379 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:05.609 22:50:38 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:05:05.609 22:50:38 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:05.609 22:50:38 -- common/autotest_common.sh@10 -- # set +x 00:05:05.870 22:50:38 -- json_config/json_config.sh@381 -- # return 0 00:05:05.870 22:50:38 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:05:05.870 INFO: Success 00:05:05.870 00:05:05.870 real 0m15.240s 00:05:05.870 user 0m15.753s 00:05:05.870 sys 0m2.207s 00:05:05.870 22:50:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:05.870 22:50:38 -- common/autotest_common.sh@10 -- # set +x 00:05:05.870 ************************************ 00:05:05.870 END TEST json_config 00:05:05.870 ************************************ 00:05:05.870 22:50:38 -- spdk/autotest.sh@179 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:05.870 22:50:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:05.870 22:50:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:05.870 22:50:38 -- common/autotest_common.sh@10 -- # set +x 00:05:05.870 ************************************ 00:05:05.870 START TEST json_config_extra_key 00:05:05.870 ************************************ 00:05:05.870 
22:50:38 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:05.870 22:50:38 -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:05.870 22:50:38 -- nvmf/common.sh@7 -- # uname -s 00:05:05.870 22:50:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:05.870 22:50:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:05.870 22:50:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:05.870 22:50:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:05.870 22:50:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:05.870 22:50:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:05.870 22:50:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:05.870 22:50:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:05.870 22:50:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:05.870 22:50:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:05.870 22:50:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:05:05.870 22:50:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:05:05.870 22:50:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:05.870 22:50:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:05.870 22:50:38 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:05.870 22:50:38 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:05.870 22:50:38 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:05.870 22:50:38 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:05.870 22:50:38 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:05.870 22:50:38 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:05.870 22:50:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:05.870 22:50:38 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:05.870 22:50:38 -- paths/export.sh@5 -- # export PATH 00:05:05.870 22:50:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:05.870 22:50:38 -- nvmf/common.sh@46 -- # : 0 00:05:05.870 22:50:38 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:05.870 22:50:38 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:05.870 
22:50:38 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:05.870 22:50:38 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:05.870 22:50:38 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:05.870 22:50:38 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:05.870 22:50:38 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:05.870 22:50:38 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:05.870 22:50:38 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:05:05.870 22:50:38 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:05:05.870 22:50:38 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:05.870 22:50:38 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:05:05.870 22:50:38 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:05.870 22:50:38 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:05:05.870 22:50:38 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:05.870 22:50:38 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:05:05.870 22:50:38 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:05.870 22:50:38 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:05:05.870 INFO: launching applications... 
00:05:05.870 22:50:38 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:05.870 22:50:38 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:05:05.870 22:50:38 -- json_config/json_config_extra_key.sh@25 -- # shift 00:05:05.870 22:50:38 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:05:05.870 22:50:38 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:05:05.870 22:50:38 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=3025118 00:05:05.870 22:50:38 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:05:05.870 Waiting for target to run... 00:05:05.870 22:50:38 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 3025118 /var/tmp/spdk_tgt.sock 00:05:05.870 22:50:38 -- json_config/json_config_extra_key.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:05.870 22:50:38 -- common/autotest_common.sh@819 -- # '[' -z 3025118 ']' 00:05:05.870 22:50:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:05.870 22:50:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:05.870 22:50:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:05.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:05.870 22:50:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:05.870 22:50:38 -- common/autotest_common.sh@10 -- # set +x 00:05:05.870 [2024-07-24 22:50:38.275238] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:05:05.870 [2024-07-24 22:50:38.275298] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3025118 ] 00:05:06.129 EAL: No free 2048 kB hugepages reported on node 1 00:05:06.388 [2024-07-24 22:50:38.714372] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.388 [2024-07-24 22:50:38.741427] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:06.388 [2024-07-24 22:50:38.741533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.646 22:50:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:06.646 22:50:39 -- common/autotest_common.sh@852 -- # return 0 00:05:06.647 22:50:39 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:05:06.647 00:05:06.647 22:50:39 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:05:06.647 INFO: shutting down applications... 
00:05:06.647 22:50:39 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:05:06.647 22:50:39 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:05:06.647 22:50:39 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:05:06.647 22:50:39 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 3025118 ]] 00:05:06.647 22:50:39 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 3025118 00:05:06.647 22:50:39 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:05:06.647 22:50:39 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:06.647 22:50:39 -- json_config/json_config_extra_key.sh@50 -- # kill -0 3025118 00:05:06.647 22:50:39 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:05:07.215 22:50:39 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:05:07.215 22:50:39 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:07.215 22:50:39 -- json_config/json_config_extra_key.sh@50 -- # kill -0 3025118 00:05:07.215 22:50:39 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:05:07.215 22:50:39 -- json_config/json_config_extra_key.sh@52 -- # break 00:05:07.215 22:50:39 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:05:07.215 22:50:39 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:05:07.215 SPDK target shutdown done 00:05:07.215 22:50:39 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:05:07.215 Success 00:05:07.215 00:05:07.215 real 0m1.431s 00:05:07.215 user 0m0.983s 00:05:07.215 sys 0m0.556s 00:05:07.215 22:50:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:07.215 22:50:39 -- common/autotest_common.sh@10 -- # set +x 00:05:07.215 ************************************ 00:05:07.215 END TEST json_config_extra_key 00:05:07.215 ************************************ 00:05:07.215 22:50:39 -- spdk/autotest.sh@180 -- # run_test alias_rpc 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:07.215 22:50:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:07.215 22:50:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:07.215 22:50:39 -- common/autotest_common.sh@10 -- # set +x 00:05:07.215 ************************************ 00:05:07.215 START TEST alias_rpc 00:05:07.215 ************************************ 00:05:07.215 22:50:39 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:07.475 * Looking for test storage... 00:05:07.475 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:07.475 22:50:39 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:07.475 22:50:39 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3025430 00:05:07.475 22:50:39 -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:07.475 22:50:39 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3025430 00:05:07.475 22:50:39 -- common/autotest_common.sh@819 -- # '[' -z 3025430 ']' 00:05:07.475 22:50:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:07.475 22:50:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:07.475 22:50:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:07.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:07.475 22:50:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:07.475 22:50:39 -- common/autotest_common.sh@10 -- # set +x 00:05:07.475 [2024-07-24 22:50:39.763359] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:05:07.475 [2024-07-24 22:50:39.763420] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3025430 ] 00:05:07.475 EAL: No free 2048 kB hugepages reported on node 1 00:05:07.475 [2024-07-24 22:50:39.835254] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.475 [2024-07-24 22:50:39.871396] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:07.475 [2024-07-24 22:50:39.871531] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.412 22:50:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:08.412 22:50:40 -- common/autotest_common.sh@852 -- # return 0 00:05:08.412 22:50:40 -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:08.412 22:50:40 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3025430 00:05:08.412 22:50:40 -- common/autotest_common.sh@926 -- # '[' -z 3025430 ']' 00:05:08.412 22:50:40 -- common/autotest_common.sh@930 -- # kill -0 3025430 00:05:08.412 22:50:40 -- common/autotest_common.sh@931 -- # uname 00:05:08.412 22:50:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:08.413 22:50:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3025430 00:05:08.413 22:50:40 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:08.413 22:50:40 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:08.413 22:50:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3025430' 00:05:08.413 killing process with pid 3025430 00:05:08.413 22:50:40 -- common/autotest_common.sh@945 -- # kill 3025430 00:05:08.413 22:50:40 -- common/autotest_common.sh@950 -- # wait 3025430 00:05:08.672 00:05:08.672 real 0m1.474s 00:05:08.672 user 0m1.528s 00:05:08.672 sys 0m0.464s 
00:05:08.672 22:50:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:08.672 22:50:41 -- common/autotest_common.sh@10 -- # set +x 00:05:08.672 ************************************ 00:05:08.672 END TEST alias_rpc 00:05:08.672 ************************************ 00:05:08.930 22:50:41 -- spdk/autotest.sh@182 -- # [[ 0 -eq 0 ]] 00:05:08.930 22:50:41 -- spdk/autotest.sh@183 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:08.930 22:50:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:08.930 22:50:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:08.930 22:50:41 -- common/autotest_common.sh@10 -- # set +x 00:05:08.930 ************************************ 00:05:08.930 START TEST spdkcli_tcp 00:05:08.930 ************************************ 00:05:08.930 22:50:41 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:08.930 * Looking for test storage... 
00:05:08.930 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:08.930 22:50:41 -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:08.930 22:50:41 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:08.930 22:50:41 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:08.930 22:50:41 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:08.930 22:50:41 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:08.930 22:50:41 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:08.930 22:50:41 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:08.930 22:50:41 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:08.930 22:50:41 -- common/autotest_common.sh@10 -- # set +x 00:05:08.930 22:50:41 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3025756 00:05:08.930 22:50:41 -- spdkcli/tcp.sh@27 -- # waitforlisten 3025756 00:05:08.930 22:50:41 -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:08.930 22:50:41 -- common/autotest_common.sh@819 -- # '[' -z 3025756 ']' 00:05:08.930 22:50:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:08.930 22:50:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:08.930 22:50:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:08.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:08.930 22:50:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:08.930 22:50:41 -- common/autotest_common.sh@10 -- # set +x 00:05:08.930 [2024-07-24 22:50:41.287054] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:08.930 [2024-07-24 22:50:41.287110] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3025756 ] 00:05:08.930 EAL: No free 2048 kB hugepages reported on node 1 00:05:08.930 [2024-07-24 22:50:41.355208] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:09.188 [2024-07-24 22:50:41.391606] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:09.188 [2024-07-24 22:50:41.391789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:09.188 [2024-07-24 22:50:41.391793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.755 22:50:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:09.755 22:50:42 -- common/autotest_common.sh@852 -- # return 0 00:05:09.755 22:50:42 -- spdkcli/tcp.sh@31 -- # socat_pid=3025768 00:05:09.755 22:50:42 -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:09.755 22:50:42 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:10.014 [ 00:05:10.014 "bdev_malloc_delete", 00:05:10.014 "bdev_malloc_create", 00:05:10.014 "bdev_null_resize", 00:05:10.014 "bdev_null_delete", 00:05:10.014 "bdev_null_create", 00:05:10.014 "bdev_nvme_cuse_unregister", 00:05:10.014 "bdev_nvme_cuse_register", 00:05:10.014 "bdev_opal_new_user", 00:05:10.014 "bdev_opal_set_lock_state", 00:05:10.014 "bdev_opal_delete", 00:05:10.014 "bdev_opal_get_info", 00:05:10.014 "bdev_opal_create", 00:05:10.014 
"bdev_nvme_opal_revert", 00:05:10.014 "bdev_nvme_opal_init", 00:05:10.014 "bdev_nvme_send_cmd", 00:05:10.014 "bdev_nvme_get_path_iostat", 00:05:10.014 "bdev_nvme_get_mdns_discovery_info", 00:05:10.014 "bdev_nvme_stop_mdns_discovery", 00:05:10.014 "bdev_nvme_start_mdns_discovery", 00:05:10.014 "bdev_nvme_set_multipath_policy", 00:05:10.014 "bdev_nvme_set_preferred_path", 00:05:10.014 "bdev_nvme_get_io_paths", 00:05:10.014 "bdev_nvme_remove_error_injection", 00:05:10.014 "bdev_nvme_add_error_injection", 00:05:10.014 "bdev_nvme_get_discovery_info", 00:05:10.014 "bdev_nvme_stop_discovery", 00:05:10.014 "bdev_nvme_start_discovery", 00:05:10.014 "bdev_nvme_get_controller_health_info", 00:05:10.014 "bdev_nvme_disable_controller", 00:05:10.014 "bdev_nvme_enable_controller", 00:05:10.014 "bdev_nvme_reset_controller", 00:05:10.014 "bdev_nvme_get_transport_statistics", 00:05:10.014 "bdev_nvme_apply_firmware", 00:05:10.014 "bdev_nvme_detach_controller", 00:05:10.014 "bdev_nvme_get_controllers", 00:05:10.014 "bdev_nvme_attach_controller", 00:05:10.014 "bdev_nvme_set_hotplug", 00:05:10.014 "bdev_nvme_set_options", 00:05:10.014 "bdev_passthru_delete", 00:05:10.014 "bdev_passthru_create", 00:05:10.014 "bdev_lvol_grow_lvstore", 00:05:10.014 "bdev_lvol_get_lvols", 00:05:10.014 "bdev_lvol_get_lvstores", 00:05:10.014 "bdev_lvol_delete", 00:05:10.014 "bdev_lvol_set_read_only", 00:05:10.014 "bdev_lvol_resize", 00:05:10.014 "bdev_lvol_decouple_parent", 00:05:10.014 "bdev_lvol_inflate", 00:05:10.014 "bdev_lvol_rename", 00:05:10.014 "bdev_lvol_clone_bdev", 00:05:10.014 "bdev_lvol_clone", 00:05:10.014 "bdev_lvol_snapshot", 00:05:10.014 "bdev_lvol_create", 00:05:10.014 "bdev_lvol_delete_lvstore", 00:05:10.014 "bdev_lvol_rename_lvstore", 00:05:10.014 "bdev_lvol_create_lvstore", 00:05:10.014 "bdev_raid_set_options", 00:05:10.014 "bdev_raid_remove_base_bdev", 00:05:10.014 "bdev_raid_add_base_bdev", 00:05:10.014 "bdev_raid_delete", 00:05:10.014 "bdev_raid_create", 00:05:10.014 
"bdev_raid_get_bdevs", 00:05:10.014 "bdev_error_inject_error", 00:05:10.014 "bdev_error_delete", 00:05:10.014 "bdev_error_create", 00:05:10.014 "bdev_split_delete", 00:05:10.014 "bdev_split_create", 00:05:10.014 "bdev_delay_delete", 00:05:10.014 "bdev_delay_create", 00:05:10.014 "bdev_delay_update_latency", 00:05:10.014 "bdev_zone_block_delete", 00:05:10.014 "bdev_zone_block_create", 00:05:10.014 "blobfs_create", 00:05:10.014 "blobfs_detect", 00:05:10.014 "blobfs_set_cache_size", 00:05:10.014 "bdev_aio_delete", 00:05:10.014 "bdev_aio_rescan", 00:05:10.014 "bdev_aio_create", 00:05:10.014 "bdev_ftl_set_property", 00:05:10.014 "bdev_ftl_get_properties", 00:05:10.014 "bdev_ftl_get_stats", 00:05:10.014 "bdev_ftl_unmap", 00:05:10.014 "bdev_ftl_unload", 00:05:10.014 "bdev_ftl_delete", 00:05:10.014 "bdev_ftl_load", 00:05:10.014 "bdev_ftl_create", 00:05:10.014 "bdev_virtio_attach_controller", 00:05:10.014 "bdev_virtio_scsi_get_devices", 00:05:10.014 "bdev_virtio_detach_controller", 00:05:10.014 "bdev_virtio_blk_set_hotplug", 00:05:10.014 "bdev_iscsi_delete", 00:05:10.014 "bdev_iscsi_create", 00:05:10.014 "bdev_iscsi_set_options", 00:05:10.014 "accel_error_inject_error", 00:05:10.014 "ioat_scan_accel_module", 00:05:10.014 "dsa_scan_accel_module", 00:05:10.014 "iaa_scan_accel_module", 00:05:10.014 "vfu_virtio_create_scsi_endpoint", 00:05:10.014 "vfu_virtio_scsi_remove_target", 00:05:10.014 "vfu_virtio_scsi_add_target", 00:05:10.014 "vfu_virtio_create_blk_endpoint", 00:05:10.014 "vfu_virtio_delete_endpoint", 00:05:10.014 "iscsi_set_options", 00:05:10.014 "iscsi_get_auth_groups", 00:05:10.014 "iscsi_auth_group_remove_secret", 00:05:10.014 "iscsi_auth_group_add_secret", 00:05:10.014 "iscsi_delete_auth_group", 00:05:10.014 "iscsi_create_auth_group", 00:05:10.014 "iscsi_set_discovery_auth", 00:05:10.014 "iscsi_get_options", 00:05:10.014 "iscsi_target_node_request_logout", 00:05:10.014 "iscsi_target_node_set_redirect", 00:05:10.014 "iscsi_target_node_set_auth", 00:05:10.014 
"iscsi_target_node_add_lun", 00:05:10.014 "iscsi_get_connections", 00:05:10.014 "iscsi_portal_group_set_auth", 00:05:10.014 "iscsi_start_portal_group", 00:05:10.014 "iscsi_delete_portal_group", 00:05:10.014 "iscsi_create_portal_group", 00:05:10.014 "iscsi_get_portal_groups", 00:05:10.014 "iscsi_delete_target_node", 00:05:10.014 "iscsi_target_node_remove_pg_ig_maps", 00:05:10.014 "iscsi_target_node_add_pg_ig_maps", 00:05:10.014 "iscsi_create_target_node", 00:05:10.014 "iscsi_get_target_nodes", 00:05:10.014 "iscsi_delete_initiator_group", 00:05:10.014 "iscsi_initiator_group_remove_initiators", 00:05:10.014 "iscsi_initiator_group_add_initiators", 00:05:10.014 "iscsi_create_initiator_group", 00:05:10.014 "iscsi_get_initiator_groups", 00:05:10.014 "nvmf_set_crdt", 00:05:10.014 "nvmf_set_config", 00:05:10.014 "nvmf_set_max_subsystems", 00:05:10.014 "nvmf_subsystem_get_listeners", 00:05:10.014 "nvmf_subsystem_get_qpairs", 00:05:10.015 "nvmf_subsystem_get_controllers", 00:05:10.015 "nvmf_get_stats", 00:05:10.015 "nvmf_get_transports", 00:05:10.015 "nvmf_create_transport", 00:05:10.015 "nvmf_get_targets", 00:05:10.015 "nvmf_delete_target", 00:05:10.015 "nvmf_create_target", 00:05:10.015 "nvmf_subsystem_allow_any_host", 00:05:10.015 "nvmf_subsystem_remove_host", 00:05:10.015 "nvmf_subsystem_add_host", 00:05:10.015 "nvmf_subsystem_remove_ns", 00:05:10.015 "nvmf_subsystem_add_ns", 00:05:10.015 "nvmf_subsystem_listener_set_ana_state", 00:05:10.015 "nvmf_discovery_get_referrals", 00:05:10.015 "nvmf_discovery_remove_referral", 00:05:10.015 "nvmf_discovery_add_referral", 00:05:10.015 "nvmf_subsystem_remove_listener", 00:05:10.015 "nvmf_subsystem_add_listener", 00:05:10.015 "nvmf_delete_subsystem", 00:05:10.015 "nvmf_create_subsystem", 00:05:10.015 "nvmf_get_subsystems", 00:05:10.015 "env_dpdk_get_mem_stats", 00:05:10.015 "nbd_get_disks", 00:05:10.015 "nbd_stop_disk", 00:05:10.015 "nbd_start_disk", 00:05:10.015 "ublk_recover_disk", 00:05:10.015 "ublk_get_disks", 00:05:10.015 
"ublk_stop_disk", 00:05:10.015 "ublk_start_disk", 00:05:10.015 "ublk_destroy_target", 00:05:10.015 "ublk_create_target", 00:05:10.015 "virtio_blk_create_transport", 00:05:10.015 "virtio_blk_get_transports", 00:05:10.015 "vhost_controller_set_coalescing", 00:05:10.015 "vhost_get_controllers", 00:05:10.015 "vhost_delete_controller", 00:05:10.015 "vhost_create_blk_controller", 00:05:10.015 "vhost_scsi_controller_remove_target", 00:05:10.015 "vhost_scsi_controller_add_target", 00:05:10.015 "vhost_start_scsi_controller", 00:05:10.015 "vhost_create_scsi_controller", 00:05:10.015 "thread_set_cpumask", 00:05:10.015 "framework_get_scheduler", 00:05:10.015 "framework_set_scheduler", 00:05:10.015 "framework_get_reactors", 00:05:10.015 "thread_get_io_channels", 00:05:10.015 "thread_get_pollers", 00:05:10.015 "thread_get_stats", 00:05:10.015 "framework_monitor_context_switch", 00:05:10.015 "spdk_kill_instance", 00:05:10.015 "log_enable_timestamps", 00:05:10.015 "log_get_flags", 00:05:10.015 "log_clear_flag", 00:05:10.015 "log_set_flag", 00:05:10.015 "log_get_level", 00:05:10.015 "log_set_level", 00:05:10.015 "log_get_print_level", 00:05:10.015 "log_set_print_level", 00:05:10.015 "framework_enable_cpumask_locks", 00:05:10.015 "framework_disable_cpumask_locks", 00:05:10.015 "framework_wait_init", 00:05:10.015 "framework_start_init", 00:05:10.015 "scsi_get_devices", 00:05:10.015 "bdev_get_histogram", 00:05:10.015 "bdev_enable_histogram", 00:05:10.015 "bdev_set_qos_limit", 00:05:10.015 "bdev_set_qd_sampling_period", 00:05:10.015 "bdev_get_bdevs", 00:05:10.015 "bdev_reset_iostat", 00:05:10.015 "bdev_get_iostat", 00:05:10.015 "bdev_examine", 00:05:10.015 "bdev_wait_for_examine", 00:05:10.015 "bdev_set_options", 00:05:10.015 "notify_get_notifications", 00:05:10.015 "notify_get_types", 00:05:10.015 "accel_get_stats", 00:05:10.015 "accel_set_options", 00:05:10.015 "accel_set_driver", 00:05:10.015 "accel_crypto_key_destroy", 00:05:10.015 "accel_crypto_keys_get", 00:05:10.015 
"accel_crypto_key_create", 00:05:10.015 "accel_assign_opc", 00:05:10.015 "accel_get_module_info", 00:05:10.015 "accel_get_opc_assignments", 00:05:10.015 "vmd_rescan", 00:05:10.015 "vmd_remove_device", 00:05:10.015 "vmd_enable", 00:05:10.015 "sock_set_default_impl", 00:05:10.015 "sock_impl_set_options", 00:05:10.015 "sock_impl_get_options", 00:05:10.015 "iobuf_get_stats", 00:05:10.015 "iobuf_set_options", 00:05:10.015 "framework_get_pci_devices", 00:05:10.015 "framework_get_config", 00:05:10.015 "framework_get_subsystems", 00:05:10.015 "vfu_tgt_set_base_path", 00:05:10.015 "trace_get_info", 00:05:10.015 "trace_get_tpoint_group_mask", 00:05:10.015 "trace_disable_tpoint_group", 00:05:10.015 "trace_enable_tpoint_group", 00:05:10.015 "trace_clear_tpoint_mask", 00:05:10.015 "trace_set_tpoint_mask", 00:05:10.015 "spdk_get_version", 00:05:10.015 "rpc_get_methods" 00:05:10.015 ] 00:05:10.015 22:50:42 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:10.015 22:50:42 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:10.015 22:50:42 -- common/autotest_common.sh@10 -- # set +x 00:05:10.015 22:50:42 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:10.015 22:50:42 -- spdkcli/tcp.sh@38 -- # killprocess 3025756 00:05:10.015 22:50:42 -- common/autotest_common.sh@926 -- # '[' -z 3025756 ']' 00:05:10.015 22:50:42 -- common/autotest_common.sh@930 -- # kill -0 3025756 00:05:10.015 22:50:42 -- common/autotest_common.sh@931 -- # uname 00:05:10.015 22:50:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:10.015 22:50:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3025756 00:05:10.015 22:50:42 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:10.015 22:50:42 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:10.015 22:50:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3025756' 00:05:10.015 killing process with pid 3025756 00:05:10.015 22:50:42 -- 
common/autotest_common.sh@945 -- # kill 3025756 00:05:10.015 22:50:42 -- common/autotest_common.sh@950 -- # wait 3025756 00:05:10.274 00:05:10.274 real 0m1.486s 00:05:10.274 user 0m2.715s 00:05:10.274 sys 0m0.496s 00:05:10.274 22:50:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:10.274 22:50:42 -- common/autotest_common.sh@10 -- # set +x 00:05:10.274 ************************************ 00:05:10.274 END TEST spdkcli_tcp 00:05:10.274 ************************************ 00:05:10.274 22:50:42 -- spdk/autotest.sh@186 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:10.274 22:50:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:10.274 22:50:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:10.274 22:50:42 -- common/autotest_common.sh@10 -- # set +x 00:05:10.274 ************************************ 00:05:10.274 START TEST dpdk_mem_utility 00:05:10.274 ************************************ 00:05:10.274 22:50:42 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:10.533 * Looking for test storage... 
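The long listing captured above is the JSON array returned by the `rpc_get_methods` RPC over the spdk_tgt UNIX socket. As a minimal sketch of working with such a reply offline (the sample array below is a made-up stand-in, not the full dump from this run), a client could group the method names by subsystem prefix:

```python
import json
from collections import Counter

# Stand-in sample; a real rpc_get_methods reply carries the full array logged above.
reply = json.loads(
    '{"result": ["bdev_nvme_attach_controller", "bdev_lvol_create", '
    '"nvmf_create_subsystem", "nvmf_get_subsystems", "rpc_get_methods"]}'
)

# Group method names by their leading subsystem token (text before the first '_').
prefixes = Counter(name.split("_")[0] for name in reply["result"])
for prefix, count in sorted(prefixes.items()):
    print(f"{prefix}: {count}")
```

Against a live target, the same array is what `scripts/rpc.py rpc_get_methods` prints.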
00:05:10.533 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:10.533 22:50:42 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:10.533 22:50:42 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3026085 00:05:10.533 22:50:42 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3026085 00:05:10.533 22:50:42 -- common/autotest_common.sh@819 -- # '[' -z 3026085 ']' 00:05:10.533 22:50:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:10.533 22:50:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:10.533 22:50:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:10.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:10.533 22:50:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:10.533 22:50:42 -- common/autotest_common.sh@10 -- # set +x 00:05:10.533 22:50:42 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:10.533 [2024-07-24 22:50:42.815946] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:05:10.533 [2024-07-24 22:50:42.816002] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3026085 ] 00:05:10.533 EAL: No free 2048 kB hugepages reported on node 1 00:05:10.533 [2024-07-24 22:50:42.887847] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.533 [2024-07-24 22:50:42.924833] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:10.533 [2024-07-24 22:50:42.924944] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.506 22:50:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:11.506 22:50:43 -- common/autotest_common.sh@852 -- # return 0 00:05:11.506 22:50:43 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:11.506 22:50:43 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:11.506 22:50:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:11.506 22:50:43 -- common/autotest_common.sh@10 -- # set +x 00:05:11.506 { 00:05:11.506 "filename": "/tmp/spdk_mem_dump.txt" 00:05:11.506 } 00:05:11.506 22:50:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:11.506 22:50:43 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:11.506 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:11.506 1 heaps totaling size 814.000000 MiB 00:05:11.506 size: 814.000000 MiB heap id: 0 00:05:11.506 end heaps---------- 00:05:11.506 8 mempools totaling size 598.116089 MiB 00:05:11.506 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:11.506 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:11.506 size: 84.521057 MiB name: bdev_io_3026085 00:05:11.506 size: 51.011292 MiB name: evtpool_3026085 00:05:11.506 size: 
50.003479 MiB name: msgpool_3026085 00:05:11.506 size: 21.763794 MiB name: PDU_Pool 00:05:11.506 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:11.506 size: 0.026123 MiB name: Session_Pool 00:05:11.506 end mempools------- 00:05:11.506 6 memzones totaling size 4.142822 MiB 00:05:11.506 size: 1.000366 MiB name: RG_ring_0_3026085 00:05:11.506 size: 1.000366 MiB name: RG_ring_1_3026085 00:05:11.506 size: 1.000366 MiB name: RG_ring_4_3026085 00:05:11.506 size: 1.000366 MiB name: RG_ring_5_3026085 00:05:11.506 size: 0.125366 MiB name: RG_ring_2_3026085 00:05:11.506 size: 0.015991 MiB name: RG_ring_3_3026085 00:05:11.506 end memzones------- 00:05:11.506 22:50:43 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:11.506 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:11.506 list of free elements. size: 12.519348 MiB 00:05:11.506 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:11.506 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:11.506 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:11.506 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:11.506 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:11.506 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:11.506 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:11.506 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:11.506 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:11.506 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:11.506 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:11.506 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:11.506 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:11.506 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:11.506 element at 
address: 0x200003a00000 with size: 0.355530 MiB 00:05:11.506 list of standard malloc elements. size: 199.218079 MiB 00:05:11.506 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:11.506 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:11.506 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:11.506 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:11.506 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:11.506 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:11.506 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:11.506 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:11.506 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:11.506 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:11.506 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:11.506 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:11.506 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:11.506 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:11.506 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:11.506 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:11.506 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:11.506 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:11.506 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:11.506 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:11.506 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:11.506 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:11.506 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:11.506 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:11.506 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:11.506 element at address: 0x200003eff0c0 with size: 0.000183 MiB 
00:05:11.506 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:11.506 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:11.506 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:11.506 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:11.506 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:11.506 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:11.506 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:11.506 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:11.506 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:11.506 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:11.506 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:11.506 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:11.506 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:11.506 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:11.506 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:11.506 list of memzone associated elements. 
size: 602.262573 MiB 00:05:11.506 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:11.506 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:11.506 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:11.506 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:11.507 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:11.507 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_3026085_0 00:05:11.507 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:11.507 associated memzone info: size: 48.002930 MiB name: MP_evtpool_3026085_0 00:05:11.507 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:11.507 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3026085_0 00:05:11.507 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:11.507 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:11.507 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:11.507 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:11.507 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:11.507 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_3026085 00:05:11.507 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:11.507 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3026085 00:05:11.507 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:11.507 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3026085 00:05:11.507 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:11.507 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:11.507 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:11.507 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:11.507 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:11.507 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:11.507 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:11.507 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:11.507 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:11.507 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3026085 00:05:11.507 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:11.507 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3026085 00:05:11.507 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:11.507 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3026085 00:05:11.507 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:11.507 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3026085 00:05:11.507 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:11.507 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3026085 00:05:11.507 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:11.507 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:11.507 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:11.507 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:11.507 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:11.507 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:11.507 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:11.507 associated memzone info: size: 0.125366 MiB name: RG_ring_2_3026085 00:05:11.507 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:11.507 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:11.507 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:11.507 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:11.507 element at address: 0x200003adb5c0 with size: 0.016113 
MiB 00:05:11.507 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3026085 00:05:11.507 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:11.507 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:11.507 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:11.507 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3026085 00:05:11.507 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:11.507 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3026085 00:05:11.507 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:11.507 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:11.507 22:50:43 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:11.507 22:50:43 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3026085 00:05:11.507 22:50:43 -- common/autotest_common.sh@926 -- # '[' -z 3026085 ']' 00:05:11.507 22:50:43 -- common/autotest_common.sh@930 -- # kill -0 3026085 00:05:11.507 22:50:43 -- common/autotest_common.sh@931 -- # uname 00:05:11.507 22:50:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:11.507 22:50:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3026085 00:05:11.507 22:50:43 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:11.507 22:50:43 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:11.507 22:50:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3026085' 00:05:11.507 killing process with pid 3026085 00:05:11.507 22:50:43 -- common/autotest_common.sh@945 -- # kill 3026085 00:05:11.507 22:50:43 -- common/autotest_common.sh@950 -- # wait 3026085 00:05:11.766 00:05:11.766 real 0m1.375s 00:05:11.766 user 0m1.396s 00:05:11.767 sys 0m0.434s 00:05:11.767 22:50:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:11.767 22:50:44 -- common/autotest_common.sh@10 -- # set +x 00:05:11.767 
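The `dpdk_mem_info.py -m 0` dump above reports each heap element as `element at address: 0x... with size: N MiB`. A small sketch of tallying lines in that format (the two sample lines below are illustrative, not copied from this run):

```python
import re

# Hypothetical sample in the same shape as the dpdk_mem_info.py element listing above.
dump = """element at address: 0x200000400000 with size: 1.999512 MiB
element at address: 0x200018e00000 with size: 0.999878 MiB"""

# Pull the reported size (MiB) out of each element line and sum them.
sizes = [float(m) for m in re.findall(r"with size:\s+([0-9.]+)\s+MiB", dump)]
total_mib = sum(sizes)
print(f"{len(sizes)} elements, {total_mib:.6f} MiB total")
```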
************************************ 00:05:11.767 END TEST dpdk_mem_utility 00:05:11.767 ************************************ 00:05:11.767 22:50:44 -- spdk/autotest.sh@187 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:11.767 22:50:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:11.767 22:50:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:11.767 22:50:44 -- common/autotest_common.sh@10 -- # set +x 00:05:11.767 ************************************ 00:05:11.767 START TEST event 00:05:11.767 ************************************ 00:05:11.767 22:50:44 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:11.767 * Looking for test storage... 00:05:11.767 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:11.767 22:50:44 -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:11.767 22:50:44 -- bdev/nbd_common.sh@6 -- # set -e 00:05:11.767 22:50:44 -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:11.767 22:50:44 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:05:11.767 22:50:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:11.767 22:50:44 -- common/autotest_common.sh@10 -- # set +x 00:05:11.767 ************************************ 00:05:11.767 START TEST event_perf 00:05:11.767 ************************************ 00:05:11.767 22:50:44 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:12.026 Running I/O for 1 seconds...[2024-07-24 22:50:44.219511] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:05:12.026 [2024-07-24 22:50:44.219603] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3026367 ] 00:05:12.026 EAL: No free 2048 kB hugepages reported on node 1 00:05:12.026 [2024-07-24 22:50:44.292251] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:12.026 [2024-07-24 22:50:44.331828] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:12.026 [2024-07-24 22:50:44.331844] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:12.026 [2024-07-24 22:50:44.331945] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:12.026 [2024-07-24 22:50:44.331947] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.962 Running I/O for 1 seconds... 00:05:12.962 lcore 0: 205462 00:05:12.962 lcore 1: 205463 00:05:12.962 lcore 2: 205463 00:05:12.962 lcore 3: 205463 00:05:12.962 done. 
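The `event_perf` run above prints one cumulative event count per lcore (`lcore 0: 205462`, ...). Since the test ran for one second (`-t 1`), summing the per-lcore counts gives the aggregate event rate; a sketch using the counts from this run:

```python
lines = ["lcore 0: 205462", "lcore 1: 205463", "lcore 2: 205463", "lcore 3: 205463"]

# Each line is "lcore <id>: <events>"; with a 1-second run the sum is also events/sec.
counts = {int(l.split()[1].rstrip(":")): int(l.split()[2]) for l in lines}
total = sum(counts.values())
print(f"{total} events across {len(counts)} lcores")
```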
00:05:12.962 00:05:12.962 real 0m1.196s 00:05:12.962 user 0m4.100s 00:05:12.962 sys 0m0.093s 00:05:12.962 22:50:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:12.963 22:50:45 -- common/autotest_common.sh@10 -- # set +x 00:05:12.963 ************************************ 00:05:12.963 END TEST event_perf 00:05:12.963 ************************************ 00:05:13.221 22:50:45 -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:13.221 22:50:45 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:05:13.221 22:50:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:13.222 22:50:45 -- common/autotest_common.sh@10 -- # set +x 00:05:13.222 ************************************ 00:05:13.222 START TEST event_reactor 00:05:13.222 ************************************ 00:05:13.222 22:50:45 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:13.222 [2024-07-24 22:50:45.462455] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:05:13.222 [2024-07-24 22:50:45.462538] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3026503 ] 00:05:13.222 EAL: No free 2048 kB hugepages reported on node 1 00:05:13.222 [2024-07-24 22:50:45.535823] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.222 [2024-07-24 22:50:45.571228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.599 test_start 00:05:14.599 oneshot 00:05:14.599 tick 100 00:05:14.599 tick 100 00:05:14.599 tick 250 00:05:14.599 tick 100 00:05:14.599 tick 100 00:05:14.599 tick 100 00:05:14.599 tick 250 00:05:14.599 tick 500 00:05:14.599 tick 100 00:05:14.599 tick 100 00:05:14.599 tick 250 00:05:14.599 tick 100 00:05:14.599 tick 100 00:05:14.599 test_end 00:05:14.599 00:05:14.599 real 0m1.188s 00:05:14.599 user 0m1.099s 00:05:14.599 sys 0m0.084s 00:05:14.599 22:50:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:14.599 22:50:46 -- common/autotest_common.sh@10 -- # set +x 00:05:14.599 ************************************ 00:05:14.599 END TEST event_reactor 00:05:14.599 ************************************ 00:05:14.599 22:50:46 -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:14.599 22:50:46 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:05:14.599 22:50:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:14.599 22:50:46 -- common/autotest_common.sh@10 -- # set +x 00:05:14.599 ************************************ 00:05:14.599 START TEST event_reactor_perf 00:05:14.599 ************************************ 00:05:14.599 22:50:46 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:14.599 [2024-07-24 22:50:46.699755] 
Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:14.599 [2024-07-24 22:50:46.699848] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3026731 ] 00:05:14.599 EAL: No free 2048 kB hugepages reported on node 1 00:05:14.599 [2024-07-24 22:50:46.773054] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.599 [2024-07-24 22:50:46.807093] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.537 test_start 00:05:15.537 test_end 00:05:15.537 Performance: 517216 events per second 00:05:15.537 00:05:15.537 real 0m1.188s 00:05:15.537 user 0m1.100s 00:05:15.537 sys 0m0.085s 00:05:15.537 22:50:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:15.537 22:50:47 -- common/autotest_common.sh@10 -- # set +x 00:05:15.537 ************************************ 00:05:15.537 END TEST event_reactor_perf 00:05:15.537 ************************************ 00:05:15.537 22:50:47 -- event/event.sh@49 -- # uname -s 00:05:15.537 22:50:47 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:15.537 22:50:47 -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:15.537 22:50:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:15.537 22:50:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:15.537 22:50:47 -- common/autotest_common.sh@10 -- # set +x 00:05:15.537 ************************************ 00:05:15.537 START TEST event_scheduler 00:05:15.537 ************************************ 00:05:15.537 22:50:47 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:15.796 * Looking for test storage... 
00:05:15.796 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:15.796 22:50:48 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:15.796 22:50:48 -- scheduler/scheduler.sh@35 -- # scheduler_pid=3027036 00:05:15.796 22:50:48 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:15.796 22:50:48 -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:15.796 22:50:48 -- scheduler/scheduler.sh@37 -- # waitforlisten 3027036 00:05:15.796 22:50:48 -- common/autotest_common.sh@819 -- # '[' -z 3027036 ']' 00:05:15.796 22:50:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:15.796 22:50:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:15.796 22:50:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:15.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:15.796 22:50:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:15.796 22:50:48 -- common/autotest_common.sh@10 -- # set +x 00:05:15.796 [2024-07-24 22:50:48.064217] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:05:15.796 [2024-07-24 22:50:48.064275] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3027036 ] 00:05:15.796 EAL: No free 2048 kB hugepages reported on node 1 00:05:15.796 [2024-07-24 22:50:48.132640] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:15.796 [2024-07-24 22:50:48.170350] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.796 [2024-07-24 22:50:48.170435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:15.796 [2024-07-24 22:50:48.170517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:15.796 [2024-07-24 22:50:48.170519] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:16.733 22:50:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:16.733 22:50:48 -- common/autotest_common.sh@852 -- # return 0 00:05:16.733 22:50:48 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:16.733 22:50:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:16.733 22:50:48 -- common/autotest_common.sh@10 -- # set +x 00:05:16.733 POWER: Env isn't set yet! 00:05:16.733 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:16.733 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:16.733 POWER: Cannot set governor of lcore 0 to userspace 00:05:16.733 POWER: Attempting to initialise PSTAT power management... 
00:05:16.733 POWER: Power management governor of lcore 0 has been set to 'performance' successfully
00:05:16.733 POWER: Initialized successfully for lcore 0 power management
00:05:16.733 POWER: Power management governor of lcore 1 has been set to 'performance' successfully
00:05:16.733 POWER: Initialized successfully for lcore 1 power management
00:05:16.733 POWER: Power management governor of lcore 2 has been set to 'performance' successfully
00:05:16.733 POWER: Initialized successfully for lcore 2 power management
00:05:16.733 POWER: Power management governor of lcore 3 has been set to 'performance' successfully
00:05:16.733 POWER: Initialized successfully for lcore 3 power management
00:05:16.733 [2024-07-24 22:50:48.903901] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20
00:05:16.733 [2024-07-24 22:50:48.903917] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80
00:05:16.733 [2024-07-24 22:50:48.903930] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95
00:05:16.733 22:50:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:05:16.733 22:50:48 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:05:16.733 22:50:48 -- common/autotest_common.sh@551 -- # xtrace_disable
00:05:16.733 22:50:48 -- common/autotest_common.sh@10 -- # set +x
00:05:16.733 [2024-07-24 22:50:48.967525] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
00:05:16.733 22:50:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:05:16.733 22:50:48 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:05:16.733 22:50:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:05:16.733 22:50:48 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:05:16.733 22:50:48 -- common/autotest_common.sh@10 -- # set +x
00:05:16.733 ************************************
00:05:16.733 START TEST scheduler_create_thread
00:05:16.733 ************************************
00:05:16.733 22:50:48 -- common/autotest_common.sh@1104 -- # scheduler_create_thread
00:05:16.733 22:50:48 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:05:16.733 22:50:48 -- common/autotest_common.sh@551 -- # xtrace_disable
00:05:16.733 22:50:48 -- common/autotest_common.sh@10 -- # set +x
00:05:16.733 2
00:05:16.733 22:50:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:05:16.733 22:50:48 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:05:16.733 22:50:48 -- common/autotest_common.sh@551 -- # xtrace_disable
00:05:16.733 22:50:48 -- common/autotest_common.sh@10 -- # set +x
00:05:16.733 3
00:05:16.733 22:50:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:05:16.733 22:50:49 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:05:16.733 22:50:49 -- common/autotest_common.sh@551 -- # xtrace_disable
00:05:16.733 22:50:49 -- common/autotest_common.sh@10 -- # set +x
00:05:16.733 4
00:05:16.733 22:50:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:05:16.733 22:50:49 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
00:05:16.733 22:50:49 -- common/autotest_common.sh@551 -- # xtrace_disable
00:05:16.733 22:50:49 -- common/autotest_common.sh@10 -- # set +x
00:05:16.733 5
00:05:16.733 22:50:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:05:16.733 22:50:49 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:05:16.733 22:50:49 -- common/autotest_common.sh@551 -- # xtrace_disable
00:05:16.733 22:50:49 -- common/autotest_common.sh@10 -- # set +x
00:05:16.733 6
00:05:16.733 22:50:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:05:16.733 22:50:49 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:05:16.733 22:50:49 -- common/autotest_common.sh@551 -- # xtrace_disable
00:05:16.733 22:50:49 -- common/autotest_common.sh@10 -- # set +x
00:05:16.733 7
00:05:16.733 22:50:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:05:16.733 22:50:49 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:05:16.733 22:50:49 -- common/autotest_common.sh@551 -- # xtrace_disable
00:05:16.733 22:50:49 -- common/autotest_common.sh@10 -- # set +x
00:05:16.733 8
00:05:16.733 22:50:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:05:16.733 22:50:49 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:05:16.733 22:50:49 -- common/autotest_common.sh@551 -- # xtrace_disable
00:05:16.733 22:50:49 -- common/autotest_common.sh@10 -- # set +x
00:05:16.733 9
00:05:16.733 22:50:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:05:16.733 22:50:49 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:05:16.733 22:50:49 -- common/autotest_common.sh@551 -- # xtrace_disable
00:05:16.733 22:50:49 -- common/autotest_common.sh@10 -- # set +x
00:05:16.733 10
00:05:16.733 22:50:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:05:16.733 22:50:49 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:05:16.733 22:50:49 -- common/autotest_common.sh@551 -- # xtrace_disable
00:05:16.733 22:50:49 -- common/autotest_common.sh@10 -- # set +x
00:05:16.733 22:50:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:05:16.733 22:50:49 -- scheduler/scheduler.sh@22 -- # thread_id=11
00:05:16.733 22:50:49 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:05:16.733 22:50:49 -- common/autotest_common.sh@551 -- # xtrace_disable
00:05:16.733 22:50:49 -- common/autotest_common.sh@10 -- # set +x
00:05:17.671 22:50:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:05:17.671 22:50:49 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
00:05:17.671 22:50:49 -- common/autotest_common.sh@551 -- # xtrace_disable
00:05:17.671 22:50:49 -- common/autotest_common.sh@10 -- # set +x
00:05:19.049 22:50:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:05:19.049 22:50:51 -- scheduler/scheduler.sh@25 -- # thread_id=12
00:05:19.049 22:50:51 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
00:05:19.049 22:50:51 -- common/autotest_common.sh@551 -- # xtrace_disable
00:05:19.049 22:50:51 -- common/autotest_common.sh@10 -- # set +x
00:05:19.986 22:50:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:05:19.986
00:05:19.986 real 0m3.382s
00:05:19.986 user 0m0.018s
00:05:19.986 sys 0m0.012s
00:05:19.986 22:50:52 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:19.986 22:50:52 -- common/autotest_common.sh@10 -- # set +x
00:05:19.986 ************************************
00:05:19.986 END TEST scheduler_create_thread
00:05:19.986 ************************************
00:05:19.986 22:50:52 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:05:19.986 22:50:52 -- scheduler/scheduler.sh@46 -- # killprocess 3027036
00:05:19.986 22:50:52 -- common/autotest_common.sh@926 -- # '[' -z 3027036 ']'
00:05:19.986 22:50:52 -- common/autotest_common.sh@930 -- # kill -0 3027036
00:05:19.986 22:50:52 -- common/autotest_common.sh@931 -- # uname
00:05:19.986 22:50:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:05:19.986 22:50:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3027036
00:05:20.245 22:50:52 -- common/autotest_common.sh@932 -- # process_name=reactor_2
00:05:20.245 22:50:52 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']'
00:05:20.245 22:50:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3027036'
00:05:20.245 killing process with pid 3027036
00:05:20.245 22:50:52 -- common/autotest_common.sh@945 -- # kill 3027036
00:05:20.245 22:50:52 -- common/autotest_common.sh@950 -- # wait 3027036
00:05:20.503 [2024-07-24 22:50:52.739340] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped.
00:05:20.503 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully
00:05:20.503 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original
00:05:20.503 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully
00:05:20.504 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original
00:05:20.504 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully
00:05:20.504 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original
00:05:20.504 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully
00:05:20.504 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original
00:05:20.762
00:05:20.762 real 0m5.039s
00:05:20.762 user 0m10.430s
00:05:20.762 sys 0m0.402s
00:05:20.762 22:50:52 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:20.762 22:50:52 -- common/autotest_common.sh@10 -- # set +x
00:05:20.762 ************************************
00:05:20.762 END TEST event_scheduler
00:05:20.762 ************************************
00:05:20.762 22:50:53 -- event/event.sh@51 -- # modprobe -n nbd
00:05:20.762 22:50:53 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test
00:05:20.762 22:50:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:05:20.762 22:50:53 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:05:20.762 22:50:53 -- common/autotest_common.sh@10 -- # set +x
00:05:20.762 ************************************
00:05:20.762 START TEST app_repeat
00:05:20.762 ************************************
00:05:20.762 22:50:53 -- common/autotest_common.sh@1104 -- # app_repeat_test
00:05:20.762 22:50:53 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:20.762 22:50:53 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:20.762 22:50:53 -- event/event.sh@13 -- # local nbd_list
00:05:20.762 22:50:53 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:20.762 22:50:53 -- event/event.sh@14 -- # local bdev_list
00:05:20.762 22:50:53 -- event/event.sh@15 -- # local repeat_times=4
00:05:20.762 22:50:53 -- event/event.sh@17 -- # modprobe nbd
00:05:20.762 22:50:53 -- event/event.sh@19 -- # repeat_pid=3027897
00:05:20.762 22:50:53 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
00:05:20.762 22:50:53 -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4
00:05:20.762 22:50:53 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3027897'
00:05:20.762 Process app_repeat pid: 3027897
00:05:20.762 22:50:53 -- event/event.sh@23 -- # for i in {0..2}
00:05:20.762 22:50:53 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0'
00:05:20.762 spdk_app_start Round 0
00:05:20.762 22:50:53 -- event/event.sh@25 -- # waitforlisten 3027897 /var/tmp/spdk-nbd.sock
00:05:20.762 22:50:53 -- common/autotest_common.sh@819 -- # '[' -z 3027897 ']'
00:05:20.763 22:50:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:05:20.763 22:50:53 -- common/autotest_common.sh@824 -- # local max_retries=100
00:05:20.763 22:50:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:05:20.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:05:20.763 22:50:53 -- common/autotest_common.sh@828 -- # xtrace_disable
00:05:20.763 22:50:53 -- common/autotest_common.sh@10 -- # set +x
00:05:20.763 [2024-07-24 22:50:53.046792] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:05:20.763 [2024-07-24 22:50:53.046856] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3027897 ]
00:05:20.763 EAL: No free 2048 kB hugepages reported on node 1
00:05:20.763 [2024-07-24 22:50:53.120915] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:20.763 [2024-07-24 22:50:53.158487] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:05:20.763 [2024-07-24 22:50:53.158490] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:21.700 22:50:53 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:05:21.700 22:50:53 -- common/autotest_common.sh@852 -- # return 0
00:05:21.700 22:50:53 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:21.700 Malloc0
00:05:21.700 22:50:54 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:21.959 Malloc1
00:05:21.959 22:50:54 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:21.959 22:50:54 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:21.959 22:50:54 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:21.959 22:50:54 -- bdev/nbd_common.sh@91 -- # local bdev_list
00:05:21.959 22:50:54 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:21.959 22:50:54 -- bdev/nbd_common.sh@92 -- # local nbd_list
00:05:21.959 22:50:54 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:21.959 22:50:54 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:21.959 22:50:54 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:21.959 22:50:54 -- bdev/nbd_common.sh@10 -- # local bdev_list
00:05:21.959 22:50:54 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:21.959 22:50:54 -- bdev/nbd_common.sh@11 -- # local nbd_list
00:05:21.959 22:50:54 -- bdev/nbd_common.sh@12 -- # local i
00:05:21.959 22:50:54 -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:05:21.959 22:50:54 -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:21.959 22:50:54 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:05:22.218 /dev/nbd0
00:05:22.218 22:50:54 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:05:22.218 22:50:54 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:05:22.218 22:50:54 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0
00:05:22.218 22:50:54 -- common/autotest_common.sh@857 -- # local i
00:05:22.218 22:50:54 -- common/autotest_common.sh@859 -- # (( i = 1 ))
00:05:22.218 22:50:54 -- common/autotest_common.sh@859 -- # (( i <= 20 ))
00:05:22.218 22:50:54 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions
00:05:22.218 22:50:54 -- common/autotest_common.sh@861 -- # break
00:05:22.218 22:50:54 -- common/autotest_common.sh@872 -- # (( i = 1 ))
00:05:22.218 22:50:54 -- common/autotest_common.sh@872 -- # (( i <= 20 ))
00:05:22.218 22:50:54 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:22.218 1+0 records in
00:05:22.218 1+0 records out
00:05:22.218 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000236931 s, 17.3 MB/s
00:05:22.218 22:50:54 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:22.218 22:50:54 -- common/autotest_common.sh@874 -- # size=4096
00:05:22.218 22:50:54 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:22.218 22:50:54 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']'
00:05:22.218 22:50:54 -- common/autotest_common.sh@877 -- # return 0
00:05:22.218 22:50:54 -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:22.218 22:50:54 -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:22.218 22:50:54 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:05:22.218 /dev/nbd1
00:05:22.218 22:50:54 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:05:22.218 22:50:54 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:05:22.218 22:50:54 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1
00:05:22.218 22:50:54 -- common/autotest_common.sh@857 -- # local i
00:05:22.219 22:50:54 -- common/autotest_common.sh@859 -- # (( i = 1 ))
00:05:22.219 22:50:54 -- common/autotest_common.sh@859 -- # (( i <= 20 ))
00:05:22.219 22:50:54 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions
00:05:22.219 22:50:54 -- common/autotest_common.sh@861 -- # break
00:05:22.219 22:50:54 -- common/autotest_common.sh@872 -- # (( i = 1 ))
00:05:22.219 22:50:54 -- common/autotest_common.sh@872 -- # (( i <= 20 ))
00:05:22.219 22:50:54 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:22.219 1+0 records in
00:05:22.219 1+0 records out
00:05:22.219 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000268102 s, 15.3 MB/s
00:05:22.219 22:50:54 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:22.219 22:50:54 -- common/autotest_common.sh@874 -- # size=4096
00:05:22.219 22:50:54 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:22.219 22:50:54 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']'
00:05:22.219 22:50:54 -- common/autotest_common.sh@877 -- # return 0
00:05:22.219 22:50:54 -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:22.219 22:50:54 -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:22.219 22:50:54 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:22.219 22:50:54 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:22.219 22:50:54 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:22.477 22:50:54 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:05:22.477 {
00:05:22.477 "nbd_device": "/dev/nbd0",
00:05:22.477 "bdev_name": "Malloc0"
00:05:22.477 },
00:05:22.477 {
00:05:22.477 "nbd_device": "/dev/nbd1",
00:05:22.477 "bdev_name": "Malloc1"
00:05:22.477 }
00:05:22.477 ]'
00:05:22.477 22:50:54 -- bdev/nbd_common.sh@64 -- # echo '[
00:05:22.477 {
00:05:22.477 "nbd_device": "/dev/nbd0",
00:05:22.477 "bdev_name": "Malloc0"
00:05:22.477 },
00:05:22.477 {
00:05:22.477 "nbd_device": "/dev/nbd1",
00:05:22.477 "bdev_name": "Malloc1"
00:05:22.477 }
00:05:22.477 ]'
00:05:22.477 22:50:54 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:22.477 22:50:54 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:05:22.477 /dev/nbd1'
00:05:22.477 22:50:54 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:05:22.477 /dev/nbd1'
00:05:22.477 22:50:54 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:22.477 22:50:54 -- bdev/nbd_common.sh@65 -- # count=2
00:05:22.477 22:50:54 -- bdev/nbd_common.sh@66 -- # echo 2
00:05:22.477 22:50:54 -- bdev/nbd_common.sh@95 -- # count=2
00:05:22.477 22:50:54 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:05:22.477 22:50:54 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:05:22.477 22:50:54 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:22.477 22:50:54 -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:22.477 22:50:54 -- bdev/nbd_common.sh@71 -- # local operation=write
00:05:22.477 22:50:54 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:05:22.477 22:50:54 -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:05:22.477 22:50:54 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:05:22.477 256+0 records in
00:05:22.477 256+0 records out
00:05:22.477 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0113433 s, 92.4 MB/s
00:05:22.477 22:50:54 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:22.477 22:50:54 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:05:22.477 256+0 records in
00:05:22.477 256+0 records out
00:05:22.477 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0190912 s, 54.9 MB/s
00:05:22.477 22:50:54 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:22.477 22:50:54 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:05:22.736 256+0 records in
00:05:22.736 256+0 records out
00:05:22.736 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0212048 s, 49.4 MB/s
00:05:22.736 22:50:54 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:05:22.736 22:50:54 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:22.736 22:50:54 -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:22.736 22:50:54 -- bdev/nbd_common.sh@71 -- # local operation=verify
00:05:22.736 22:50:54 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:05:22.736 22:50:54 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:05:22.736 22:50:54 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:05:22.736 22:50:54 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:22.736 22:50:54 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:05:22.736 22:50:54 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:22.736 22:50:54 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:05:22.736 22:50:54 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:05:22.736 22:50:54 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:05:22.736 22:50:54 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:22.736 22:50:54 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:22.736 22:50:54 -- bdev/nbd_common.sh@50 -- # local nbd_list
00:05:22.736 22:50:54 -- bdev/nbd_common.sh@51 -- # local i
00:05:22.736 22:50:54 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:22.736 22:50:54 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:05:22.736 22:50:55 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:05:22.736 22:50:55 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:05:22.736 22:50:55 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:05:22.736 22:50:55 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:22.736 22:50:55 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:22.736 22:50:55 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:05:22.736 22:50:55 -- bdev/nbd_common.sh@41 -- # break
00:05:22.736 22:50:55 -- bdev/nbd_common.sh@45 -- # return 0
00:05:22.736 22:50:55 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:22.736 22:50:55 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:05:22.997 22:50:55 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:05:22.997 22:50:55 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:05:22.997 22:50:55 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:05:22.997 22:50:55 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:22.997 22:50:55 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:22.997 22:50:55 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:05:22.997 22:50:55 -- bdev/nbd_common.sh@41 -- # break
00:05:22.997 22:50:55 -- bdev/nbd_common.sh@45 -- # return 0
00:05:22.997 22:50:55 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:22.997 22:50:55 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:22.997 22:50:55 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:23.258 22:50:55 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:05:23.258 22:50:55 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:23.258 22:50:55 -- bdev/nbd_common.sh@64 -- # echo '[]'
00:05:23.258 22:50:55 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:05:23.258 22:50:55 -- bdev/nbd_common.sh@65 -- # echo ''
00:05:23.258 22:50:55 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:23.258 22:50:55 -- bdev/nbd_common.sh@65 -- # true
00:05:23.258 22:50:55 -- bdev/nbd_common.sh@65 -- # count=0
00:05:23.258 22:50:55 -- bdev/nbd_common.sh@66 -- # echo 0
00:05:23.258 22:50:55 -- bdev/nbd_common.sh@104 -- # count=0
00:05:23.258 22:50:55 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:05:23.258 22:50:55 -- bdev/nbd_common.sh@109 -- # return 0
00:05:23.258 22:50:55 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:05:23.517 22:50:55 -- event/event.sh@35 -- # sleep 3
00:05:23.517 [2024-07-24 22:50:55.883351] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:23.517 [2024-07-24 22:50:55.915531] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:05:23.517 [2024-07-24 22:50:55.915535] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:23.775 [2024-07-24 22:50:55.956021] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:05:23.775 [2024-07-24 22:50:55.956062] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:05:26.309 22:50:58 -- event/event.sh@23 -- # for i in {0..2}
00:05:26.309 22:50:58 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1'
00:05:26.309 spdk_app_start Round 1
00:05:26.309 22:50:58 -- event/event.sh@25 -- # waitforlisten 3027897 /var/tmp/spdk-nbd.sock
00:05:26.309 22:50:58 -- common/autotest_common.sh@819 -- # '[' -z 3027897 ']'
00:05:26.309 22:50:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:05:26.309 22:50:58 -- common/autotest_common.sh@824 -- # local max_retries=100
00:05:26.309 22:50:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:05:26.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:05:26.309 22:50:58 -- common/autotest_common.sh@828 -- # xtrace_disable
00:05:26.309 22:50:58 -- common/autotest_common.sh@10 -- # set +x
00:05:26.568 22:50:58 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:05:26.568 22:50:58 -- common/autotest_common.sh@852 -- # return 0
00:05:26.568 22:50:58 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:26.827 Malloc0
00:05:26.827 22:50:59 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:26.827 Malloc1
00:05:26.827 22:50:59 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:26.827 22:50:59 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:26.827 22:50:59 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:26.827 22:50:59 -- bdev/nbd_common.sh@91 -- # local bdev_list
00:05:26.827 22:50:59 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:26.827 22:50:59 -- bdev/nbd_common.sh@92 -- # local nbd_list
00:05:26.827 22:50:59 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:26.827 22:50:59 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:26.827 22:50:59 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:26.827 22:50:59 -- bdev/nbd_common.sh@10 -- # local bdev_list
00:05:26.827 22:50:59 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:26.827 22:50:59 -- bdev/nbd_common.sh@11 -- # local nbd_list
00:05:26.827 22:50:59 -- bdev/nbd_common.sh@12 -- # local i
00:05:26.827 22:50:59 -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:05:26.827 22:50:59 -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:26.827 22:50:59 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:05:27.087 /dev/nbd0
00:05:27.087 22:50:59 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:05:27.087 22:50:59 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:05:27.087 22:50:59 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0
00:05:27.087 22:50:59 -- common/autotest_common.sh@857 -- # local i
00:05:27.087 22:50:59 -- common/autotest_common.sh@859 -- # (( i = 1 ))
00:05:27.087 22:50:59 -- common/autotest_common.sh@859 -- # (( i <= 20 ))
00:05:27.087 22:50:59 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions
00:05:27.087 22:50:59 -- common/autotest_common.sh@861 -- # break
00:05:27.087 22:50:59 -- common/autotest_common.sh@872 -- # (( i = 1 ))
00:05:27.087 22:50:59 -- common/autotest_common.sh@872 -- # (( i <= 20 ))
00:05:27.087 22:50:59 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:27.087 1+0 records in
00:05:27.087 1+0 records out
00:05:27.087 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000255169 s, 16.1 MB/s
00:05:27.087 22:50:59 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:27.087 22:50:59 -- common/autotest_common.sh@874 -- # size=4096
00:05:27.087 22:50:59 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:27.087 22:50:59 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']'
00:05:27.087 22:50:59 -- common/autotest_common.sh@877 -- # return 0
00:05:27.087 22:50:59 -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:27.087 22:50:59 -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:27.087 22:50:59 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:05:27.347 /dev/nbd1
00:05:27.347 22:50:59 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:05:27.347 22:50:59 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:05:27.347 22:50:59 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1
00:05:27.347 22:50:59 -- common/autotest_common.sh@857 -- # local i
00:05:27.347 22:50:59 -- common/autotest_common.sh@859 -- # (( i = 1 ))
00:05:27.347 22:50:59 -- common/autotest_common.sh@859 -- # (( i <= 20 ))
00:05:27.347 22:50:59 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions
00:05:27.347 22:50:59 -- common/autotest_common.sh@861 -- # break
00:05:27.347 22:50:59 -- common/autotest_common.sh@872 -- # (( i = 1 ))
00:05:27.347 22:50:59 -- common/autotest_common.sh@872 -- # (( i <= 20 ))
00:05:27.347 22:50:59 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:27.347 1+0 records in
00:05:27.347 1+0 records out
00:05:27.347 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000223106 s, 18.4 MB/s
00:05:27.347 22:50:59 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:27.347 22:50:59 -- common/autotest_common.sh@874 -- # size=4096
00:05:27.347 22:50:59 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:27.347 22:50:59 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']'
00:05:27.347 22:50:59 -- common/autotest_common.sh@877 -- # return 0
00:05:27.347 22:50:59 -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:27.347 22:50:59 -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:27.347 22:50:59 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:27.347 22:50:59 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:27.347 22:50:59 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:27.639 22:50:59 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:05:27.639 {
00:05:27.639 "nbd_device": "/dev/nbd0",
00:05:27.639 "bdev_name": "Malloc0"
00:05:27.639 },
00:05:27.639 {
00:05:27.639 "nbd_device": "/dev/nbd1",
00:05:27.639 "bdev_name": "Malloc1"
00:05:27.639 }
00:05:27.639 ]'
00:05:27.639 22:50:59 -- bdev/nbd_common.sh@64 -- # echo '[
00:05:27.639 {
00:05:27.639 "nbd_device": "/dev/nbd0",
00:05:27.639 "bdev_name": "Malloc0"
00:05:27.639 },
00:05:27.639 {
00:05:27.639 "nbd_device": "/dev/nbd1",
00:05:27.639 "bdev_name": "Malloc1"
00:05:27.639 }
00:05:27.639 ]'
00:05:27.639 22:50:59 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:27.639 22:50:59 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:05:27.639 /dev/nbd1'
00:05:27.639 22:50:59 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:05:27.639 /dev/nbd1'
00:05:27.639 22:50:59 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:27.639 22:50:59 -- bdev/nbd_common.sh@65 -- # count=2
00:05:27.639 22:50:59 -- bdev/nbd_common.sh@66 -- # echo 2
00:05:27.639 22:50:59 -- bdev/nbd_common.sh@95 -- # count=2
00:05:27.639 22:50:59 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:05:27.639 22:50:59 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:05:27.639 22:50:59 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:27.639 22:50:59 -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:27.639 22:50:59 -- bdev/nbd_common.sh@71 -- # local operation=write
00:05:27.639 22:50:59 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:05:27.639 22:50:59 -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:05:27.639 22:50:59 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:05:27.639 256+0 records in
00:05:27.639 256+0 records out
00:05:27.639 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0114116 s, 91.9 MB/s
00:05:27.639 22:50:59 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:27.639 22:50:59 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:05:27.639 256+0 records in
00:05:27.639 256+0 records out
00:05:27.639 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0198022 s, 53.0 MB/s
00:05:27.639 22:50:59 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:27.639 22:50:59 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:05:27.639 256+0 records in
00:05:27.639 256+0 records out
00:05:27.639 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0208304 s, 50.3 MB/s
00:05:27.639 22:50:59 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:05:27.639 22:50:59 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:27.639 22:50:59 -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:27.639 22:50:59 -- bdev/nbd_common.sh@71 -- # local operation=verify
00:05:27.639 22:50:59 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:05:27.639 22:50:59 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:05:27.639 22:50:59 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:05:27.639 22:50:59 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:27.639 22:50:59 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:05:27.639 22:50:59 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:27.639 22:50:59 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:05:27.639 22:50:59 -- bdev/nbd_common.sh@85 -- # rm
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:27.639 22:50:59 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:27.639 22:50:59 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.639 22:50:59 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.639 22:50:59 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:27.639 22:50:59 -- bdev/nbd_common.sh@51 -- # local i 00:05:27.639 22:50:59 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:27.639 22:50:59 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:27.899 22:51:00 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:27.899 22:51:00 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:27.899 22:51:00 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:27.899 22:51:00 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:27.899 22:51:00 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:27.899 22:51:00 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:27.899 22:51:00 -- bdev/nbd_common.sh@41 -- # break 00:05:27.899 22:51:00 -- bdev/nbd_common.sh@45 -- # return 0 00:05:27.899 22:51:00 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:27.899 22:51:00 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:27.899 22:51:00 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:27.899 22:51:00 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:27.899 22:51:00 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:27.899 22:51:00 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:27.899 22:51:00 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:27.899 22:51:00 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:27.899 22:51:00 -- 
bdev/nbd_common.sh@41 -- # break 00:05:27.899 22:51:00 -- bdev/nbd_common.sh@45 -- # return 0 00:05:27.899 22:51:00 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:27.899 22:51:00 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.899 22:51:00 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:28.157 22:51:00 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:28.157 22:51:00 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:28.157 22:51:00 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:28.157 22:51:00 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:28.157 22:51:00 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:28.157 22:51:00 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:28.157 22:51:00 -- bdev/nbd_common.sh@65 -- # true 00:05:28.157 22:51:00 -- bdev/nbd_common.sh@65 -- # count=0 00:05:28.157 22:51:00 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:28.157 22:51:00 -- bdev/nbd_common.sh@104 -- # count=0 00:05:28.157 22:51:00 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:28.157 22:51:00 -- bdev/nbd_common.sh@109 -- # return 0 00:05:28.157 22:51:00 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:28.416 22:51:00 -- event/event.sh@35 -- # sleep 3 00:05:28.675 [2024-07-24 22:51:00.880312] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:28.676 [2024-07-24 22:51:00.914520] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:28.676 [2024-07-24 22:51:00.914523] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.676 [2024-07-24 22:51:00.956406] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 
00:05:28.676 [2024-07-24 22:51:00.956444] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:31.966 22:51:03 -- event/event.sh@23 -- # for i in {0..2} 00:05:31.966 22:51:03 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:31.966 spdk_app_start Round 2 00:05:31.966 22:51:03 -- event/event.sh@25 -- # waitforlisten 3027897 /var/tmp/spdk-nbd.sock 00:05:31.966 22:51:03 -- common/autotest_common.sh@819 -- # '[' -z 3027897 ']' 00:05:31.966 22:51:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:31.966 22:51:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:31.966 22:51:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:31.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:31.966 22:51:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:31.966 22:51:03 -- common/autotest_common.sh@10 -- # set +x 00:05:31.966 22:51:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:31.966 22:51:03 -- common/autotest_common.sh@852 -- # return 0 00:05:31.966 22:51:03 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:31.966 Malloc0 00:05:31.966 22:51:04 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:31.966 Malloc1 00:05:31.966 22:51:04 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:31.966 22:51:04 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.966 22:51:04 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:31.966 22:51:04 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:31.966 22:51:04 -- 
bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:31.966 22:51:04 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:31.966 22:51:04 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:31.966 22:51:04 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.966 22:51:04 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:31.966 22:51:04 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:31.966 22:51:04 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:31.966 22:51:04 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:31.966 22:51:04 -- bdev/nbd_common.sh@12 -- # local i 00:05:31.966 22:51:04 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:31.966 22:51:04 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:31.966 22:51:04 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:32.225 /dev/nbd0 00:05:32.225 22:51:04 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:32.225 22:51:04 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:32.225 22:51:04 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:05:32.225 22:51:04 -- common/autotest_common.sh@857 -- # local i 00:05:32.225 22:51:04 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:32.225 22:51:04 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:32.225 22:51:04 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:05:32.225 22:51:04 -- common/autotest_common.sh@861 -- # break 00:05:32.225 22:51:04 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:32.225 22:51:04 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:32.225 22:51:04 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:32.225 1+0 records in 00:05:32.225 
1+0 records out 00:05:32.225 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000229998 s, 17.8 MB/s 00:05:32.225 22:51:04 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:32.225 22:51:04 -- common/autotest_common.sh@874 -- # size=4096 00:05:32.225 22:51:04 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:32.225 22:51:04 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:32.225 22:51:04 -- common/autotest_common.sh@877 -- # return 0 00:05:32.225 22:51:04 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:32.226 22:51:04 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:32.226 22:51:04 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:32.226 /dev/nbd1 00:05:32.226 22:51:04 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:32.226 22:51:04 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:32.226 22:51:04 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:05:32.226 22:51:04 -- common/autotest_common.sh@857 -- # local i 00:05:32.226 22:51:04 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:32.226 22:51:04 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:32.226 22:51:04 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:05:32.226 22:51:04 -- common/autotest_common.sh@861 -- # break 00:05:32.226 22:51:04 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:32.226 22:51:04 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:32.226 22:51:04 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:32.226 1+0 records in 00:05:32.226 1+0 records out 00:05:32.226 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000196534 s, 20.8 MB/s 00:05:32.226 22:51:04 -- 
common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:32.226 22:51:04 -- common/autotest_common.sh@874 -- # size=4096 00:05:32.226 22:51:04 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:32.226 22:51:04 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:32.226 22:51:04 -- common/autotest_common.sh@877 -- # return 0 00:05:32.226 22:51:04 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:32.226 22:51:04 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:32.226 22:51:04 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:32.226 22:51:04 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:32.485 22:51:04 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:32.485 22:51:04 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:32.485 { 00:05:32.485 "nbd_device": "/dev/nbd0", 00:05:32.485 "bdev_name": "Malloc0" 00:05:32.485 }, 00:05:32.485 { 00:05:32.485 "nbd_device": "/dev/nbd1", 00:05:32.485 "bdev_name": "Malloc1" 00:05:32.485 } 00:05:32.485 ]' 00:05:32.485 22:51:04 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:32.485 { 00:05:32.485 "nbd_device": "/dev/nbd0", 00:05:32.485 "bdev_name": "Malloc0" 00:05:32.485 }, 00:05:32.485 { 00:05:32.485 "nbd_device": "/dev/nbd1", 00:05:32.485 "bdev_name": "Malloc1" 00:05:32.485 } 00:05:32.485 ]' 00:05:32.485 22:51:04 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:32.485 22:51:04 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:32.485 /dev/nbd1' 00:05:32.485 22:51:04 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:32.485 /dev/nbd1' 00:05:32.485 22:51:04 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:32.485 22:51:04 -- bdev/nbd_common.sh@65 -- # count=2 00:05:32.485 22:51:04 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:32.485 
22:51:04 -- bdev/nbd_common.sh@95 -- # count=2 00:05:32.485 22:51:04 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:32.485 22:51:04 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:32.485 22:51:04 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:32.485 22:51:04 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:32.485 22:51:04 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:32.485 22:51:04 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:32.485 22:51:04 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:32.485 22:51:04 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:32.485 256+0 records in 00:05:32.485 256+0 records out 00:05:32.485 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0107969 s, 97.1 MB/s 00:05:32.485 22:51:04 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:32.485 22:51:04 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:32.745 256+0 records in 00:05:32.745 256+0 records out 00:05:32.745 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0197458 s, 53.1 MB/s 00:05:32.745 22:51:04 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:32.745 22:51:04 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:32.745 256+0 records in 00:05:32.745 256+0 records out 00:05:32.745 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0140867 s, 74.4 MB/s 00:05:32.745 22:51:04 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:32.745 22:51:04 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:32.745 22:51:04 -- bdev/nbd_common.sh@70 -- # local nbd_list 
00:05:32.745 22:51:04 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:32.745 22:51:04 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:32.745 22:51:04 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:32.745 22:51:04 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:32.745 22:51:04 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:32.745 22:51:04 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:32.745 22:51:04 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:32.745 22:51:04 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:32.745 22:51:04 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:32.745 22:51:04 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:32.745 22:51:04 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:32.745 22:51:04 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:32.745 22:51:04 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:32.745 22:51:04 -- bdev/nbd_common.sh@51 -- # local i 00:05:32.745 22:51:04 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:32.745 22:51:04 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:32.745 22:51:05 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:32.745 22:51:05 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:32.745 22:51:05 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:32.745 22:51:05 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:32.745 22:51:05 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:32.745 22:51:05 -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:32.745 22:51:05 -- bdev/nbd_common.sh@41 -- # break 00:05:32.745 22:51:05 -- bdev/nbd_common.sh@45 -- # return 0 00:05:32.745 22:51:05 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:32.745 22:51:05 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:33.004 22:51:05 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:33.004 22:51:05 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:33.004 22:51:05 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:33.004 22:51:05 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:33.004 22:51:05 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:33.004 22:51:05 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:33.004 22:51:05 -- bdev/nbd_common.sh@41 -- # break 00:05:33.004 22:51:05 -- bdev/nbd_common.sh@45 -- # return 0 00:05:33.004 22:51:05 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:33.004 22:51:05 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.004 22:51:05 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:33.263 22:51:05 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:33.263 22:51:05 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:33.263 22:51:05 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:33.263 22:51:05 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:33.263 22:51:05 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:33.263 22:51:05 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:33.263 22:51:05 -- bdev/nbd_common.sh@65 -- # true 00:05:33.263 22:51:05 -- bdev/nbd_common.sh@65 -- # count=0 00:05:33.263 22:51:05 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:33.263 22:51:05 -- bdev/nbd_common.sh@104 -- # count=0 00:05:33.263 22:51:05 -- 
bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:33.263 22:51:05 -- bdev/nbd_common.sh@109 -- # return 0 00:05:33.263 22:51:05 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:33.521 22:51:05 -- event/event.sh@35 -- # sleep 3 00:05:33.522 [2024-07-24 22:51:05.903786] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:33.522 [2024-07-24 22:51:05.937845] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:33.522 [2024-07-24 22:51:05.937849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.781 [2024-07-24 22:51:05.978454] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:33.781 [2024-07-24 22:51:05.978497] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:36.316 22:51:08 -- event/event.sh@38 -- # waitforlisten 3027897 /var/tmp/spdk-nbd.sock 00:05:36.316 22:51:08 -- common/autotest_common.sh@819 -- # '[' -z 3027897 ']' 00:05:36.316 22:51:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:36.316 22:51:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:36.316 22:51:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:36.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:36.316 22:51:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:36.316 22:51:08 -- common/autotest_common.sh@10 -- # set +x 00:05:36.576 22:51:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:36.576 22:51:08 -- common/autotest_common.sh@852 -- # return 0 00:05:36.576 22:51:08 -- event/event.sh@39 -- # killprocess 3027897 00:05:36.576 22:51:08 -- common/autotest_common.sh@926 -- # '[' -z 3027897 ']' 00:05:36.576 22:51:08 -- common/autotest_common.sh@930 -- # kill -0 3027897 00:05:36.576 22:51:08 -- common/autotest_common.sh@931 -- # uname 00:05:36.576 22:51:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:36.576 22:51:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3027897 00:05:36.576 22:51:08 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:36.576 22:51:08 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:36.576 22:51:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3027897' 00:05:36.576 killing process with pid 3027897 00:05:36.576 22:51:08 -- common/autotest_common.sh@945 -- # kill 3027897 00:05:36.576 22:51:08 -- common/autotest_common.sh@950 -- # wait 3027897 00:05:36.835 spdk_app_start is called in Round 0. 00:05:36.835 Shutdown signal received, stop current app iteration 00:05:36.835 Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 reinitialization... 00:05:36.835 spdk_app_start is called in Round 1. 00:05:36.835 Shutdown signal received, stop current app iteration 00:05:36.835 Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 reinitialization... 00:05:36.835 spdk_app_start is called in Round 2. 00:05:36.835 Shutdown signal received, stop current app iteration 00:05:36.835 Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 reinitialization... 00:05:36.835 spdk_app_start is called in Round 3. 
00:05:36.835 Shutdown signal received, stop current app iteration 00:05:36.835 22:51:09 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:36.835 22:51:09 -- event/event.sh@42 -- # return 0 00:05:36.835 00:05:36.835 real 0m16.091s 00:05:36.835 user 0m34.280s 00:05:36.835 sys 0m2.942s 00:05:36.836 22:51:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:36.836 22:51:09 -- common/autotest_common.sh@10 -- # set +x 00:05:36.836 ************************************ 00:05:36.836 END TEST app_repeat 00:05:36.836 ************************************ 00:05:36.836 22:51:09 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:36.836 22:51:09 -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:36.836 22:51:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:36.836 22:51:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:36.836 22:51:09 -- common/autotest_common.sh@10 -- # set +x 00:05:36.836 ************************************ 00:05:36.836 START TEST cpu_locks 00:05:36.836 ************************************ 00:05:36.836 22:51:09 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:36.836 * Looking for test storage... 
00:05:36.836 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:36.836 22:51:09 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:36.836 22:51:09 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:36.836 22:51:09 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:36.836 22:51:09 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:36.836 22:51:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:36.836 22:51:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:36.836 22:51:09 -- common/autotest_common.sh@10 -- # set +x 00:05:36.836 ************************************ 00:05:36.836 START TEST default_locks 00:05:36.836 ************************************ 00:05:36.836 22:51:09 -- common/autotest_common.sh@1104 -- # default_locks 00:05:37.095 22:51:09 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3031614 00:05:37.096 22:51:09 -- event/cpu_locks.sh@47 -- # waitforlisten 3031614 00:05:37.096 22:51:09 -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:37.096 22:51:09 -- common/autotest_common.sh@819 -- # '[' -z 3031614 ']' 00:05:37.096 22:51:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:37.096 22:51:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:37.096 22:51:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:37.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:37.096 22:51:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:37.096 22:51:09 -- common/autotest_common.sh@10 -- # set +x 00:05:37.096 [2024-07-24 22:51:09.314131] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:05:37.096 [2024-07-24 22:51:09.314191] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3031614 ] 00:05:37.096 EAL: No free 2048 kB hugepages reported on node 1 00:05:37.096 [2024-07-24 22:51:09.385414] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.096 [2024-07-24 22:51:09.421371] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:37.096 [2024-07-24 22:51:09.421485] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.033 22:51:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:38.033 22:51:10 -- common/autotest_common.sh@852 -- # return 0 00:05:38.033 22:51:10 -- event/cpu_locks.sh@49 -- # locks_exist 3031614 00:05:38.033 22:51:10 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:38.033 22:51:10 -- event/cpu_locks.sh@22 -- # lslocks -p 3031614 00:05:38.292 lslocks: write error 00:05:38.292 22:51:10 -- event/cpu_locks.sh@50 -- # killprocess 3031614 00:05:38.292 22:51:10 -- common/autotest_common.sh@926 -- # '[' -z 3031614 ']' 00:05:38.292 22:51:10 -- common/autotest_common.sh@930 -- # kill -0 3031614 00:05:38.292 22:51:10 -- common/autotest_common.sh@931 -- # uname 00:05:38.292 22:51:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:38.292 22:51:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3031614 00:05:38.551 22:51:10 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:38.551 22:51:10 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:38.551 22:51:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3031614' 00:05:38.551 killing process with pid 3031614 00:05:38.551 22:51:10 -- common/autotest_common.sh@945 -- # kill 3031614 00:05:38.551 22:51:10 -- common/autotest_common.sh@950 -- # 
wait 3031614 00:05:38.810 22:51:11 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3031614 00:05:38.810 22:51:11 -- common/autotest_common.sh@640 -- # local es=0 00:05:38.810 22:51:11 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 3031614 00:05:38.810 22:51:11 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:05:38.810 22:51:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:38.810 22:51:11 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:05:38.810 22:51:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:38.810 22:51:11 -- common/autotest_common.sh@643 -- # waitforlisten 3031614 00:05:38.810 22:51:11 -- common/autotest_common.sh@819 -- # '[' -z 3031614 ']' 00:05:38.810 22:51:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.810 22:51:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:38.810 22:51:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:38.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:38.810 22:51:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:38.810 22:51:11 -- common/autotest_common.sh@10 -- # set +x 00:05:38.810 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (3031614) - No such process 00:05:38.810 ERROR: process (pid: 3031614) is no longer running 00:05:38.810 22:51:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:38.810 22:51:11 -- common/autotest_common.sh@852 -- # return 1 00:05:38.810 22:51:11 -- common/autotest_common.sh@643 -- # es=1 00:05:38.810 22:51:11 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:38.810 22:51:11 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:38.810 22:51:11 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:38.810 22:51:11 -- event/cpu_locks.sh@54 -- # no_locks 00:05:38.810 22:51:11 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:38.810 22:51:11 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:38.810 22:51:11 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:38.810 00:05:38.810 real 0m1.781s 00:05:38.810 user 0m1.852s 00:05:38.810 sys 0m0.644s 00:05:38.810 22:51:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.810 22:51:11 -- common/autotest_common.sh@10 -- # set +x 00:05:38.810 ************************************ 00:05:38.810 END TEST default_locks 00:05:38.810 ************************************ 00:05:38.810 22:51:11 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:38.810 22:51:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:38.810 22:51:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:38.810 22:51:11 -- common/autotest_common.sh@10 -- # set +x 00:05:38.810 ************************************ 00:05:38.810 START TEST default_locks_via_rpc 00:05:38.810 ************************************ 00:05:38.810 22:51:11 -- common/autotest_common.sh@1104 -- # default_locks_via_rpc 00:05:38.810 22:51:11 -- 
event/cpu_locks.sh@62 -- # spdk_tgt_pid=3031917 00:05:38.810 22:51:11 -- event/cpu_locks.sh@63 -- # waitforlisten 3031917 00:05:38.810 22:51:11 -- common/autotest_common.sh@819 -- # '[' -z 3031917 ']' 00:05:38.810 22:51:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.810 22:51:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:38.810 22:51:11 -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:38.810 22:51:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:38.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:38.810 22:51:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:38.810 22:51:11 -- common/autotest_common.sh@10 -- # set +x 00:05:38.810 [2024-07-24 22:51:11.130141] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:05:38.810 [2024-07-24 22:51:11.130195] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3031917 ] 00:05:38.810 EAL: No free 2048 kB hugepages reported on node 1 00:05:38.810 [2024-07-24 22:51:11.200358] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.810 [2024-07-24 22:51:11.237215] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:38.810 [2024-07-24 22:51:11.237337] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.747 22:51:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:39.747 22:51:11 -- common/autotest_common.sh@852 -- # return 0 00:05:39.747 22:51:11 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:39.747 22:51:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:39.747 22:51:11 -- common/autotest_common.sh@10 -- # set +x 00:05:39.747 22:51:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:39.747 22:51:11 -- event/cpu_locks.sh@67 -- # no_locks 00:05:39.747 22:51:11 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:39.747 22:51:11 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:39.747 22:51:11 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:39.747 22:51:11 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:39.747 22:51:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:39.747 22:51:11 -- common/autotest_common.sh@10 -- # set +x 00:05:39.747 22:51:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:39.747 22:51:11 -- event/cpu_locks.sh@71 -- # locks_exist 3031917 00:05:39.747 22:51:11 -- event/cpu_locks.sh@22 -- # lslocks -p 3031917 00:05:39.747 22:51:11 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:39.747 22:51:12 -- event/cpu_locks.sh@73 -- # killprocess 3031917 
00:05:39.747 22:51:12 -- common/autotest_common.sh@926 -- # '[' -z 3031917 ']' 00:05:39.747 22:51:12 -- common/autotest_common.sh@930 -- # kill -0 3031917 00:05:39.747 22:51:12 -- common/autotest_common.sh@931 -- # uname 00:05:39.747 22:51:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:39.747 22:51:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3031917 00:05:40.006 22:51:12 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:40.006 22:51:12 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:40.006 22:51:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3031917' 00:05:40.006 killing process with pid 3031917 00:05:40.006 22:51:12 -- common/autotest_common.sh@945 -- # kill 3031917 00:05:40.006 22:51:12 -- common/autotest_common.sh@950 -- # wait 3031917 00:05:40.266 00:05:40.266 real 0m1.388s 00:05:40.266 user 0m1.435s 00:05:40.266 sys 0m0.483s 00:05:40.266 22:51:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.266 22:51:12 -- common/autotest_common.sh@10 -- # set +x 00:05:40.266 ************************************ 00:05:40.266 END TEST default_locks_via_rpc 00:05:40.266 ************************************ 00:05:40.266 22:51:12 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:40.266 22:51:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:40.266 22:51:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:40.266 22:51:12 -- common/autotest_common.sh@10 -- # set +x 00:05:40.266 ************************************ 00:05:40.266 START TEST non_locking_app_on_locked_coremask 00:05:40.266 ************************************ 00:05:40.266 22:51:12 -- common/autotest_common.sh@1104 -- # non_locking_app_on_locked_coremask 00:05:40.266 22:51:12 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3032213 00:05:40.266 22:51:12 -- event/cpu_locks.sh@81 -- # waitforlisten 3032213 
/var/tmp/spdk.sock 00:05:40.266 22:51:12 -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:40.266 22:51:12 -- common/autotest_common.sh@819 -- # '[' -z 3032213 ']' 00:05:40.266 22:51:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.266 22:51:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:40.266 22:51:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:40.266 22:51:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:40.266 22:51:12 -- common/autotest_common.sh@10 -- # set +x 00:05:40.266 [2024-07-24 22:51:12.574959] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:40.266 [2024-07-24 22:51:12.575016] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3032213 ] 00:05:40.266 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.266 [2024-07-24 22:51:12.647648] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.266 [2024-07-24 22:51:12.680958] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:40.266 [2024-07-24 22:51:12.681083] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.202 22:51:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:41.202 22:51:13 -- common/autotest_common.sh@852 -- # return 0 00:05:41.202 22:51:13 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3032340 00:05:41.202 22:51:13 -- event/cpu_locks.sh@85 -- # waitforlisten 3032340 /var/tmp/spdk2.sock 00:05:41.202 22:51:13 -- event/cpu_locks.sh@83 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:41.202 22:51:13 -- common/autotest_common.sh@819 -- # '[' -z 3032340 ']' 00:05:41.202 22:51:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:41.202 22:51:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:41.202 22:51:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:41.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:41.202 22:51:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:41.202 22:51:13 -- common/autotest_common.sh@10 -- # set +x 00:05:41.202 [2024-07-24 22:51:13.405430] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:41.202 [2024-07-24 22:51:13.405485] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3032340 ] 00:05:41.202 EAL: No free 2048 kB hugepages reported on node 1 00:05:41.202 [2024-07-24 22:51:13.505340] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:41.202 [2024-07-24 22:51:13.505373] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.202 [2024-07-24 22:51:13.581730] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:41.202 [2024-07-24 22:51:13.581850] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.769 22:51:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:41.769 22:51:14 -- common/autotest_common.sh@852 -- # return 0 00:05:41.769 22:51:14 -- event/cpu_locks.sh@87 -- # locks_exist 3032213 00:05:41.769 22:51:14 -- event/cpu_locks.sh@22 -- # lslocks -p 3032213 00:05:41.769 22:51:14 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:42.336 lslocks: write error 00:05:42.336 22:51:14 -- event/cpu_locks.sh@89 -- # killprocess 3032213 00:05:42.336 22:51:14 -- common/autotest_common.sh@926 -- # '[' -z 3032213 ']' 00:05:42.336 22:51:14 -- common/autotest_common.sh@930 -- # kill -0 3032213 00:05:42.336 22:51:14 -- common/autotest_common.sh@931 -- # uname 00:05:42.336 22:51:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:42.336 22:51:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3032213 00:05:42.336 22:51:14 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:42.336 22:51:14 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:42.337 22:51:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3032213' 00:05:42.337 killing process with pid 3032213 00:05:42.337 22:51:14 -- common/autotest_common.sh@945 -- # kill 3032213 00:05:42.337 22:51:14 -- common/autotest_common.sh@950 -- # wait 3032213 00:05:42.904 22:51:15 -- event/cpu_locks.sh@90 -- # killprocess 3032340 00:05:42.904 22:51:15 -- common/autotest_common.sh@926 -- # '[' -z 3032340 ']' 00:05:42.904 22:51:15 -- common/autotest_common.sh@930 -- # kill -0 3032340 00:05:42.904 22:51:15 -- common/autotest_common.sh@931 -- # uname 00:05:42.904 22:51:15 -- 
common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:42.904 22:51:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3032340 00:05:42.905 22:51:15 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:42.905 22:51:15 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:42.905 22:51:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3032340' 00:05:42.905 killing process with pid 3032340 00:05:42.905 22:51:15 -- common/autotest_common.sh@945 -- # kill 3032340 00:05:42.905 22:51:15 -- common/autotest_common.sh@950 -- # wait 3032340 00:05:43.163 00:05:43.163 real 0m3.069s 00:05:43.163 user 0m3.228s 00:05:43.163 sys 0m0.965s 00:05:43.164 22:51:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.164 22:51:15 -- common/autotest_common.sh@10 -- # set +x 00:05:43.164 ************************************ 00:05:43.164 END TEST non_locking_app_on_locked_coremask 00:05:43.164 ************************************ 00:05:43.423 22:51:15 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:43.423 22:51:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:43.423 22:51:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:43.423 22:51:15 -- common/autotest_common.sh@10 -- # set +x 00:05:43.423 ************************************ 00:05:43.423 START TEST locking_app_on_unlocked_coremask 00:05:43.424 ************************************ 00:05:43.424 22:51:15 -- common/autotest_common.sh@1104 -- # locking_app_on_unlocked_coremask 00:05:43.424 22:51:15 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3032785 00:05:43.424 22:51:15 -- event/cpu_locks.sh@99 -- # waitforlisten 3032785 /var/tmp/spdk.sock 00:05:43.424 22:51:15 -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:43.424 22:51:15 -- common/autotest_common.sh@819 -- # '[' -z 3032785 ']' 
00:05:43.424 22:51:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.424 22:51:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:43.424 22:51:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:43.424 22:51:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:43.424 22:51:15 -- common/autotest_common.sh@10 -- # set +x 00:05:43.424 [2024-07-24 22:51:15.691569] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:43.424 [2024-07-24 22:51:15.691624] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3032785 ] 00:05:43.424 EAL: No free 2048 kB hugepages reported on node 1 00:05:43.424 [2024-07-24 22:51:15.759745] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:43.424 [2024-07-24 22:51:15.759773] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.424 [2024-07-24 22:51:15.792459] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:43.424 [2024-07-24 22:51:15.792581] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.364 22:51:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:44.364 22:51:16 -- common/autotest_common.sh@852 -- # return 0 00:05:44.364 22:51:16 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3032948 00:05:44.364 22:51:16 -- event/cpu_locks.sh@103 -- # waitforlisten 3032948 /var/tmp/spdk2.sock 00:05:44.364 22:51:16 -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:44.364 22:51:16 -- common/autotest_common.sh@819 -- # '[' -z 3032948 ']' 00:05:44.364 22:51:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:44.364 22:51:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:44.364 22:51:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:44.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:44.364 22:51:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:44.364 22:51:16 -- common/autotest_common.sh@10 -- # set +x 00:05:44.364 [2024-07-24 22:51:16.518072] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:05:44.364 [2024-07-24 22:51:16.518129] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3032948 ] 00:05:44.364 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.364 [2024-07-24 22:51:16.621782] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.364 [2024-07-24 22:51:16.693854] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:44.364 [2024-07-24 22:51:16.693971] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.969 22:51:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:44.969 22:51:17 -- common/autotest_common.sh@852 -- # return 0 00:05:44.969 22:51:17 -- event/cpu_locks.sh@105 -- # locks_exist 3032948 00:05:44.969 22:51:17 -- event/cpu_locks.sh@22 -- # lslocks -p 3032948 00:05:44.969 22:51:17 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:45.537 lslocks: write error 00:05:45.537 22:51:17 -- event/cpu_locks.sh@107 -- # killprocess 3032785 00:05:45.537 22:51:17 -- common/autotest_common.sh@926 -- # '[' -z 3032785 ']' 00:05:45.537 22:51:17 -- common/autotest_common.sh@930 -- # kill -0 3032785 00:05:45.537 22:51:17 -- common/autotest_common.sh@931 -- # uname 00:05:45.537 22:51:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:45.537 22:51:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3032785 00:05:45.537 22:51:17 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:45.537 22:51:17 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:45.537 22:51:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3032785' 00:05:45.537 killing process with pid 3032785 00:05:45.537 22:51:17 -- common/autotest_common.sh@945 -- # kill 3032785 00:05:45.537 22:51:17 -- common/autotest_common.sh@950 -- # 
wait 3032785 00:05:46.106 22:51:18 -- event/cpu_locks.sh@108 -- # killprocess 3032948 00:05:46.106 22:51:18 -- common/autotest_common.sh@926 -- # '[' -z 3032948 ']' 00:05:46.106 22:51:18 -- common/autotest_common.sh@930 -- # kill -0 3032948 00:05:46.106 22:51:18 -- common/autotest_common.sh@931 -- # uname 00:05:46.106 22:51:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:46.106 22:51:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3032948 00:05:46.106 22:51:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:46.106 22:51:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:46.106 22:51:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3032948' 00:05:46.106 killing process with pid 3032948 00:05:46.106 22:51:18 -- common/autotest_common.sh@945 -- # kill 3032948 00:05:46.106 22:51:18 -- common/autotest_common.sh@950 -- # wait 3032948 00:05:46.365 00:05:46.365 real 0m3.052s 00:05:46.365 user 0m3.204s 00:05:46.365 sys 0m0.954s 00:05:46.365 22:51:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.365 22:51:18 -- common/autotest_common.sh@10 -- # set +x 00:05:46.365 ************************************ 00:05:46.365 END TEST locking_app_on_unlocked_coremask 00:05:46.365 ************************************ 00:05:46.365 22:51:18 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:46.365 22:51:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:46.365 22:51:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:46.365 22:51:18 -- common/autotest_common.sh@10 -- # set +x 00:05:46.365 ************************************ 00:05:46.365 START TEST locking_app_on_locked_coremask 00:05:46.365 ************************************ 00:05:46.365 22:51:18 -- common/autotest_common.sh@1104 -- # locking_app_on_locked_coremask 00:05:46.365 22:51:18 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3033367 
00:05:46.365 22:51:18 -- event/cpu_locks.sh@116 -- # waitforlisten 3033367 /var/tmp/spdk.sock 00:05:46.365 22:51:18 -- common/autotest_common.sh@819 -- # '[' -z 3033367 ']' 00:05:46.365 22:51:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.365 22:51:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:46.365 22:51:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.365 22:51:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:46.365 22:51:18 -- common/autotest_common.sh@10 -- # set +x 00:05:46.365 22:51:18 -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:46.365 [2024-07-24 22:51:18.780131] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:46.365 [2024-07-24 22:51:18.780187] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3033367 ] 00:05:46.624 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.624 [2024-07-24 22:51:18.851676] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.624 [2024-07-24 22:51:18.889821] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:46.624 [2024-07-24 22:51:18.889933] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.192 22:51:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:47.192 22:51:19 -- common/autotest_common.sh@852 -- # return 0 00:05:47.192 22:51:19 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3033525 00:05:47.192 22:51:19 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3033525 /var/tmp/spdk2.sock 
00:05:47.192 22:51:19 -- common/autotest_common.sh@640 -- # local es=0 00:05:47.192 22:51:19 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 3033525 /var/tmp/spdk2.sock 00:05:47.192 22:51:19 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:05:47.192 22:51:19 -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:47.192 22:51:19 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:47.192 22:51:19 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:05:47.192 22:51:19 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:47.192 22:51:19 -- common/autotest_common.sh@643 -- # waitforlisten 3033525 /var/tmp/spdk2.sock 00:05:47.192 22:51:19 -- common/autotest_common.sh@819 -- # '[' -z 3033525 ']' 00:05:47.192 22:51:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:47.192 22:51:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:47.192 22:51:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:47.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:47.192 22:51:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:47.192 22:51:19 -- common/autotest_common.sh@10 -- # set +x 00:05:47.192 [2024-07-24 22:51:19.599120] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:05:47.192 [2024-07-24 22:51:19.599174] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3033525 ] 00:05:47.451 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.451 [2024-07-24 22:51:19.698313] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3033367 has claimed it. 00:05:47.451 [2024-07-24 22:51:19.698351] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:48.020 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (3033525) - No such process 00:05:48.020 ERROR: process (pid: 3033525) is no longer running 00:05:48.020 22:51:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:48.020 22:51:20 -- common/autotest_common.sh@852 -- # return 1 00:05:48.020 22:51:20 -- common/autotest_common.sh@643 -- # es=1 00:05:48.020 22:51:20 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:48.020 22:51:20 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:48.020 22:51:20 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:48.020 22:51:20 -- event/cpu_locks.sh@122 -- # locks_exist 3033367 00:05:48.020 22:51:20 -- event/cpu_locks.sh@22 -- # lslocks -p 3033367 00:05:48.020 22:51:20 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:48.280 lslocks: write error 00:05:48.280 22:51:20 -- event/cpu_locks.sh@124 -- # killprocess 3033367 00:05:48.280 22:51:20 -- common/autotest_common.sh@926 -- # '[' -z 3033367 ']' 00:05:48.280 22:51:20 -- common/autotest_common.sh@930 -- # kill -0 3033367 00:05:48.280 22:51:20 -- common/autotest_common.sh@931 -- # uname 00:05:48.280 22:51:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:48.280 22:51:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3033367 00:05:48.280 22:51:20 -- 
common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:48.280 22:51:20 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:48.280 22:51:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3033367' 00:05:48.280 killing process with pid 3033367 00:05:48.280 22:51:20 -- common/autotest_common.sh@945 -- # kill 3033367 00:05:48.280 22:51:20 -- common/autotest_common.sh@950 -- # wait 3033367 00:05:48.549 00:05:48.549 real 0m2.190s 00:05:48.549 user 0m2.380s 00:05:48.549 sys 0m0.639s 00:05:48.549 22:51:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:48.549 22:51:20 -- common/autotest_common.sh@10 -- # set +x 00:05:48.549 ************************************ 00:05:48.549 END TEST locking_app_on_locked_coremask 00:05:48.549 ************************************ 00:05:48.549 22:51:20 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:48.549 22:51:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:48.549 22:51:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:48.549 22:51:20 -- common/autotest_common.sh@10 -- # set +x 00:05:48.549 ************************************ 00:05:48.549 START TEST locking_overlapped_coremask 00:05:48.549 ************************************ 00:05:48.549 22:51:20 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask 00:05:48.549 22:51:20 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3033769 00:05:48.549 22:51:20 -- event/cpu_locks.sh@133 -- # waitforlisten 3033769 /var/tmp/spdk.sock 00:05:48.549 22:51:20 -- common/autotest_common.sh@819 -- # '[' -z 3033769 ']' 00:05:48.549 22:51:20 -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:48.549 22:51:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.549 22:51:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:48.549 22:51:20 -- 
common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.549 22:51:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:48.549 22:51:20 -- common/autotest_common.sh@10 -- # set +x 00:05:48.815 [2024-07-24 22:51:21.002002] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:48.815 [2024-07-24 22:51:21.002057] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3033769 ] 00:05:48.815 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.815 [2024-07-24 22:51:21.073086] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:48.815 [2024-07-24 22:51:21.110779] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:48.815 [2024-07-24 22:51:21.110963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:48.815 [2024-07-24 22:51:21.111039] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.815 [2024-07-24 22:51:21.111040] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:49.383 22:51:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:49.383 22:51:21 -- common/autotest_common.sh@852 -- # return 0 00:05:49.383 22:51:21 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3033940 00:05:49.383 22:51:21 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3033940 /var/tmp/spdk2.sock 00:05:49.383 22:51:21 -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:49.383 22:51:21 -- common/autotest_common.sh@640 -- # local es=0 00:05:49.383 22:51:21 -- common/autotest_common.sh@642 -- # 
valid_exec_arg waitforlisten 3033940 /var/tmp/spdk2.sock 00:05:49.383 22:51:21 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:05:49.383 22:51:21 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:49.383 22:51:21 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:05:49.383 22:51:21 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:49.384 22:51:21 -- common/autotest_common.sh@643 -- # waitforlisten 3033940 /var/tmp/spdk2.sock 00:05:49.384 22:51:21 -- common/autotest_common.sh@819 -- # '[' -z 3033940 ']' 00:05:49.384 22:51:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:49.384 22:51:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:49.384 22:51:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:49.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:49.384 22:51:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:49.384 22:51:21 -- common/autotest_common.sh@10 -- # set +x 00:05:49.643 [2024-07-24 22:51:21.843628] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:49.643 [2024-07-24 22:51:21.843680] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3033940 ] 00:05:49.643 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.643 [2024-07-24 22:51:21.942028] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3033769 has claimed it. 00:05:49.643 [2024-07-24 22:51:21.942068] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:05:50.212 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (3033940) - No such process 00:05:50.212 ERROR: process (pid: 3033940) is no longer running 00:05:50.212 22:51:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:50.212 22:51:22 -- common/autotest_common.sh@852 -- # return 1 00:05:50.212 22:51:22 -- common/autotest_common.sh@643 -- # es=1 00:05:50.212 22:51:22 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:50.212 22:51:22 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:50.212 22:51:22 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:50.212 22:51:22 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:50.212 22:51:22 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:50.212 22:51:22 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:50.212 22:51:22 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:50.212 22:51:22 -- event/cpu_locks.sh@141 -- # killprocess 3033769 00:05:50.212 22:51:22 -- common/autotest_common.sh@926 -- # '[' -z 3033769 ']' 00:05:50.212 22:51:22 -- common/autotest_common.sh@930 -- # kill -0 3033769 00:05:50.212 22:51:22 -- common/autotest_common.sh@931 -- # uname 00:05:50.212 22:51:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:50.212 22:51:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3033769 00:05:50.212 22:51:22 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:50.212 22:51:22 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:50.212 22:51:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3033769' 00:05:50.212 killing process with pid 3033769 00:05:50.212 
22:51:22 -- common/autotest_common.sh@945 -- # kill 3033769 00:05:50.212 22:51:22 -- common/autotest_common.sh@950 -- # wait 3033769 00:05:50.471 00:05:50.471 real 0m1.843s 00:05:50.471 user 0m5.228s 00:05:50.471 sys 0m0.454s 00:05:50.471 22:51:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:50.471 22:51:22 -- common/autotest_common.sh@10 -- # set +x 00:05:50.471 ************************************ 00:05:50.471 END TEST locking_overlapped_coremask 00:05:50.471 ************************************ 00:05:50.471 22:51:22 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:50.471 22:51:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:50.471 22:51:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:50.471 22:51:22 -- common/autotest_common.sh@10 -- # set +x 00:05:50.471 ************************************ 00:05:50.471 START TEST locking_overlapped_coremask_via_rpc 00:05:50.471 ************************************ 00:05:50.471 22:51:22 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask_via_rpc 00:05:50.471 22:51:22 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3034233 00:05:50.471 22:51:22 -- event/cpu_locks.sh@149 -- # waitforlisten 3034233 /var/tmp/spdk.sock 00:05:50.471 22:51:22 -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:50.471 22:51:22 -- common/autotest_common.sh@819 -- # '[' -z 3034233 ']' 00:05:50.471 22:51:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.471 22:51:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:50.471 22:51:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:50.471 22:51:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:50.471 22:51:22 -- common/autotest_common.sh@10 -- # set +x 00:05:50.731 [2024-07-24 22:51:22.911423] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:50.731 [2024-07-24 22:51:22.911477] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3034233 ] 00:05:50.731 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.731 [2024-07-24 22:51:22.981845] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:50.731 [2024-07-24 22:51:22.981869] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:50.731 [2024-07-24 22:51:23.020762] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:50.731 [2024-07-24 22:51:23.020911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:50.731 [2024-07-24 22:51:23.021010] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.731 [2024-07-24 22:51:23.021011] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:51.299 22:51:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:51.299 22:51:23 -- common/autotest_common.sh@852 -- # return 0 00:05:51.299 22:51:23 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3034247 00:05:51.299 22:51:23 -- event/cpu_locks.sh@153 -- # waitforlisten 3034247 /var/tmp/spdk2.sock 00:05:51.299 22:51:23 -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:51.299 22:51:23 -- common/autotest_common.sh@819 -- # '[' -z 3034247 ']' 00:05:51.299 22:51:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:51.299 22:51:23 -- common/autotest_common.sh@824 -- # local 
max_retries=100 00:05:51.299 22:51:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:51.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:51.299 22:51:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:51.299 22:51:23 -- common/autotest_common.sh@10 -- # set +x 00:05:51.559 [2024-07-24 22:51:23.746451] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:51.559 [2024-07-24 22:51:23.746504] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3034247 ] 00:05:51.559 EAL: No free 2048 kB hugepages reported on node 1 00:05:51.559 [2024-07-24 22:51:23.845649] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:51.559 [2024-07-24 22:51:23.845676] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:51.559 [2024-07-24 22:51:23.921791] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:51.559 [2024-07-24 22:51:23.922003] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:51.559 [2024-07-24 22:51:23.925764] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:51.559 [2024-07-24 22:51:23.925765] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:05:52.127 22:51:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:52.127 22:51:24 -- common/autotest_common.sh@852 -- # return 0 00:05:52.127 22:51:24 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:52.127 22:51:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:52.127 22:51:24 -- common/autotest_common.sh@10 -- # set +x 00:05:52.127 22:51:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
00:05:52.127 22:51:24 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:52.127 22:51:24 -- common/autotest_common.sh@640 -- # local es=0 00:05:52.127 22:51:24 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:52.127 22:51:24 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:05:52.127 22:51:24 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:52.127 22:51:24 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:05:52.127 22:51:24 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:52.127 22:51:24 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:52.127 22:51:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:52.127 22:51:24 -- common/autotest_common.sh@10 -- # set +x 00:05:52.127 [2024-07-24 22:51:24.534782] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3034233 has claimed it. 
00:05:52.127 request: 00:05:52.127 { 00:05:52.127 "method": "framework_enable_cpumask_locks", 00:05:52.127 "req_id": 1 00:05:52.127 } 00:05:52.127 Got JSON-RPC error response 00:05:52.127 response: 00:05:52.127 { 00:05:52.127 "code": -32603, 00:05:52.127 "message": "Failed to claim CPU core: 2" 00:05:52.127 } 00:05:52.127 22:51:24 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:05:52.127 22:51:24 -- common/autotest_common.sh@643 -- # es=1 00:05:52.127 22:51:24 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:52.127 22:51:24 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:52.127 22:51:24 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:52.127 22:51:24 -- event/cpu_locks.sh@158 -- # waitforlisten 3034233 /var/tmp/spdk.sock 00:05:52.127 22:51:24 -- common/autotest_common.sh@819 -- # '[' -z 3034233 ']' 00:05:52.127 22:51:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.127 22:51:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:52.127 22:51:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:52.127 22:51:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:52.127 22:51:24 -- common/autotest_common.sh@10 -- # set +x 00:05:52.387 22:51:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:52.387 22:51:24 -- common/autotest_common.sh@852 -- # return 0 00:05:52.387 22:51:24 -- event/cpu_locks.sh@159 -- # waitforlisten 3034247 /var/tmp/spdk2.sock 00:05:52.387 22:51:24 -- common/autotest_common.sh@819 -- # '[' -z 3034247 ']' 00:05:52.387 22:51:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:52.387 22:51:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:52.387 22:51:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:52.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:52.387 22:51:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:52.387 22:51:24 -- common/autotest_common.sh@10 -- # set +x 00:05:52.646 22:51:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:52.646 22:51:24 -- common/autotest_common.sh@852 -- # return 0 00:05:52.646 22:51:24 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:52.646 22:51:24 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:52.646 22:51:24 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:52.646 22:51:24 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:52.646 00:05:52.646 real 0m2.041s 00:05:52.646 user 0m0.769s 00:05:52.646 sys 0m0.199s 00:05:52.646 22:51:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:52.646 22:51:24 -- common/autotest_common.sh@10 -- # set +x 00:05:52.646 
************************************ 00:05:52.646 END TEST locking_overlapped_coremask_via_rpc 00:05:52.646 ************************************ 00:05:52.646 22:51:24 -- event/cpu_locks.sh@174 -- # cleanup 00:05:52.646 22:51:24 -- event/cpu_locks.sh@15 -- # [[ -z 3034233 ]] 00:05:52.646 22:51:24 -- event/cpu_locks.sh@15 -- # killprocess 3034233 00:05:52.646 22:51:24 -- common/autotest_common.sh@926 -- # '[' -z 3034233 ']' 00:05:52.646 22:51:24 -- common/autotest_common.sh@930 -- # kill -0 3034233 00:05:52.646 22:51:24 -- common/autotest_common.sh@931 -- # uname 00:05:52.646 22:51:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:52.646 22:51:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3034233 00:05:52.646 22:51:24 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:52.647 22:51:24 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:52.647 22:51:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3034233' 00:05:52.647 killing process with pid 3034233 00:05:52.647 22:51:24 -- common/autotest_common.sh@945 -- # kill 3034233 00:05:52.647 22:51:24 -- common/autotest_common.sh@950 -- # wait 3034233 00:05:52.906 22:51:25 -- event/cpu_locks.sh@16 -- # [[ -z 3034247 ]] 00:05:52.906 22:51:25 -- event/cpu_locks.sh@16 -- # killprocess 3034247 00:05:52.906 22:51:25 -- common/autotest_common.sh@926 -- # '[' -z 3034247 ']' 00:05:52.906 22:51:25 -- common/autotest_common.sh@930 -- # kill -0 3034247 00:05:52.906 22:51:25 -- common/autotest_common.sh@931 -- # uname 00:05:52.906 22:51:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:52.906 22:51:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3034247 00:05:53.165 22:51:25 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:05:53.165 22:51:25 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:05:53.165 22:51:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 
3034247' 00:05:53.165 killing process with pid 3034247 00:05:53.165 22:51:25 -- common/autotest_common.sh@945 -- # kill 3034247 00:05:53.165 22:51:25 -- common/autotest_common.sh@950 -- # wait 3034247 00:05:53.425 22:51:25 -- event/cpu_locks.sh@18 -- # rm -f 00:05:53.425 22:51:25 -- event/cpu_locks.sh@1 -- # cleanup 00:05:53.425 22:51:25 -- event/cpu_locks.sh@15 -- # [[ -z 3034233 ]] 00:05:53.425 22:51:25 -- event/cpu_locks.sh@15 -- # killprocess 3034233 00:05:53.425 22:51:25 -- common/autotest_common.sh@926 -- # '[' -z 3034233 ']' 00:05:53.425 22:51:25 -- common/autotest_common.sh@930 -- # kill -0 3034233 00:05:53.425 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (3034233) - No such process 00:05:53.425 22:51:25 -- common/autotest_common.sh@953 -- # echo 'Process with pid 3034233 is not found' 00:05:53.425 Process with pid 3034233 is not found 00:05:53.425 22:51:25 -- event/cpu_locks.sh@16 -- # [[ -z 3034247 ]] 00:05:53.425 22:51:25 -- event/cpu_locks.sh@16 -- # killprocess 3034247 00:05:53.425 22:51:25 -- common/autotest_common.sh@926 -- # '[' -z 3034247 ']' 00:05:53.425 22:51:25 -- common/autotest_common.sh@930 -- # kill -0 3034247 00:05:53.425 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (3034247) - No such process 00:05:53.425 22:51:25 -- common/autotest_common.sh@953 -- # echo 'Process with pid 3034247 is not found' 00:05:53.425 Process with pid 3034247 is not found 00:05:53.425 22:51:25 -- event/cpu_locks.sh@18 -- # rm -f 00:05:53.425 00:05:53.425 real 0m16.504s 00:05:53.425 user 0m28.428s 00:05:53.425 sys 0m5.215s 00:05:53.425 22:51:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:53.425 22:51:25 -- common/autotest_common.sh@10 -- # set +x 00:05:53.425 ************************************ 00:05:53.425 END TEST cpu_locks 00:05:53.425 ************************************ 00:05:53.425 00:05:53.425 real 0m41.614s 00:05:53.425 user 1m19.579s 
00:05:53.425 sys 0m9.142s 00:05:53.425 22:51:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:53.425 22:51:25 -- common/autotest_common.sh@10 -- # set +x 00:05:53.425 ************************************ 00:05:53.425 END TEST event 00:05:53.425 ************************************ 00:05:53.425 22:51:25 -- spdk/autotest.sh@188 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:53.425 22:51:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:53.425 22:51:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:53.425 22:51:25 -- common/autotest_common.sh@10 -- # set +x 00:05:53.425 ************************************ 00:05:53.425 START TEST thread 00:05:53.425 ************************************ 00:05:53.425 22:51:25 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:53.425 * Looking for test storage... 00:05:53.425 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:53.425 22:51:25 -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:53.426 22:51:25 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:05:53.426 22:51:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:53.426 22:51:25 -- common/autotest_common.sh@10 -- # set +x 00:05:53.685 ************************************ 00:05:53.685 START TEST thread_poller_perf 00:05:53.685 ************************************ 00:05:53.685 22:51:25 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:53.685 [2024-07-24 22:51:25.882586] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:05:53.685 [2024-07-24 22:51:25.882679] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3034830 ] 00:05:53.685 EAL: No free 2048 kB hugepages reported on node 1 00:05:53.685 [2024-07-24 22:51:25.955363] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.685 [2024-07-24 22:51:25.991524] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.685 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:54.621 ====================================== 00:05:54.621 busy:2506172974 (cyc) 00:05:54.621 total_run_count: 417000 00:05:54.621 tsc_hz: 2500000000 (cyc) 00:05:54.621 ====================================== 00:05:54.621 poller_cost: 6010 (cyc), 2404 (nsec) 00:05:54.621 00:05:54.621 real 0m1.192s 00:05:54.621 user 0m1.096s 00:05:54.621 sys 0m0.092s 00:05:54.622 22:51:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:54.881 22:51:27 -- common/autotest_common.sh@10 -- # set +x 00:05:54.881 ************************************ 00:05:54.881 END TEST thread_poller_perf 00:05:54.881 ************************************ 00:05:54.881 22:51:27 -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:54.881 22:51:27 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:05:54.881 22:51:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:54.881 22:51:27 -- common/autotest_common.sh@10 -- # set +x 00:05:54.881 ************************************ 00:05:54.881 START TEST thread_poller_perf 00:05:54.881 ************************************ 00:05:54.881 22:51:27 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:54.881 
[2024-07-24 22:51:27.124487] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:54.881 [2024-07-24 22:51:27.124578] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3034966 ] 00:05:54.881 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.881 [2024-07-24 22:51:27.197120] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.881 [2024-07-24 22:51:27.231822] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.881 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:56.261 ====================================== 00:05:56.261 busy:2502400192 (cyc) 00:05:56.261 total_run_count: 5666000 00:05:56.261 tsc_hz: 2500000000 (cyc) 00:05:56.261 ====================================== 00:05:56.261 poller_cost: 441 (cyc), 176 (nsec) 00:05:56.261 00:05:56.261 real 0m1.187s 00:05:56.261 user 0m1.105s 00:05:56.261 sys 0m0.079s 00:05:56.261 22:51:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.261 22:51:28 -- common/autotest_common.sh@10 -- # set +x 00:05:56.261 ************************************ 00:05:56.261 END TEST thread_poller_perf 00:05:56.261 ************************************ 00:05:56.261 22:51:28 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:56.261 00:05:56.261 real 0m2.581s 00:05:56.261 user 0m2.272s 00:05:56.261 sys 0m0.329s 00:05:56.261 22:51:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.261 22:51:28 -- common/autotest_common.sh@10 -- # set +x 00:05:56.261 ************************************ 00:05:56.261 END TEST thread 00:05:56.261 ************************************ 00:05:56.261 22:51:28 -- spdk/autotest.sh@189 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:56.261 22:51:28 -- common/autotest_common.sh@1077 -- # 
'[' 2 -le 1 ']' 00:05:56.261 22:51:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:56.261 22:51:28 -- common/autotest_common.sh@10 -- # set +x 00:05:56.261 ************************************ 00:05:56.261 START TEST accel 00:05:56.261 ************************************ 00:05:56.261 22:51:28 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:56.261 * Looking for test storage... 00:05:56.261 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:05:56.261 22:51:28 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:05:56.261 22:51:28 -- accel/accel.sh@74 -- # get_expected_opcs 00:05:56.261 22:51:28 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:56.262 22:51:28 -- accel/accel.sh@59 -- # spdk_tgt_pid=3035227 00:05:56.262 22:51:28 -- accel/accel.sh@60 -- # waitforlisten 3035227 00:05:56.262 22:51:28 -- common/autotest_common.sh@819 -- # '[' -z 3035227 ']' 00:05:56.262 22:51:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.262 22:51:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:56.262 22:51:28 -- accel/accel.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:56.262 22:51:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.262 22:51:28 -- accel/accel.sh@58 -- # build_accel_config 00:05:56.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:56.262 22:51:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:56.262 22:51:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:56.262 22:51:28 -- common/autotest_common.sh@10 -- # set +x 00:05:56.262 22:51:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:56.262 22:51:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:56.262 22:51:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:56.262 22:51:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:56.262 22:51:28 -- accel/accel.sh@41 -- # local IFS=, 00:05:56.262 22:51:28 -- accel/accel.sh@42 -- # jq -r . 00:05:56.262 [2024-07-24 22:51:28.540369] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:56.262 [2024-07-24 22:51:28.540429] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3035227 ] 00:05:56.262 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.262 [2024-07-24 22:51:28.612419] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.262 [2024-07-24 22:51:28.651502] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:56.262 [2024-07-24 22:51:28.651615] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.200 22:51:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:57.200 22:51:29 -- common/autotest_common.sh@852 -- # return 0 00:05:57.200 22:51:29 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:57.200 22:51:29 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:05:57.200 22:51:29 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:05:57.200 22:51:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:57.200 22:51:29 -- common/autotest_common.sh@10 -- # set +x 00:05:57.200 22:51:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:57.200 22:51:29 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:57.200 22:51:29 -- accel/accel.sh@64 -- # IFS== 00:05:57.200 22:51:29 -- accel/accel.sh@64 -- # read -r opc module 00:05:57.200 22:51:29 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:57.200 22:51:29 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:57.200 22:51:29 -- accel/accel.sh@64 -- # IFS== 00:05:57.200 22:51:29 -- accel/accel.sh@64 -- # read -r opc module 00:05:57.200 22:51:29 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:57.200 22:51:29 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:57.200 22:51:29 -- accel/accel.sh@64 -- # IFS== 00:05:57.200 22:51:29 -- accel/accel.sh@64 -- # read -r opc module 00:05:57.200 22:51:29 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:57.200 22:51:29 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:57.200 22:51:29 -- accel/accel.sh@64 -- # IFS== 00:05:57.200 22:51:29 -- accel/accel.sh@64 -- # read -r opc module 00:05:57.200 22:51:29 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:57.200 22:51:29 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:57.200 22:51:29 -- accel/accel.sh@64 -- # IFS== 00:05:57.200 22:51:29 -- accel/accel.sh@64 -- # read -r opc module 00:05:57.200 22:51:29 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:57.200 22:51:29 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:57.200 22:51:29 -- accel/accel.sh@64 -- # IFS== 00:05:57.200 22:51:29 -- accel/accel.sh@64 -- # read -r opc module 00:05:57.200 22:51:29 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:57.200 22:51:29 -- accel/accel.sh@63 -- # for 
opc_opt in "${exp_opcs[@]}" 00:05:57.200 22:51:29 -- accel/accel.sh@64 -- # IFS== 00:05:57.200 22:51:29 -- accel/accel.sh@64 -- # read -r opc module 00:05:57.200 22:51:29 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:57.200 22:51:29 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:57.200 22:51:29 -- accel/accel.sh@64 -- # IFS== 00:05:57.200 22:51:29 -- accel/accel.sh@64 -- # read -r opc module 00:05:57.200 22:51:29 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:57.200 22:51:29 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:57.200 22:51:29 -- accel/accel.sh@64 -- # IFS== 00:05:57.200 22:51:29 -- accel/accel.sh@64 -- # read -r opc module 00:05:57.200 22:51:29 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:57.200 22:51:29 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:57.200 22:51:29 -- accel/accel.sh@64 -- # IFS== 00:05:57.200 22:51:29 -- accel/accel.sh@64 -- # read -r opc module 00:05:57.200 22:51:29 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:57.200 22:51:29 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:57.200 22:51:29 -- accel/accel.sh@64 -- # IFS== 00:05:57.200 22:51:29 -- accel/accel.sh@64 -- # read -r opc module 00:05:57.200 22:51:29 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:57.200 22:51:29 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:57.200 22:51:29 -- accel/accel.sh@64 -- # IFS== 00:05:57.200 22:51:29 -- accel/accel.sh@64 -- # read -r opc module 00:05:57.200 22:51:29 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:57.200 22:51:29 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:57.200 22:51:29 -- accel/accel.sh@64 -- # IFS== 00:05:57.200 22:51:29 -- accel/accel.sh@64 -- # read -r opc module 00:05:57.200 22:51:29 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:57.200 22:51:29 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 
00:05:57.200 22:51:29 -- accel/accel.sh@64 -- # IFS== 00:05:57.200 22:51:29 -- accel/accel.sh@64 -- # read -r opc module 00:05:57.200 22:51:29 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:57.200 22:51:29 -- accel/accel.sh@67 -- # killprocess 3035227 00:05:57.200 22:51:29 -- common/autotest_common.sh@926 -- # '[' -z 3035227 ']' 00:05:57.200 22:51:29 -- common/autotest_common.sh@930 -- # kill -0 3035227 00:05:57.200 22:51:29 -- common/autotest_common.sh@931 -- # uname 00:05:57.200 22:51:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:57.200 22:51:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3035227 00:05:57.200 22:51:29 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:57.200 22:51:29 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:57.200 22:51:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3035227' 00:05:57.201 killing process with pid 3035227 00:05:57.201 22:51:29 -- common/autotest_common.sh@945 -- # kill 3035227 00:05:57.201 22:51:29 -- common/autotest_common.sh@950 -- # wait 3035227 00:05:57.460 22:51:29 -- accel/accel.sh@68 -- # trap - ERR 00:05:57.460 22:51:29 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:05:57.460 22:51:29 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:05:57.460 22:51:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:57.460 22:51:29 -- common/autotest_common.sh@10 -- # set +x 00:05:57.460 22:51:29 -- common/autotest_common.sh@1104 -- # accel_perf -h 00:05:57.460 22:51:29 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:57.460 22:51:29 -- accel/accel.sh@12 -- # build_accel_config 00:05:57.460 22:51:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:57.460 22:51:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:57.460 22:51:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:57.460 22:51:29 -- accel/accel.sh@35 
-- # [[ 0 -gt 0 ]] 00:05:57.460 22:51:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:57.460 22:51:29 -- accel/accel.sh@41 -- # local IFS=, 00:05:57.460 22:51:29 -- accel/accel.sh@42 -- # jq -r . 00:05:57.460 22:51:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:57.460 22:51:29 -- common/autotest_common.sh@10 -- # set +x 00:05:57.460 22:51:29 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:57.460 22:51:29 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:05:57.460 22:51:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:57.460 22:51:29 -- common/autotest_common.sh@10 -- # set +x 00:05:57.460 ************************************ 00:05:57.460 START TEST accel_missing_filename 00:05:57.460 ************************************ 00:05:57.460 22:51:29 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress 00:05:57.460 22:51:29 -- common/autotest_common.sh@640 -- # local es=0 00:05:57.460 22:51:29 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:57.460 22:51:29 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:05:57.460 22:51:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:57.460 22:51:29 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:05:57.460 22:51:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:57.460 22:51:29 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress 00:05:57.461 22:51:29 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:57.461 22:51:29 -- accel/accel.sh@12 -- # build_accel_config 00:05:57.461 22:51:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:57.461 22:51:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:57.461 22:51:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:57.461 22:51:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:57.461 
22:51:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:57.461 22:51:29 -- accel/accel.sh@41 -- # local IFS=, 00:05:57.461 22:51:29 -- accel/accel.sh@42 -- # jq -r . 00:05:57.461 [2024-07-24 22:51:29.847086] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:57.461 [2024-07-24 22:51:29.847185] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3035519 ] 00:05:57.461 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.720 [2024-07-24 22:51:29.921601] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.720 [2024-07-24 22:51:29.958856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.720 [2024-07-24 22:51:30.000227] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:57.720 [2024-07-24 22:51:30.061434] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:05:57.720 A filename is required. 
00:05:57.720 22:51:30 -- common/autotest_common.sh@643 -- # es=234 00:05:57.720 22:51:30 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:57.720 22:51:30 -- common/autotest_common.sh@652 -- # es=106 00:05:57.720 22:51:30 -- common/autotest_common.sh@653 -- # case "$es" in 00:05:57.720 22:51:30 -- common/autotest_common.sh@660 -- # es=1 00:05:57.720 22:51:30 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:57.720 00:05:57.720 real 0m0.309s 00:05:57.720 user 0m0.218s 00:05:57.720 sys 0m0.130s 00:05:57.720 22:51:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:57.720 22:51:30 -- common/autotest_common.sh@10 -- # set +x 00:05:57.720 ************************************ 00:05:57.720 END TEST accel_missing_filename 00:05:57.720 ************************************ 00:05:57.980 22:51:30 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:57.980 22:51:30 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:05:57.980 22:51:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:57.980 22:51:30 -- common/autotest_common.sh@10 -- # set +x 00:05:57.980 ************************************ 00:05:57.980 START TEST accel_compress_verify 00:05:57.980 ************************************ 00:05:57.980 22:51:30 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:57.980 22:51:30 -- common/autotest_common.sh@640 -- # local es=0 00:05:57.980 22:51:30 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:57.980 22:51:30 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:05:57.980 22:51:30 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:57.980 22:51:30 -- common/autotest_common.sh@632 -- # type -t 
accel_perf 00:05:57.980 22:51:30 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:57.980 22:51:30 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:57.980 22:51:30 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:57.980 22:51:30 -- accel/accel.sh@12 -- # build_accel_config 00:05:57.980 22:51:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:57.980 22:51:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:57.980 22:51:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:57.980 22:51:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:57.980 22:51:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:57.980 22:51:30 -- accel/accel.sh@41 -- # local IFS=, 00:05:57.980 22:51:30 -- accel/accel.sh@42 -- # jq -r . 00:05:57.980 [2024-07-24 22:51:30.204233] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:57.980 [2024-07-24 22:51:30.204299] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3035599 ] 00:05:57.980 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.980 [2024-07-24 22:51:30.275344] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.980 [2024-07-24 22:51:30.310967] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.980 [2024-07-24 22:51:30.351979] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:58.240 [2024-07-24 22:51:30.412080] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:05:58.240 00:05:58.240 Compression does not support the verify option, aborting. 
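Before inverting a command's result, the harness first checks that the argument is actually runnable, which is the `valid_exec_arg` / `type -t accel_perf` step visible in the trace. An illustrative sketch of that dispatch (assumed behavior inferred from the trace, not the actual helper):

```shell
# Sketch of valid_exec_arg as traced via `type -t`.
# `type -t` prints alias, keyword, function, builtin, or file,
# and prints nothing for an unknown name.
valid_exec_arg() {
    local arg=$1
    case "$(type -t "$arg")" in
        function|builtin|file|alias|keyword) return 0 ;;
        *) return 1 ;;
    esac
}
```

An unknown name makes `type -t` print nothing, so the `*)` branch rejects it before the harness ever tries to execute it.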
00:05:58.240 22:51:30 -- common/autotest_common.sh@643 -- # es=161 00:05:58.240 22:51:30 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:58.240 22:51:30 -- common/autotest_common.sh@652 -- # es=33 00:05:58.240 22:51:30 -- common/autotest_common.sh@653 -- # case "$es" in 00:05:58.240 22:51:30 -- common/autotest_common.sh@660 -- # es=1 00:05:58.240 22:51:30 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:58.240 00:05:58.240 real 0m0.302s 00:05:58.240 user 0m0.209s 00:05:58.240 sys 0m0.133s 00:05:58.240 22:51:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.240 22:51:30 -- common/autotest_common.sh@10 -- # set +x 00:05:58.240 ************************************ 00:05:58.240 END TEST accel_compress_verify 00:05:58.240 ************************************ 00:05:58.240 22:51:30 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:58.240 22:51:30 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:05:58.240 22:51:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:58.240 22:51:30 -- common/autotest_common.sh@10 -- # set +x 00:05:58.240 ************************************ 00:05:58.240 START TEST accel_wrong_workload 00:05:58.240 ************************************ 00:05:58.240 22:51:30 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w foobar 00:05:58.240 22:51:30 -- common/autotest_common.sh@640 -- # local es=0 00:05:58.240 22:51:30 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:58.240 22:51:30 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:05:58.240 22:51:30 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:58.240 22:51:30 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:05:58.240 22:51:30 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:58.240 22:51:30 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w foobar 00:05:58.240 22:51:30 -- 
accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:58.240 22:51:30 -- accel/accel.sh@12 -- # build_accel_config 00:05:58.240 22:51:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:58.240 22:51:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:58.240 22:51:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:58.240 22:51:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:58.240 22:51:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:58.240 22:51:30 -- accel/accel.sh@41 -- # local IFS=, 00:05:58.240 22:51:30 -- accel/accel.sh@42 -- # jq -r . 00:05:58.240 Unsupported workload type: foobar 00:05:58.240 [2024-07-24 22:51:30.551240] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:58.240 accel_perf options: 00:05:58.240 [-h help message] 00:05:58.240 [-q queue depth per core] 00:05:58.240 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:58.240 [-T number of threads per core 00:05:58.240 [-o transfer size in bytes (default: 4KiB. 
For compress/decompress, 0 means the input file size)] 00:05:58.240 [-t time in seconds] 00:05:58.240 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:58.240 [ dif_verify, , dif_generate, dif_generate_copy 00:05:58.240 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:58.240 [-l for compress/decompress workloads, name of uncompressed input file 00:05:58.240 [-S for crc32c workload, use this seed value (default 0) 00:05:58.240 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:58.240 [-f for fill workload, use this BYTE value (default 255) 00:05:58.240 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:58.240 [-y verify result if this switch is on] 00:05:58.240 [-a tasks to allocate per core (default: same value as -q)] 00:05:58.240 Can be used to spread operations across a wider range of memory. 00:05:58.240 22:51:30 -- common/autotest_common.sh@643 -- # es=1 00:05:58.240 22:51:30 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:58.240 22:51:30 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:58.240 22:51:30 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:58.240 00:05:58.240 real 0m0.036s 00:05:58.240 user 0m0.014s 00:05:58.240 sys 0m0.021s 00:05:58.240 22:51:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.240 22:51:30 -- common/autotest_common.sh@10 -- # set +x 00:05:58.240 ************************************ 00:05:58.240 END TEST accel_wrong_workload 00:05:58.240 ************************************ 00:05:58.240 Error: writing output failed: Broken pipe 00:05:58.240 22:51:30 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:58.240 22:51:30 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:05:58.240 22:51:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 
00:05:58.240 22:51:30 -- common/autotest_common.sh@10 -- # set +x 00:05:58.240 ************************************ 00:05:58.240 START TEST accel_negative_buffers 00:05:58.240 ************************************ 00:05:58.240 22:51:30 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:58.240 22:51:30 -- common/autotest_common.sh@640 -- # local es=0 00:05:58.240 22:51:30 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:58.240 22:51:30 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:05:58.240 22:51:30 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:58.240 22:51:30 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:05:58.240 22:51:30 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:58.240 22:51:30 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w xor -y -x -1 00:05:58.240 22:51:30 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:05:58.240 22:51:30 -- accel/accel.sh@12 -- # build_accel_config 00:05:58.240 22:51:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:58.240 22:51:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:58.240 22:51:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:58.240 22:51:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:58.240 22:51:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:58.240 22:51:30 -- accel/accel.sh@41 -- # local IFS=, 00:05:58.240 22:51:30 -- accel/accel.sh@42 -- # jq -r . 00:05:58.240 -x option must be non-negative. 
00:05:58.240 [2024-07-24 22:51:30.627697] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:58.240 accel_perf options: 00:05:58.240 [-h help message] 00:05:58.240 [-q queue depth per core] 00:05:58.240 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:58.240 [-T number of threads per core 00:05:58.240 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:58.240 [-t time in seconds] 00:05:58.240 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:58.240 [ dif_verify, , dif_generate, dif_generate_copy 00:05:58.240 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:58.240 [-l for compress/decompress workloads, name of uncompressed input file 00:05:58.240 [-S for crc32c workload, use this seed value (default 0) 00:05:58.240 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:58.240 [-f for fill workload, use this BYTE value (default 255) 00:05:58.240 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:58.240 [-y verify result if this switch is on] 00:05:58.240 [-a tasks to allocate per core (default: same value as -q)] 00:05:58.241 Can be used to spread operations across a wider range of memory. 
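accel_perf rejects `-x -1` while parsing options, before any I/O is set up, which is why the run ends at `spdk_app_parse_args` with "-x option must be non-negative." accel_perf implements this check in C; a comparable shell analog of the validation (an illustrative sketch, not the tool's actual parser):

```shell
# Shell analog of the argument check behind
# "-x option must be non-negative." (hypothetical helper name).
parse_xor_srcs() {
    local val=$1
    # Accept only a non-negative integer; "-1" fails the match.
    if ! [[ $val =~ ^[0-9]+$ ]]; then
        echo "-x option must be non-negative." >&2
        return 1
    fi
    echo "$val"
}

parse_xor_srcs 3   # prints 3
```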
00:05:58.241 22:51:30 -- common/autotest_common.sh@643 -- # es=1 00:05:58.241 22:51:30 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:58.241 22:51:30 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:58.241 22:51:30 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:58.241 00:05:58.241 real 0m0.036s 00:05:58.241 user 0m0.016s 00:05:58.241 sys 0m0.020s 00:05:58.241 22:51:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.241 22:51:30 -- common/autotest_common.sh@10 -- # set +x 00:05:58.241 ************************************ 00:05:58.241 END TEST accel_negative_buffers 00:05:58.241 ************************************ 00:05:58.241 Error: writing output failed: Broken pipe 00:05:58.500 22:51:30 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:58.500 22:51:30 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:05:58.500 22:51:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:58.500 22:51:30 -- common/autotest_common.sh@10 -- # set +x 00:05:58.500 ************************************ 00:05:58.500 START TEST accel_crc32c 00:05:58.500 ************************************ 00:05:58.500 22:51:30 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:58.500 22:51:30 -- accel/accel.sh@16 -- # local accel_opc 00:05:58.500 22:51:30 -- accel/accel.sh@17 -- # local accel_module 00:05:58.500 22:51:30 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:58.500 22:51:30 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:58.500 22:51:30 -- accel/accel.sh@12 -- # build_accel_config 00:05:58.500 22:51:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:58.500 22:51:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:58.500 22:51:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:58.500 22:51:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:58.500 22:51:30 -- 
accel/accel.sh@37 -- # [[ -n '' ]] 00:05:58.500 22:51:30 -- accel/accel.sh@41 -- # local IFS=, 00:05:58.500 22:51:30 -- accel/accel.sh@42 -- # jq -r . 00:05:58.500 [2024-07-24 22:51:30.706344] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:58.500 [2024-07-24 22:51:30.706414] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3035852 ] 00:05:58.501 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.501 [2024-07-24 22:51:30.777997] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.501 [2024-07-24 22:51:30.813390] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.921 22:51:31 -- accel/accel.sh@18 -- # out=' 00:05:59.921 SPDK Configuration: 00:05:59.921 Core mask: 0x1 00:05:59.921 00:05:59.921 Accel Perf Configuration: 00:05:59.921 Workload Type: crc32c 00:05:59.921 CRC-32C seed: 32 00:05:59.921 Transfer size: 4096 bytes 00:05:59.921 Vector count 1 00:05:59.921 Module: software 00:05:59.921 Queue depth: 32 00:05:59.921 Allocate depth: 32 00:05:59.921 # threads/core: 1 00:05:59.921 Run time: 1 seconds 00:05:59.921 Verify: Yes 00:05:59.921 00:05:59.921 Running for 1 seconds... 
00:05:59.921 00:05:59.921 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:59.921 ------------------------------------------------------------------------------------ 00:05:59.921 0,0 595904/s 2327 MiB/s 0 0 00:05:59.921 ==================================================================================== 00:05:59.921 Total 595904/s 2327 MiB/s 0 0' 00:05:59.921 22:51:31 -- accel/accel.sh@20 -- # IFS=: 00:05:59.921 22:51:31 -- accel/accel.sh@20 -- # read -r var val 00:05:59.921 22:51:31 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:59.921 22:51:31 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:59.921 22:51:31 -- accel/accel.sh@12 -- # build_accel_config 00:05:59.921 22:51:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:59.921 22:51:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:59.921 22:51:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:59.921 22:51:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:59.921 22:51:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:59.921 22:51:31 -- accel/accel.sh@41 -- # local IFS=, 00:05:59.921 22:51:31 -- accel/accel.sh@42 -- # jq -r . 00:05:59.921 [2024-07-24 22:51:32.005679] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
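The result row above is internally consistent: 595904 transfers/s of 4096-byte buffers is 595904 × 4096 / 1048576 ≈ 2327 MiB/s, truncated as shown. The same arithmetic in shell:

```shell
# Check the crc32c result row "0,0 595904/s 2327 MiB/s 0 0":
# transfers/s times the 4096-byte transfer size, scaled to MiB.
transfers=595904   # from the result row
xfer_bytes=4096    # "Transfer size: 4096 bytes"
echo "$(( transfers * xfer_bytes / 1048576 )) MiB/s"   # prints "2327 MiB/s"
```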
00:05:59.921 [2024-07-24 22:51:32.005758] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3036034 ] 00:05:59.921 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.921 [2024-07-24 22:51:32.076447] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.921 [2024-07-24 22:51:32.111497] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.921 22:51:32 -- accel/accel.sh@21 -- # val= 00:05:59.921 22:51:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.921 22:51:32 -- accel/accel.sh@20 -- # IFS=: 00:05:59.921 22:51:32 -- accel/accel.sh@20 -- # read -r var val 00:05:59.921 22:51:32 -- accel/accel.sh@21 -- # val= 00:05:59.921 22:51:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.921 22:51:32 -- accel/accel.sh@20 -- # IFS=: 00:05:59.921 22:51:32 -- accel/accel.sh@20 -- # read -r var val 00:05:59.921 22:51:32 -- accel/accel.sh@21 -- # val=0x1 00:05:59.921 22:51:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.921 22:51:32 -- accel/accel.sh@20 -- # IFS=: 00:05:59.921 22:51:32 -- accel/accel.sh@20 -- # read -r var val 00:05:59.921 22:51:32 -- accel/accel.sh@21 -- # val= 00:05:59.921 22:51:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.921 22:51:32 -- accel/accel.sh@20 -- # IFS=: 00:05:59.921 22:51:32 -- accel/accel.sh@20 -- # read -r var val 00:05:59.921 22:51:32 -- accel/accel.sh@21 -- # val= 00:05:59.921 22:51:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.921 22:51:32 -- accel/accel.sh@20 -- # IFS=: 00:05:59.921 22:51:32 -- accel/accel.sh@20 -- # read -r var val 00:05:59.921 22:51:32 -- accel/accel.sh@21 -- # val=crc32c 00:05:59.921 22:51:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.921 22:51:32 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:05:59.921 22:51:32 -- accel/accel.sh@20 -- # IFS=: 00:05:59.921 22:51:32 -- 
accel/accel.sh@20 -- # read -r var val 00:05:59.921 22:51:32 -- accel/accel.sh@21 -- # val=32 00:05:59.921 22:51:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.921 22:51:32 -- accel/accel.sh@20 -- # IFS=: 00:05:59.921 22:51:32 -- accel/accel.sh@20 -- # read -r var val 00:05:59.921 22:51:32 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:59.921 22:51:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.921 22:51:32 -- accel/accel.sh@20 -- # IFS=: 00:05:59.921 22:51:32 -- accel/accel.sh@20 -- # read -r var val 00:05:59.921 22:51:32 -- accel/accel.sh@21 -- # val= 00:05:59.921 22:51:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.921 22:51:32 -- accel/accel.sh@20 -- # IFS=: 00:05:59.921 22:51:32 -- accel/accel.sh@20 -- # read -r var val 00:05:59.921 22:51:32 -- accel/accel.sh@21 -- # val=software 00:05:59.921 22:51:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.921 22:51:32 -- accel/accel.sh@23 -- # accel_module=software 00:05:59.921 22:51:32 -- accel/accel.sh@20 -- # IFS=: 00:05:59.921 22:51:32 -- accel/accel.sh@20 -- # read -r var val 00:05:59.921 22:51:32 -- accel/accel.sh@21 -- # val=32 00:05:59.921 22:51:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.921 22:51:32 -- accel/accel.sh@20 -- # IFS=: 00:05:59.921 22:51:32 -- accel/accel.sh@20 -- # read -r var val 00:05:59.921 22:51:32 -- accel/accel.sh@21 -- # val=32 00:05:59.921 22:51:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.921 22:51:32 -- accel/accel.sh@20 -- # IFS=: 00:05:59.921 22:51:32 -- accel/accel.sh@20 -- # read -r var val 00:05:59.921 22:51:32 -- accel/accel.sh@21 -- # val=1 00:05:59.921 22:51:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.921 22:51:32 -- accel/accel.sh@20 -- # IFS=: 00:05:59.921 22:51:32 -- accel/accel.sh@20 -- # read -r var val 00:05:59.921 22:51:32 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:59.921 22:51:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.921 22:51:32 -- accel/accel.sh@20 -- # IFS=: 00:05:59.921 22:51:32 -- accel/accel.sh@20 
-- # read -r var val 00:05:59.921 22:51:32 -- accel/accel.sh@21 -- # val=Yes 00:05:59.921 22:51:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.921 22:51:32 -- accel/accel.sh@20 -- # IFS=: 00:05:59.921 22:51:32 -- accel/accel.sh@20 -- # read -r var val 00:05:59.921 22:51:32 -- accel/accel.sh@21 -- # val= 00:05:59.921 22:51:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.921 22:51:32 -- accel/accel.sh@20 -- # IFS=: 00:05:59.921 22:51:32 -- accel/accel.sh@20 -- # read -r var val 00:05:59.921 22:51:32 -- accel/accel.sh@21 -- # val= 00:05:59.921 22:51:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.921 22:51:32 -- accel/accel.sh@20 -- # IFS=: 00:05:59.921 22:51:32 -- accel/accel.sh@20 -- # read -r var val 00:06:00.859 22:51:33 -- accel/accel.sh@21 -- # val= 00:06:00.859 22:51:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.859 22:51:33 -- accel/accel.sh@20 -- # IFS=: 00:06:00.859 22:51:33 -- accel/accel.sh@20 -- # read -r var val 00:06:00.859 22:51:33 -- accel/accel.sh@21 -- # val= 00:06:00.859 22:51:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.859 22:51:33 -- accel/accel.sh@20 -- # IFS=: 00:06:00.859 22:51:33 -- accel/accel.sh@20 -- # read -r var val 00:06:00.859 22:51:33 -- accel/accel.sh@21 -- # val= 00:06:00.859 22:51:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.859 22:51:33 -- accel/accel.sh@20 -- # IFS=: 00:06:00.859 22:51:33 -- accel/accel.sh@20 -- # read -r var val 00:06:00.859 22:51:33 -- accel/accel.sh@21 -- # val= 00:06:00.859 22:51:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.859 22:51:33 -- accel/accel.sh@20 -- # IFS=: 00:06:00.859 22:51:33 -- accel/accel.sh@20 -- # read -r var val 00:06:00.859 22:51:33 -- accel/accel.sh@21 -- # val= 00:06:00.859 22:51:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.859 22:51:33 -- accel/accel.sh@20 -- # IFS=: 00:06:00.859 22:51:33 -- accel/accel.sh@20 -- # read -r var val 00:06:00.859 22:51:33 -- accel/accel.sh@21 -- # val= 00:06:00.859 22:51:33 -- accel/accel.sh@22 -- # 
case "$var" in 00:06:00.859 22:51:33 -- accel/accel.sh@20 -- # IFS=: 00:06:00.859 22:51:33 -- accel/accel.sh@20 -- # read -r var val 00:06:00.859 22:51:33 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:00.859 22:51:33 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:00.859 22:51:33 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:00.859 00:06:00.859 real 0m2.604s 00:06:00.859 user 0m2.359s 00:06:00.859 sys 0m0.254s 00:06:00.859 22:51:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:00.859 22:51:33 -- common/autotest_common.sh@10 -- # set +x 00:06:00.859 ************************************ 00:06:00.859 END TEST accel_crc32c 00:06:00.859 ************************************ 00:06:01.119 22:51:33 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:01.119 22:51:33 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:01.119 22:51:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:01.119 22:51:33 -- common/autotest_common.sh@10 -- # set +x 00:06:01.119 ************************************ 00:06:01.119 START TEST accel_crc32c_C2 00:06:01.119 ************************************ 00:06:01.119 22:51:33 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:01.119 22:51:33 -- accel/accel.sh@16 -- # local accel_opc 00:06:01.119 22:51:33 -- accel/accel.sh@17 -- # local accel_module 00:06:01.119 22:51:33 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:01.119 22:51:33 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:01.119 22:51:33 -- accel/accel.sh@12 -- # build_accel_config 00:06:01.119 22:51:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:01.119 22:51:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:01.119 22:51:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:01.119 22:51:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:01.119 22:51:33 -- 
accel/accel.sh@37 -- # [[ -n '' ]] 00:06:01.119 22:51:33 -- accel/accel.sh@41 -- # local IFS=, 00:06:01.119 22:51:33 -- accel/accel.sh@42 -- # jq -r . 00:06:01.119 [2024-07-24 22:51:33.360688] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:01.119 [2024-07-24 22:51:33.360762] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3036231 ] 00:06:01.119 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.119 [2024-07-24 22:51:33.431928] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.119 [2024-07-24 22:51:33.467709] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.496 22:51:34 -- accel/accel.sh@18 -- # out=' 00:06:02.496 SPDK Configuration: 00:06:02.496 Core mask: 0x1 00:06:02.496 00:06:02.496 Accel Perf Configuration: 00:06:02.496 Workload Type: crc32c 00:06:02.496 CRC-32C seed: 0 00:06:02.496 Transfer size: 4096 bytes 00:06:02.496 Vector count 2 00:06:02.496 Module: software 00:06:02.496 Queue depth: 32 00:06:02.496 Allocate depth: 32 00:06:02.496 # threads/core: 1 00:06:02.496 Run time: 1 seconds 00:06:02.496 Verify: Yes 00:06:02.496 00:06:02.496 Running for 1 seconds... 
00:06:02.496 00:06:02.496 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:02.496 ------------------------------------------------------------------------------------ 00:06:02.496 0,0 472768/s 3693 MiB/s 0 0 00:06:02.496 ==================================================================================== 00:06:02.496 Total 472768/s 3693 MiB/s 0 0' 00:06:02.496 22:51:34 -- accel/accel.sh@20 -- # IFS=: 00:06:02.496 22:51:34 -- accel/accel.sh@20 -- # read -r var val 00:06:02.496 22:51:34 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:02.496 22:51:34 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:02.496 22:51:34 -- accel/accel.sh@12 -- # build_accel_config 00:06:02.496 22:51:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:02.496 22:51:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:02.496 22:51:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:02.496 22:51:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:02.496 22:51:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:02.496 22:51:34 -- accel/accel.sh@41 -- # local IFS=, 00:06:02.496 22:51:34 -- accel/accel.sh@42 -- # jq -r . 00:06:02.496 [2024-07-24 22:51:34.657323] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:06:02.496 [2024-07-24 22:51:34.657390] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3036424 ] 00:06:02.496 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.496 [2024-07-24 22:51:34.727739] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.496 [2024-07-24 22:51:34.762696] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.496 22:51:34 -- accel/accel.sh@21 -- # val= 00:06:02.496 22:51:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.496 22:51:34 -- accel/accel.sh@20 -- # IFS=: 00:06:02.496 22:51:34 -- accel/accel.sh@20 -- # read -r var val 00:06:02.496 22:51:34 -- accel/accel.sh@21 -- # val= 00:06:02.496 22:51:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.496 22:51:34 -- accel/accel.sh@20 -- # IFS=: 00:06:02.496 22:51:34 -- accel/accel.sh@20 -- # read -r var val 00:06:02.496 22:51:34 -- accel/accel.sh@21 -- # val=0x1 00:06:02.496 22:51:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.496 22:51:34 -- accel/accel.sh@20 -- # IFS=: 00:06:02.496 22:51:34 -- accel/accel.sh@20 -- # read -r var val 00:06:02.496 22:51:34 -- accel/accel.sh@21 -- # val= 00:06:02.496 22:51:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.496 22:51:34 -- accel/accel.sh@20 -- # IFS=: 00:06:02.496 22:51:34 -- accel/accel.sh@20 -- # read -r var val 00:06:02.496 22:51:34 -- accel/accel.sh@21 -- # val= 00:06:02.496 22:51:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.496 22:51:34 -- accel/accel.sh@20 -- # IFS=: 00:06:02.496 22:51:34 -- accel/accel.sh@20 -- # read -r var val 00:06:02.496 22:51:34 -- accel/accel.sh@21 -- # val=crc32c 00:06:02.496 22:51:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.496 22:51:34 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:02.496 22:51:34 -- accel/accel.sh@20 -- # IFS=: 00:06:02.496 22:51:34 -- 
accel/accel.sh@20 -- # read -r var val 00:06:02.496 22:51:34 -- accel/accel.sh@21 -- # val=0 00:06:02.496 22:51:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.496 22:51:34 -- accel/accel.sh@20 -- # IFS=: 00:06:02.496 22:51:34 -- accel/accel.sh@20 -- # read -r var val 00:06:02.496 22:51:34 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:02.496 22:51:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.496 22:51:34 -- accel/accel.sh@20 -- # IFS=: 00:06:02.496 22:51:34 -- accel/accel.sh@20 -- # read -r var val 00:06:02.496 22:51:34 -- accel/accel.sh@21 -- # val= 00:06:02.496 22:51:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.496 22:51:34 -- accel/accel.sh@20 -- # IFS=: 00:06:02.496 22:51:34 -- accel/accel.sh@20 -- # read -r var val 00:06:02.496 22:51:34 -- accel/accel.sh@21 -- # val=software 00:06:02.496 22:51:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.496 22:51:34 -- accel/accel.sh@23 -- # accel_module=software 00:06:02.496 22:51:34 -- accel/accel.sh@20 -- # IFS=: 00:06:02.496 22:51:34 -- accel/accel.sh@20 -- # read -r var val 00:06:02.496 22:51:34 -- accel/accel.sh@21 -- # val=32 00:06:02.496 22:51:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.496 22:51:34 -- accel/accel.sh@20 -- # IFS=: 00:06:02.496 22:51:34 -- accel/accel.sh@20 -- # read -r var val 00:06:02.496 22:51:34 -- accel/accel.sh@21 -- # val=32 00:06:02.496 22:51:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.496 22:51:34 -- accel/accel.sh@20 -- # IFS=: 00:06:02.496 22:51:34 -- accel/accel.sh@20 -- # read -r var val 00:06:02.496 22:51:34 -- accel/accel.sh@21 -- # val=1 00:06:02.496 22:51:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.496 22:51:34 -- accel/accel.sh@20 -- # IFS=: 00:06:02.496 22:51:34 -- accel/accel.sh@20 -- # read -r var val 00:06:02.496 22:51:34 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:02.496 22:51:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.496 22:51:34 -- accel/accel.sh@20 -- # IFS=: 00:06:02.496 22:51:34 -- accel/accel.sh@20 -- 
# read -r var val 00:06:02.496 22:51:34 -- accel/accel.sh@21 -- # val=Yes 00:06:02.496 22:51:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.496 22:51:34 -- accel/accel.sh@20 -- # IFS=: 00:06:02.496 22:51:34 -- accel/accel.sh@20 -- # read -r var val 00:06:02.496 22:51:34 -- accel/accel.sh@21 -- # val= 00:06:02.496 22:51:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.496 22:51:34 -- accel/accel.sh@20 -- # IFS=: 00:06:02.496 22:51:34 -- accel/accel.sh@20 -- # read -r var val 00:06:02.496 22:51:34 -- accel/accel.sh@21 -- # val= 00:06:02.496 22:51:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.496 22:51:34 -- accel/accel.sh@20 -- # IFS=: 00:06:02.496 22:51:34 -- accel/accel.sh@20 -- # read -r var val 00:06:03.875 22:51:35 -- accel/accel.sh@21 -- # val= 00:06:03.875 22:51:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.875 22:51:35 -- accel/accel.sh@20 -- # IFS=: 00:06:03.875 22:51:35 -- accel/accel.sh@20 -- # read -r var val 00:06:03.875 22:51:35 -- accel/accel.sh@21 -- # val= 00:06:03.875 22:51:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.875 22:51:35 -- accel/accel.sh@20 -- # IFS=: 00:06:03.875 22:51:35 -- accel/accel.sh@20 -- # read -r var val 00:06:03.875 22:51:35 -- accel/accel.sh@21 -- # val= 00:06:03.875 22:51:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.875 22:51:35 -- accel/accel.sh@20 -- # IFS=: 00:06:03.875 22:51:35 -- accel/accel.sh@20 -- # read -r var val 00:06:03.875 22:51:35 -- accel/accel.sh@21 -- # val= 00:06:03.875 22:51:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.875 22:51:35 -- accel/accel.sh@20 -- # IFS=: 00:06:03.875 22:51:35 -- accel/accel.sh@20 -- # read -r var val 00:06:03.875 22:51:35 -- accel/accel.sh@21 -- # val= 00:06:03.875 22:51:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.875 22:51:35 -- accel/accel.sh@20 -- # IFS=: 00:06:03.875 22:51:35 -- accel/accel.sh@20 -- # read -r var val 00:06:03.875 22:51:35 -- accel/accel.sh@21 -- # val= 00:06:03.875 22:51:35 -- accel/accel.sh@22 -- # case 
"$var" in 00:06:03.875 22:51:35 -- accel/accel.sh@20 -- # IFS=: 00:06:03.875 22:51:35 -- accel/accel.sh@20 -- # read -r var val 00:06:03.875 22:51:35 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:03.875 22:51:35 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:03.875 22:51:35 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:03.875 00:06:03.875 real 0m2.602s 00:06:03.875 user 0m2.358s 00:06:03.875 sys 0m0.253s 00:06:03.875 22:51:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.875 22:51:35 -- common/autotest_common.sh@10 -- # set +x 00:06:03.875 ************************************ 00:06:03.875 END TEST accel_crc32c_C2 00:06:03.875 ************************************ 00:06:03.875 22:51:35 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:03.875 22:51:35 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:03.875 22:51:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:03.875 22:51:35 -- common/autotest_common.sh@10 -- # set +x 00:06:03.875 ************************************ 00:06:03.875 START TEST accel_copy 00:06:03.875 ************************************ 00:06:03.875 22:51:35 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy -y 00:06:03.875 22:51:35 -- accel/accel.sh@16 -- # local accel_opc 00:06:03.875 22:51:35 -- accel/accel.sh@17 -- # local accel_module 00:06:03.875 22:51:35 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:06:03.875 22:51:35 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:03.875 22:51:35 -- accel/accel.sh@12 -- # build_accel_config 00:06:03.875 22:51:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:03.875 22:51:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:03.875 22:51:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:03.875 22:51:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:03.875 22:51:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 
00:06:03.875 22:51:35 -- accel/accel.sh@41 -- # local IFS=, 00:06:03.875 22:51:35 -- accel/accel.sh@42 -- # jq -r . 00:06:03.875 [2024-07-24 22:51:36.008919] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:03.875 [2024-07-24 22:51:36.009002] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3036705 ] 00:06:03.875 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.875 [2024-07-24 22:51:36.078564] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.875 [2024-07-24 22:51:36.113649] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.255 22:51:37 -- accel/accel.sh@18 -- # out=' 00:06:05.255 SPDK Configuration: 00:06:05.255 Core mask: 0x1 00:06:05.255 00:06:05.255 Accel Perf Configuration: 00:06:05.255 Workload Type: copy 00:06:05.255 Transfer size: 4096 bytes 00:06:05.255 Vector count 1 00:06:05.255 Module: software 00:06:05.255 Queue depth: 32 00:06:05.255 Allocate depth: 32 00:06:05.255 # threads/core: 1 00:06:05.255 Run time: 1 seconds 00:06:05.255 Verify: Yes 00:06:05.255 00:06:05.255 Running for 1 seconds... 
00:06:05.255 00:06:05.255 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:05.255 ------------------------------------------------------------------------------------ 00:06:05.255 0,0 441536/s 1724 MiB/s 0 0 00:06:05.255 ==================================================================================== 00:06:05.255 Total 441536/s 1724 MiB/s 0 0' 00:06:05.255 22:51:37 -- accel/accel.sh@20 -- # IFS=: 00:06:05.255 22:51:37 -- accel/accel.sh@20 -- # read -r var val 00:06:05.255 22:51:37 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:05.255 22:51:37 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:05.255 22:51:37 -- accel/accel.sh@12 -- # build_accel_config 00:06:05.255 22:51:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:05.255 22:51:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:05.255 22:51:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:05.255 22:51:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:05.255 22:51:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:05.255 22:51:37 -- accel/accel.sh@41 -- # local IFS=, 00:06:05.255 22:51:37 -- accel/accel.sh@42 -- # jq -r . 00:06:05.255 [2024-07-24 22:51:37.305777] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:06:05.255 [2024-07-24 22:51:37.305844] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3036978 ] 00:06:05.255 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.255 [2024-07-24 22:51:37.375763] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.255 [2024-07-24 22:51:37.409860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.255 22:51:37 -- accel/accel.sh@21 -- # val= 00:06:05.255 22:51:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.255 22:51:37 -- accel/accel.sh@20 -- # IFS=: 00:06:05.255 22:51:37 -- accel/accel.sh@20 -- # read -r var val 00:06:05.255 22:51:37 -- accel/accel.sh@21 -- # val= 00:06:05.255 22:51:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.255 22:51:37 -- accel/accel.sh@20 -- # IFS=: 00:06:05.255 22:51:37 -- accel/accel.sh@20 -- # read -r var val 00:06:05.255 22:51:37 -- accel/accel.sh@21 -- # val=0x1 00:06:05.255 22:51:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.255 22:51:37 -- accel/accel.sh@20 -- # IFS=: 00:06:05.255 22:51:37 -- accel/accel.sh@20 -- # read -r var val 00:06:05.255 22:51:37 -- accel/accel.sh@21 -- # val= 00:06:05.255 22:51:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.255 22:51:37 -- accel/accel.sh@20 -- # IFS=: 00:06:05.255 22:51:37 -- accel/accel.sh@20 -- # read -r var val 00:06:05.255 22:51:37 -- accel/accel.sh@21 -- # val= 00:06:05.255 22:51:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.255 22:51:37 -- accel/accel.sh@20 -- # IFS=: 00:06:05.255 22:51:37 -- accel/accel.sh@20 -- # read -r var val 00:06:05.255 22:51:37 -- accel/accel.sh@21 -- # val=copy 00:06:05.255 22:51:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.255 22:51:37 -- accel/accel.sh@24 -- # accel_opc=copy 00:06:05.255 22:51:37 -- accel/accel.sh@20 -- # IFS=: 00:06:05.255 22:51:37 -- 
accel/accel.sh@20 -- # read -r var val 00:06:05.255 22:51:37 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:05.255 22:51:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.255 22:51:37 -- accel/accel.sh@20 -- # IFS=: 00:06:05.255 22:51:37 -- accel/accel.sh@20 -- # read -r var val 00:06:05.255 22:51:37 -- accel/accel.sh@21 -- # val= 00:06:05.255 22:51:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.255 22:51:37 -- accel/accel.sh@20 -- # IFS=: 00:06:05.255 22:51:37 -- accel/accel.sh@20 -- # read -r var val 00:06:05.255 22:51:37 -- accel/accel.sh@21 -- # val=software 00:06:05.255 22:51:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.255 22:51:37 -- accel/accel.sh@23 -- # accel_module=software 00:06:05.255 22:51:37 -- accel/accel.sh@20 -- # IFS=: 00:06:05.255 22:51:37 -- accel/accel.sh@20 -- # read -r var val 00:06:05.255 22:51:37 -- accel/accel.sh@21 -- # val=32 00:06:05.255 22:51:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.255 22:51:37 -- accel/accel.sh@20 -- # IFS=: 00:06:05.255 22:51:37 -- accel/accel.sh@20 -- # read -r var val 00:06:05.255 22:51:37 -- accel/accel.sh@21 -- # val=32 00:06:05.255 22:51:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.255 22:51:37 -- accel/accel.sh@20 -- # IFS=: 00:06:05.255 22:51:37 -- accel/accel.sh@20 -- # read -r var val 00:06:05.255 22:51:37 -- accel/accel.sh@21 -- # val=1 00:06:05.255 22:51:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.255 22:51:37 -- accel/accel.sh@20 -- # IFS=: 00:06:05.255 22:51:37 -- accel/accel.sh@20 -- # read -r var val 00:06:05.255 22:51:37 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:05.255 22:51:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.255 22:51:37 -- accel/accel.sh@20 -- # IFS=: 00:06:05.255 22:51:37 -- accel/accel.sh@20 -- # read -r var val 00:06:05.255 22:51:37 -- accel/accel.sh@21 -- # val=Yes 00:06:05.255 22:51:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.255 22:51:37 -- accel/accel.sh@20 -- # IFS=: 00:06:05.255 22:51:37 -- accel/accel.sh@20 
-- # read -r var val 00:06:05.255 22:51:37 -- accel/accel.sh@21 -- # val= 00:06:05.255 22:51:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.255 22:51:37 -- accel/accel.sh@20 -- # IFS=: 00:06:05.255 22:51:37 -- accel/accel.sh@20 -- # read -r var val 00:06:05.255 22:51:37 -- accel/accel.sh@21 -- # val= 00:06:05.255 22:51:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.255 22:51:37 -- accel/accel.sh@20 -- # IFS=: 00:06:05.255 22:51:37 -- accel/accel.sh@20 -- # read -r var val 00:06:06.194 22:51:38 -- accel/accel.sh@21 -- # val= 00:06:06.194 22:51:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.194 22:51:38 -- accel/accel.sh@20 -- # IFS=: 00:06:06.194 22:51:38 -- accel/accel.sh@20 -- # read -r var val 00:06:06.194 22:51:38 -- accel/accel.sh@21 -- # val= 00:06:06.194 22:51:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.194 22:51:38 -- accel/accel.sh@20 -- # IFS=: 00:06:06.194 22:51:38 -- accel/accel.sh@20 -- # read -r var val 00:06:06.194 22:51:38 -- accel/accel.sh@21 -- # val= 00:06:06.194 22:51:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.194 22:51:38 -- accel/accel.sh@20 -- # IFS=: 00:06:06.194 22:51:38 -- accel/accel.sh@20 -- # read -r var val 00:06:06.194 22:51:38 -- accel/accel.sh@21 -- # val= 00:06:06.194 22:51:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.194 22:51:38 -- accel/accel.sh@20 -- # IFS=: 00:06:06.194 22:51:38 -- accel/accel.sh@20 -- # read -r var val 00:06:06.194 22:51:38 -- accel/accel.sh@21 -- # val= 00:06:06.194 22:51:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.194 22:51:38 -- accel/accel.sh@20 -- # IFS=: 00:06:06.194 22:51:38 -- accel/accel.sh@20 -- # read -r var val 00:06:06.194 22:51:38 -- accel/accel.sh@21 -- # val= 00:06:06.194 22:51:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.194 22:51:38 -- accel/accel.sh@20 -- # IFS=: 00:06:06.194 22:51:38 -- accel/accel.sh@20 -- # read -r var val 00:06:06.194 22:51:38 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:06.194 22:51:38 -- 
accel/accel.sh@28 -- # [[ -n copy ]] 00:06:06.194 22:51:38 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:06.194 00:06:06.194 real 0m2.598s 00:06:06.194 user 0m2.361s 00:06:06.194 sys 0m0.246s 00:06:06.194 22:51:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.194 22:51:38 -- common/autotest_common.sh@10 -- # set +x 00:06:06.194 ************************************ 00:06:06.194 END TEST accel_copy 00:06:06.194 ************************************ 00:06:06.194 22:51:38 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:06.194 22:51:38 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:06:06.194 22:51:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:06.194 22:51:38 -- common/autotest_common.sh@10 -- # set +x 00:06:06.453 ************************************ 00:06:06.453 START TEST accel_fill 00:06:06.453 ************************************ 00:06:06.453 22:51:38 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:06.453 22:51:38 -- accel/accel.sh@16 -- # local accel_opc 00:06:06.453 22:51:38 -- accel/accel.sh@17 -- # local accel_module 00:06:06.453 22:51:38 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:06.453 22:51:38 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:06.453 22:51:38 -- accel/accel.sh@12 -- # build_accel_config 00:06:06.453 22:51:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:06.453 22:51:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:06.453 22:51:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:06.453 22:51:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:06.453 22:51:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:06.453 22:51:38 -- accel/accel.sh@41 -- # local IFS=, 00:06:06.453 22:51:38 -- accel/accel.sh@42 -- # jq -r . 
00:06:06.453 [2024-07-24 22:51:38.658763] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:06.453 [2024-07-24 22:51:38.658855] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3037257 ] 00:06:06.453 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.453 [2024-07-24 22:51:38.728787] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.453 [2024-07-24 22:51:38.763734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.831 22:51:39 -- accel/accel.sh@18 -- # out=' 00:06:07.831 SPDK Configuration: 00:06:07.831 Core mask: 0x1 00:06:07.831 00:06:07.831 Accel Perf Configuration: 00:06:07.831 Workload Type: fill 00:06:07.831 Fill pattern: 0x80 00:06:07.831 Transfer size: 4096 bytes 00:06:07.831 Vector count 1 00:06:07.831 Module: software 00:06:07.831 Queue depth: 64 00:06:07.831 Allocate depth: 64 00:06:07.831 # threads/core: 1 00:06:07.831 Run time: 1 seconds 00:06:07.831 Verify: Yes 00:06:07.831 00:06:07.831 Running for 1 seconds... 
00:06:07.831 00:06:07.831 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:07.831 ------------------------------------------------------------------------------------ 00:06:07.831 0,0 704320/s 2751 MiB/s 0 0 00:06:07.831 ==================================================================================== 00:06:07.831 Total 704320/s 2751 MiB/s 0 0' 00:06:07.831 22:51:39 -- accel/accel.sh@20 -- # IFS=: 00:06:07.831 22:51:39 -- accel/accel.sh@20 -- # read -r var val 00:06:07.831 22:51:39 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:07.831 22:51:39 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:07.831 22:51:39 -- accel/accel.sh@12 -- # build_accel_config 00:06:07.831 22:51:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:07.831 22:51:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:07.831 22:51:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:07.831 22:51:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:07.831 22:51:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:07.831 22:51:39 -- accel/accel.sh@41 -- # local IFS=, 00:06:07.831 22:51:39 -- accel/accel.sh@42 -- # jq -r . 00:06:07.831 [2024-07-24 22:51:39.956977] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:06:07.831 [2024-07-24 22:51:39.957061] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3037521 ] 00:06:07.831 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.831 [2024-07-24 22:51:40.031720] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.831 [2024-07-24 22:51:40.070378] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.831 22:51:40 -- accel/accel.sh@21 -- # val= 00:06:07.831 22:51:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.831 22:51:40 -- accel/accel.sh@20 -- # IFS=: 00:06:07.831 22:51:40 -- accel/accel.sh@20 -- # read -r var val 00:06:07.831 22:51:40 -- accel/accel.sh@21 -- # val= 00:06:07.831 22:51:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.831 22:51:40 -- accel/accel.sh@20 -- # IFS=: 00:06:07.831 22:51:40 -- accel/accel.sh@20 -- # read -r var val 00:06:07.831 22:51:40 -- accel/accel.sh@21 -- # val=0x1 00:06:07.831 22:51:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.831 22:51:40 -- accel/accel.sh@20 -- # IFS=: 00:06:07.831 22:51:40 -- accel/accel.sh@20 -- # read -r var val 00:06:07.831 22:51:40 -- accel/accel.sh@21 -- # val= 00:06:07.831 22:51:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.831 22:51:40 -- accel/accel.sh@20 -- # IFS=: 00:06:07.831 22:51:40 -- accel/accel.sh@20 -- # read -r var val 00:06:07.831 22:51:40 -- accel/accel.sh@21 -- # val= 00:06:07.831 22:51:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.831 22:51:40 -- accel/accel.sh@20 -- # IFS=: 00:06:07.831 22:51:40 -- accel/accel.sh@20 -- # read -r var val 00:06:07.831 22:51:40 -- accel/accel.sh@21 -- # val=fill 00:06:07.831 22:51:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.831 22:51:40 -- accel/accel.sh@24 -- # accel_opc=fill 00:06:07.831 22:51:40 -- accel/accel.sh@20 -- # IFS=: 00:06:07.831 22:51:40 -- 
accel/accel.sh@20 -- # read -r var val 00:06:07.831 22:51:40 -- accel/accel.sh@21 -- # val=0x80 00:06:07.831 22:51:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.831 22:51:40 -- accel/accel.sh@20 -- # IFS=: 00:06:07.831 22:51:40 -- accel/accel.sh@20 -- # read -r var val 00:06:07.831 22:51:40 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:07.831 22:51:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.831 22:51:40 -- accel/accel.sh@20 -- # IFS=: 00:06:07.831 22:51:40 -- accel/accel.sh@20 -- # read -r var val 00:06:07.831 22:51:40 -- accel/accel.sh@21 -- # val= 00:06:07.831 22:51:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.831 22:51:40 -- accel/accel.sh@20 -- # IFS=: 00:06:07.831 22:51:40 -- accel/accel.sh@20 -- # read -r var val 00:06:07.831 22:51:40 -- accel/accel.sh@21 -- # val=software 00:06:07.831 22:51:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.831 22:51:40 -- accel/accel.sh@23 -- # accel_module=software 00:06:07.831 22:51:40 -- accel/accel.sh@20 -- # IFS=: 00:06:07.831 22:51:40 -- accel/accel.sh@20 -- # read -r var val 00:06:07.831 22:51:40 -- accel/accel.sh@21 -- # val=64 00:06:07.831 22:51:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.831 22:51:40 -- accel/accel.sh@20 -- # IFS=: 00:06:07.831 22:51:40 -- accel/accel.sh@20 -- # read -r var val 00:06:07.831 22:51:40 -- accel/accel.sh@21 -- # val=64 00:06:07.831 22:51:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.831 22:51:40 -- accel/accel.sh@20 -- # IFS=: 00:06:07.831 22:51:40 -- accel/accel.sh@20 -- # read -r var val 00:06:07.831 22:51:40 -- accel/accel.sh@21 -- # val=1 00:06:07.831 22:51:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.831 22:51:40 -- accel/accel.sh@20 -- # IFS=: 00:06:07.831 22:51:40 -- accel/accel.sh@20 -- # read -r var val 00:06:07.831 22:51:40 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:07.831 22:51:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.831 22:51:40 -- accel/accel.sh@20 -- # IFS=: 00:06:07.831 22:51:40 -- accel/accel.sh@20 
-- # read -r var val 00:06:07.831 22:51:40 -- accel/accel.sh@21 -- # val=Yes 00:06:07.831 22:51:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.831 22:51:40 -- accel/accel.sh@20 -- # IFS=: 00:06:07.831 22:51:40 -- accel/accel.sh@20 -- # read -r var val 00:06:07.831 22:51:40 -- accel/accel.sh@21 -- # val= 00:06:07.831 22:51:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.831 22:51:40 -- accel/accel.sh@20 -- # IFS=: 00:06:07.831 22:51:40 -- accel/accel.sh@20 -- # read -r var val 00:06:07.831 22:51:40 -- accel/accel.sh@21 -- # val= 00:06:07.831 22:51:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.831 22:51:40 -- accel/accel.sh@20 -- # IFS=: 00:06:07.831 22:51:40 -- accel/accel.sh@20 -- # read -r var val 00:06:09.210 22:51:41 -- accel/accel.sh@21 -- # val= 00:06:09.210 22:51:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.210 22:51:41 -- accel/accel.sh@20 -- # IFS=: 00:06:09.210 22:51:41 -- accel/accel.sh@20 -- # read -r var val 00:06:09.210 22:51:41 -- accel/accel.sh@21 -- # val= 00:06:09.210 22:51:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.210 22:51:41 -- accel/accel.sh@20 -- # IFS=: 00:06:09.210 22:51:41 -- accel/accel.sh@20 -- # read -r var val 00:06:09.210 22:51:41 -- accel/accel.sh@21 -- # val= 00:06:09.210 22:51:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.210 22:51:41 -- accel/accel.sh@20 -- # IFS=: 00:06:09.210 22:51:41 -- accel/accel.sh@20 -- # read -r var val 00:06:09.210 22:51:41 -- accel/accel.sh@21 -- # val= 00:06:09.210 22:51:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.210 22:51:41 -- accel/accel.sh@20 -- # IFS=: 00:06:09.210 22:51:41 -- accel/accel.sh@20 -- # read -r var val 00:06:09.210 22:51:41 -- accel/accel.sh@21 -- # val= 00:06:09.210 22:51:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.210 22:51:41 -- accel/accel.sh@20 -- # IFS=: 00:06:09.210 22:51:41 -- accel/accel.sh@20 -- # read -r var val 00:06:09.210 22:51:41 -- accel/accel.sh@21 -- # val= 00:06:09.210 22:51:41 -- accel/accel.sh@22 -- # 
case "$var" in 00:06:09.210 22:51:41 -- accel/accel.sh@20 -- # IFS=: 00:06:09.210 22:51:41 -- accel/accel.sh@20 -- # read -r var val 00:06:09.210 22:51:41 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:09.210 22:51:41 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:06:09.210 22:51:41 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:09.210 00:06:09.210 real 0m2.614s 00:06:09.210 user 0m2.359s 00:06:09.210 sys 0m0.262s 00:06:09.210 22:51:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:09.210 22:51:41 -- common/autotest_common.sh@10 -- # set +x 00:06:09.210 ************************************ 00:06:09.210 END TEST accel_fill 00:06:09.210 ************************************ 00:06:09.210 22:51:41 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:09.210 22:51:41 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:09.210 22:51:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:09.210 22:51:41 -- common/autotest_common.sh@10 -- # set +x 00:06:09.210 ************************************ 00:06:09.210 START TEST accel_copy_crc32c 00:06:09.210 ************************************ 00:06:09.210 22:51:41 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y 00:06:09.210 22:51:41 -- accel/accel.sh@16 -- # local accel_opc 00:06:09.210 22:51:41 -- accel/accel.sh@17 -- # local accel_module 00:06:09.210 22:51:41 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:09.210 22:51:41 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:09.210 22:51:41 -- accel/accel.sh@12 -- # build_accel_config 00:06:09.210 22:51:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:09.210 22:51:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:09.210 22:51:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:09.210 22:51:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:09.210 22:51:41 -- 
accel/accel.sh@37 -- # [[ -n '' ]] 00:06:09.210 22:51:41 -- accel/accel.sh@41 -- # local IFS=, 00:06:09.210 22:51:41 -- accel/accel.sh@42 -- # jq -r . 00:06:09.210 [2024-07-24 22:51:41.313280] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:09.210 [2024-07-24 22:51:41.313363] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3037764 ] 00:06:09.210 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.210 [2024-07-24 22:51:41.385786] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.210 [2024-07-24 22:51:41.421288] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.591 22:51:42 -- accel/accel.sh@18 -- # out=' 00:06:10.591 SPDK Configuration: 00:06:10.591 Core mask: 0x1 00:06:10.591 00:06:10.591 Accel Perf Configuration: 00:06:10.591 Workload Type: copy_crc32c 00:06:10.591 CRC-32C seed: 0 00:06:10.591 Vector size: 4096 bytes 00:06:10.591 Transfer size: 4096 bytes 00:06:10.591 Vector count 1 00:06:10.591 Module: software 00:06:10.591 Queue depth: 32 00:06:10.591 Allocate depth: 32 00:06:10.591 # threads/core: 1 00:06:10.591 Run time: 1 seconds 00:06:10.591 Verify: Yes 00:06:10.591 00:06:10.591 Running for 1 seconds... 
00:06:10.591 00:06:10.591 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:10.591 ------------------------------------------------------------------------------------ 00:06:10.591 0,0 347680/s 1358 MiB/s 0 0 00:06:10.591 ==================================================================================== 00:06:10.591 Total 347680/s 1358 MiB/s 0 0' 00:06:10.591 22:51:42 -- accel/accel.sh@20 -- # IFS=: 00:06:10.591 22:51:42 -- accel/accel.sh@20 -- # read -r var val 00:06:10.591 22:51:42 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:10.591 22:51:42 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:10.591 22:51:42 -- accel/accel.sh@12 -- # build_accel_config 00:06:10.591 22:51:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:10.591 22:51:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:10.591 22:51:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:10.591 22:51:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:10.591 22:51:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:10.591 22:51:42 -- accel/accel.sh@41 -- # local IFS=, 00:06:10.591 22:51:42 -- accel/accel.sh@42 -- # jq -r . 00:06:10.591 [2024-07-24 22:51:42.612868] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:06:10.591 [2024-07-24 22:51:42.612934] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3037912 ] 00:06:10.591 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.591 [2024-07-24 22:51:42.683412] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.591 [2024-07-24 22:51:42.717897] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.591 22:51:42 -- accel/accel.sh@21 -- # val= 00:06:10.591 22:51:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.591 22:51:42 -- accel/accel.sh@20 -- # IFS=: 00:06:10.591 22:51:42 -- accel/accel.sh@20 -- # read -r var val 00:06:10.591 22:51:42 -- accel/accel.sh@21 -- # val= 00:06:10.591 22:51:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.591 22:51:42 -- accel/accel.sh@20 -- # IFS=: 00:06:10.591 22:51:42 -- accel/accel.sh@20 -- # read -r var val 00:06:10.591 22:51:42 -- accel/accel.sh@21 -- # val=0x1 00:06:10.591 22:51:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.591 22:51:42 -- accel/accel.sh@20 -- # IFS=: 00:06:10.591 22:51:42 -- accel/accel.sh@20 -- # read -r var val 00:06:10.591 22:51:42 -- accel/accel.sh@21 -- # val= 00:06:10.591 22:51:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.591 22:51:42 -- accel/accel.sh@20 -- # IFS=: 00:06:10.591 22:51:42 -- accel/accel.sh@20 -- # read -r var val 00:06:10.591 22:51:42 -- accel/accel.sh@21 -- # val= 00:06:10.591 22:51:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.591 22:51:42 -- accel/accel.sh@20 -- # IFS=: 00:06:10.591 22:51:42 -- accel/accel.sh@20 -- # read -r var val 00:06:10.591 22:51:42 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:10.591 22:51:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.591 22:51:42 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:10.591 22:51:42 -- accel/accel.sh@20 -- # IFS=: 00:06:10.591 22:51:42 -- 
accel/accel.sh@20 -- # read -r var val 00:06:10.591 22:51:42 -- accel/accel.sh@21 -- # val=0 00:06:10.591 22:51:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.591 22:51:42 -- accel/accel.sh@20 -- # IFS=: 00:06:10.591 22:51:42 -- accel/accel.sh@20 -- # read -r var val 00:06:10.591 22:51:42 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:10.591 22:51:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.591 22:51:42 -- accel/accel.sh@20 -- # IFS=: 00:06:10.591 22:51:42 -- accel/accel.sh@20 -- # read -r var val 00:06:10.591 22:51:42 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:10.591 22:51:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.591 22:51:42 -- accel/accel.sh@20 -- # IFS=: 00:06:10.591 22:51:42 -- accel/accel.sh@20 -- # read -r var val 00:06:10.591 22:51:42 -- accel/accel.sh@21 -- # val= 00:06:10.591 22:51:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.591 22:51:42 -- accel/accel.sh@20 -- # IFS=: 00:06:10.591 22:51:42 -- accel/accel.sh@20 -- # read -r var val 00:06:10.591 22:51:42 -- accel/accel.sh@21 -- # val=software 00:06:10.591 22:51:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.591 22:51:42 -- accel/accel.sh@23 -- # accel_module=software 00:06:10.591 22:51:42 -- accel/accel.sh@20 -- # IFS=: 00:06:10.591 22:51:42 -- accel/accel.sh@20 -- # read -r var val 00:06:10.591 22:51:42 -- accel/accel.sh@21 -- # val=32 00:06:10.591 22:51:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.591 22:51:42 -- accel/accel.sh@20 -- # IFS=: 00:06:10.591 22:51:42 -- accel/accel.sh@20 -- # read -r var val 00:06:10.591 22:51:42 -- accel/accel.sh@21 -- # val=32 00:06:10.591 22:51:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.591 22:51:42 -- accel/accel.sh@20 -- # IFS=: 00:06:10.591 22:51:42 -- accel/accel.sh@20 -- # read -r var val 00:06:10.591 22:51:42 -- accel/accel.sh@21 -- # val=1 00:06:10.591 22:51:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.591 22:51:42 -- accel/accel.sh@20 -- # IFS=: 00:06:10.591 22:51:42 -- accel/accel.sh@20 
-- # read -r var val 00:06:10.591 22:51:42 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:10.591 22:51:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.591 22:51:42 -- accel/accel.sh@20 -- # IFS=: 00:06:10.591 22:51:42 -- accel/accel.sh@20 -- # read -r var val 00:06:10.591 22:51:42 -- accel/accel.sh@21 -- # val=Yes 00:06:10.591 22:51:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.591 22:51:42 -- accel/accel.sh@20 -- # IFS=: 00:06:10.591 22:51:42 -- accel/accel.sh@20 -- # read -r var val 00:06:10.591 22:51:42 -- accel/accel.sh@21 -- # val= 00:06:10.591 22:51:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.591 22:51:42 -- accel/accel.sh@20 -- # IFS=: 00:06:10.591 22:51:42 -- accel/accel.sh@20 -- # read -r var val 00:06:10.591 22:51:42 -- accel/accel.sh@21 -- # val= 00:06:10.591 22:51:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.591 22:51:42 -- accel/accel.sh@20 -- # IFS=: 00:06:10.591 22:51:42 -- accel/accel.sh@20 -- # read -r var val 00:06:11.532 22:51:43 -- accel/accel.sh@21 -- # val= 00:06:11.532 22:51:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.532 22:51:43 -- accel/accel.sh@20 -- # IFS=: 00:06:11.532 22:51:43 -- accel/accel.sh@20 -- # read -r var val 00:06:11.532 22:51:43 -- accel/accel.sh@21 -- # val= 00:06:11.532 22:51:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.532 22:51:43 -- accel/accel.sh@20 -- # IFS=: 00:06:11.532 22:51:43 -- accel/accel.sh@20 -- # read -r var val 00:06:11.532 22:51:43 -- accel/accel.sh@21 -- # val= 00:06:11.532 22:51:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.532 22:51:43 -- accel/accel.sh@20 -- # IFS=: 00:06:11.532 22:51:43 -- accel/accel.sh@20 -- # read -r var val 00:06:11.532 22:51:43 -- accel/accel.sh@21 -- # val= 00:06:11.532 22:51:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.532 22:51:43 -- accel/accel.sh@20 -- # IFS=: 00:06:11.532 22:51:43 -- accel/accel.sh@20 -- # read -r var val 00:06:11.532 22:51:43 -- accel/accel.sh@21 -- # val= 00:06:11.532 22:51:43 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:11.532 22:51:43 -- accel/accel.sh@20 -- # IFS=: 00:06:11.532 22:51:43 -- accel/accel.sh@20 -- # read -r var val 00:06:11.532 22:51:43 -- accel/accel.sh@21 -- # val= 00:06:11.532 22:51:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.532 22:51:43 -- accel/accel.sh@20 -- # IFS=: 00:06:11.532 22:51:43 -- accel/accel.sh@20 -- # read -r var val 00:06:11.532 22:51:43 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:11.532 22:51:43 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:11.532 22:51:43 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:11.532 00:06:11.532 real 0m2.602s 00:06:11.532 user 0m2.361s 00:06:11.532 sys 0m0.251s 00:06:11.532 22:51:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:11.532 22:51:43 -- common/autotest_common.sh@10 -- # set +x 00:06:11.532 ************************************ 00:06:11.532 END TEST accel_copy_crc32c 00:06:11.532 ************************************ 00:06:11.532 22:51:43 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:11.532 22:51:43 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:11.532 22:51:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:11.532 22:51:43 -- common/autotest_common.sh@10 -- # set +x 00:06:11.532 ************************************ 00:06:11.532 START TEST accel_copy_crc32c_C2 00:06:11.532 ************************************ 00:06:11.532 22:51:43 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:11.532 22:51:43 -- accel/accel.sh@16 -- # local accel_opc 00:06:11.532 22:51:43 -- accel/accel.sh@17 -- # local accel_module 00:06:11.532 22:51:43 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:11.532 22:51:43 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:11.532 22:51:43 -- accel/accel.sh@12 -- # 
build_accel_config 00:06:11.532 22:51:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:11.532 22:51:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:11.532 22:51:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:11.532 22:51:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:11.532 22:51:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:11.532 22:51:43 -- accel/accel.sh@41 -- # local IFS=, 00:06:11.532 22:51:43 -- accel/accel.sh@42 -- # jq -r . 00:06:11.791 [2024-07-24 22:51:43.966181] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:11.791 [2024-07-24 22:51:43.966250] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3038129 ] 00:06:11.791 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.791 [2024-07-24 22:51:44.036557] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.791 [2024-07-24 22:51:44.071251] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.168 22:51:45 -- accel/accel.sh@18 -- # out=' 00:06:13.168 SPDK Configuration: 00:06:13.168 Core mask: 0x1 00:06:13.168 00:06:13.168 Accel Perf Configuration: 00:06:13.168 Workload Type: copy_crc32c 00:06:13.168 CRC-32C seed: 0 00:06:13.168 Vector size: 4096 bytes 00:06:13.168 Transfer size: 8192 bytes 00:06:13.168 Vector count 2 00:06:13.168 Module: software 00:06:13.168 Queue depth: 32 00:06:13.168 Allocate depth: 32 00:06:13.168 # threads/core: 1 00:06:13.168 Run time: 1 seconds 00:06:13.168 Verify: Yes 00:06:13.168 00:06:13.168 Running for 1 seconds... 
00:06:13.168 00:06:13.168 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:13.168 ------------------------------------------------------------------------------------ 00:06:13.168 0,0 248832/s 1944 MiB/s 0 0 00:06:13.168 ==================================================================================== 00:06:13.168 Total 248832/s 972 MiB/s 0 0' 00:06:13.168 22:51:45 -- accel/accel.sh@20 -- # IFS=: 00:06:13.168 22:51:45 -- accel/accel.sh@20 -- # read -r var val 00:06:13.168 22:51:45 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:13.168 22:51:45 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:13.168 22:51:45 -- accel/accel.sh@12 -- # build_accel_config 00:06:13.168 22:51:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:13.168 22:51:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:13.168 22:51:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:13.168 22:51:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:13.168 22:51:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:13.168 22:51:45 -- accel/accel.sh@41 -- # local IFS=, 00:06:13.168 22:51:45 -- accel/accel.sh@42 -- # jq -r . 00:06:13.168 [2024-07-24 22:51:45.261436] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:06:13.168 [2024-07-24 22:51:45.261502] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3038377 ] 00:06:13.168 EAL: No free 2048 kB hugepages reported on node 1 00:06:13.168 [2024-07-24 22:51:45.329655] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.168 [2024-07-24 22:51:45.363513] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.168 22:51:45 -- accel/accel.sh@21 -- # val= 00:06:13.168 22:51:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.168 22:51:45 -- accel/accel.sh@20 -- # IFS=: 00:06:13.168 22:51:45 -- accel/accel.sh@20 -- # read -r var val 00:06:13.168 22:51:45 -- accel/accel.sh@21 -- # val= 00:06:13.168 22:51:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.168 22:51:45 -- accel/accel.sh@20 -- # IFS=: 00:06:13.168 22:51:45 -- accel/accel.sh@20 -- # read -r var val 00:06:13.168 22:51:45 -- accel/accel.sh@21 -- # val=0x1 00:06:13.168 22:51:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.168 22:51:45 -- accel/accel.sh@20 -- # IFS=: 00:06:13.168 22:51:45 -- accel/accel.sh@20 -- # read -r var val 00:06:13.168 22:51:45 -- accel/accel.sh@21 -- # val= 00:06:13.168 22:51:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.168 22:51:45 -- accel/accel.sh@20 -- # IFS=: 00:06:13.168 22:51:45 -- accel/accel.sh@20 -- # read -r var val 00:06:13.168 22:51:45 -- accel/accel.sh@21 -- # val= 00:06:13.168 22:51:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.168 22:51:45 -- accel/accel.sh@20 -- # IFS=: 00:06:13.168 22:51:45 -- accel/accel.sh@20 -- # read -r var val 00:06:13.168 22:51:45 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:13.168 22:51:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.168 22:51:45 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:13.168 22:51:45 -- accel/accel.sh@20 -- # IFS=: 00:06:13.168 22:51:45 -- 
accel/accel.sh@20 -- # read -r var val 00:06:13.168 22:51:45 -- accel/accel.sh@21 -- # val=0 00:06:13.168 22:51:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.168 22:51:45 -- accel/accel.sh@20 -- # IFS=: 00:06:13.168 22:51:45 -- accel/accel.sh@20 -- # read -r var val 00:06:13.168 22:51:45 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:13.168 22:51:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.168 22:51:45 -- accel/accel.sh@20 -- # IFS=: 00:06:13.168 22:51:45 -- accel/accel.sh@20 -- # read -r var val 00:06:13.168 22:51:45 -- accel/accel.sh@21 -- # val='8192 bytes' 00:06:13.168 22:51:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.168 22:51:45 -- accel/accel.sh@20 -- # IFS=: 00:06:13.168 22:51:45 -- accel/accel.sh@20 -- # read -r var val 00:06:13.168 22:51:45 -- accel/accel.sh@21 -- # val= 00:06:13.168 22:51:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.168 22:51:45 -- accel/accel.sh@20 -- # IFS=: 00:06:13.168 22:51:45 -- accel/accel.sh@20 -- # read -r var val 00:06:13.168 22:51:45 -- accel/accel.sh@21 -- # val=software 00:06:13.168 22:51:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.168 22:51:45 -- accel/accel.sh@23 -- # accel_module=software 00:06:13.168 22:51:45 -- accel/accel.sh@20 -- # IFS=: 00:06:13.168 22:51:45 -- accel/accel.sh@20 -- # read -r var val 00:06:13.168 22:51:45 -- accel/accel.sh@21 -- # val=32 00:06:13.168 22:51:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.168 22:51:45 -- accel/accel.sh@20 -- # IFS=: 00:06:13.168 22:51:45 -- accel/accel.sh@20 -- # read -r var val 00:06:13.168 22:51:45 -- accel/accel.sh@21 -- # val=32 00:06:13.168 22:51:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.168 22:51:45 -- accel/accel.sh@20 -- # IFS=: 00:06:13.168 22:51:45 -- accel/accel.sh@20 -- # read -r var val 00:06:13.168 22:51:45 -- accel/accel.sh@21 -- # val=1 00:06:13.168 22:51:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.168 22:51:45 -- accel/accel.sh@20 -- # IFS=: 00:06:13.168 22:51:45 -- accel/accel.sh@20 
-- # read -r var val 00:06:13.168 22:51:45 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:13.168 22:51:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.168 22:51:45 -- accel/accel.sh@20 -- # IFS=: 00:06:13.168 22:51:45 -- accel/accel.sh@20 -- # read -r var val 00:06:13.168 22:51:45 -- accel/accel.sh@21 -- # val=Yes 00:06:13.168 22:51:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.168 22:51:45 -- accel/accel.sh@20 -- # IFS=: 00:06:13.168 22:51:45 -- accel/accel.sh@20 -- # read -r var val 00:06:13.168 22:51:45 -- accel/accel.sh@21 -- # val= 00:06:13.168 22:51:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.168 22:51:45 -- accel/accel.sh@20 -- # IFS=: 00:06:13.168 22:51:45 -- accel/accel.sh@20 -- # read -r var val 00:06:13.168 22:51:45 -- accel/accel.sh@21 -- # val= 00:06:13.168 22:51:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.168 22:51:45 -- accel/accel.sh@20 -- # IFS=: 00:06:13.168 22:51:45 -- accel/accel.sh@20 -- # read -r var val 00:06:14.104 22:51:46 -- accel/accel.sh@21 -- # val= 00:06:14.104 22:51:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.104 22:51:46 -- accel/accel.sh@20 -- # IFS=: 00:06:14.104 22:51:46 -- accel/accel.sh@20 -- # read -r var val 00:06:14.104 22:51:46 -- accel/accel.sh@21 -- # val= 00:06:14.104 22:51:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.104 22:51:46 -- accel/accel.sh@20 -- # IFS=: 00:06:14.104 22:51:46 -- accel/accel.sh@20 -- # read -r var val 00:06:14.104 22:51:46 -- accel/accel.sh@21 -- # val= 00:06:14.104 22:51:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.104 22:51:46 -- accel/accel.sh@20 -- # IFS=: 00:06:14.104 22:51:46 -- accel/accel.sh@20 -- # read -r var val 00:06:14.104 22:51:46 -- accel/accel.sh@21 -- # val= 00:06:14.104 22:51:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.104 22:51:46 -- accel/accel.sh@20 -- # IFS=: 00:06:14.104 22:51:46 -- accel/accel.sh@20 -- # read -r var val 00:06:14.104 22:51:46 -- accel/accel.sh@21 -- # val= 00:06:14.104 22:51:46 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:14.104 22:51:46 -- accel/accel.sh@20 -- # IFS=: 00:06:14.104 22:51:46 -- accel/accel.sh@20 -- # read -r var val 00:06:14.104 22:51:46 -- accel/accel.sh@21 -- # val= 00:06:14.104 22:51:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.104 22:51:46 -- accel/accel.sh@20 -- # IFS=: 00:06:14.104 22:51:46 -- accel/accel.sh@20 -- # read -r var val 00:06:14.104 22:51:46 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:14.104 22:51:46 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:14.104 22:51:46 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:14.104 00:06:14.104 real 0m2.597s 00:06:14.104 user 0m2.345s 00:06:14.104 sys 0m0.263s 00:06:14.104 22:51:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:14.104 22:51:46 -- common/autotest_common.sh@10 -- # set +x 00:06:14.104 ************************************ 00:06:14.104 END TEST accel_copy_crc32c_C2 00:06:14.104 ************************************ 00:06:14.363 22:51:46 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:14.363 22:51:46 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:14.363 22:51:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:14.363 22:51:46 -- common/autotest_common.sh@10 -- # set +x 00:06:14.363 ************************************ 00:06:14.363 START TEST accel_dualcast 00:06:14.363 ************************************ 00:06:14.363 22:51:46 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dualcast -y 00:06:14.363 22:51:46 -- accel/accel.sh@16 -- # local accel_opc 00:06:14.363 22:51:46 -- accel/accel.sh@17 -- # local accel_module 00:06:14.363 22:51:46 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:06:14.363 22:51:46 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:14.363 22:51:46 -- accel/accel.sh@12 -- # build_accel_config 00:06:14.363 22:51:46 -- 
accel/accel.sh@32 -- # accel_json_cfg=() 00:06:14.363 22:51:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:14.363 22:51:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:14.363 22:51:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:14.363 22:51:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:14.363 22:51:46 -- accel/accel.sh@41 -- # local IFS=, 00:06:14.363 22:51:46 -- accel/accel.sh@42 -- # jq -r . 00:06:14.363 [2024-07-24 22:51:46.608313] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:14.363 [2024-07-24 22:51:46.608381] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3038664 ] 00:06:14.363 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.363 [2024-07-24 22:51:46.677756] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.363 [2024-07-24 22:51:46.712563] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.741 22:51:47 -- accel/accel.sh@18 -- # out=' 00:06:15.741 SPDK Configuration: 00:06:15.741 Core mask: 0x1 00:06:15.741 00:06:15.741 Accel Perf Configuration: 00:06:15.741 Workload Type: dualcast 00:06:15.741 Transfer size: 4096 bytes 00:06:15.741 Vector count 1 00:06:15.741 Module: software 00:06:15.741 Queue depth: 32 00:06:15.741 Allocate depth: 32 00:06:15.741 # threads/core: 1 00:06:15.741 Run time: 1 seconds 00:06:15.741 Verify: Yes 00:06:15.741 00:06:15.741 Running for 1 seconds... 
00:06:15.741 00:06:15.741 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:15.741 ------------------------------------------------------------------------------------ 00:06:15.741 0,0 532448/s 2079 MiB/s 0 0 00:06:15.741 ==================================================================================== 00:06:15.741 Total 532448/s 2079 MiB/s 0 0' 00:06:15.741 22:51:47 -- accel/accel.sh@20 -- # IFS=: 00:06:15.741 22:51:47 -- accel/accel.sh@20 -- # read -r var val 00:06:15.741 22:51:47 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:15.741 22:51:47 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:15.741 22:51:47 -- accel/accel.sh@12 -- # build_accel_config 00:06:15.741 22:51:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:15.741 22:51:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:15.741 22:51:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:15.741 22:51:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:15.741 22:51:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:15.741 22:51:47 -- accel/accel.sh@41 -- # local IFS=, 00:06:15.741 22:51:47 -- accel/accel.sh@42 -- # jq -r . 00:06:15.741 [2024-07-24 22:51:47.901333] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:06:15.741 [2024-07-24 22:51:47.901396] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3038932 ] 00:06:15.741 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.741 [2024-07-24 22:51:47.969080] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.741 [2024-07-24 22:51:48.002993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.741 22:51:48 -- accel/accel.sh@21 -- # val= 00:06:15.741 22:51:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.741 22:51:48 -- accel/accel.sh@20 -- # IFS=: 00:06:15.741 22:51:48 -- accel/accel.sh@20 -- # read -r var val 00:06:15.741 22:51:48 -- accel/accel.sh@21 -- # val= 00:06:15.741 22:51:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.741 22:51:48 -- accel/accel.sh@20 -- # IFS=: 00:06:15.741 22:51:48 -- accel/accel.sh@20 -- # read -r var val 00:06:15.741 22:51:48 -- accel/accel.sh@21 -- # val=0x1 00:06:15.741 22:51:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.741 22:51:48 -- accel/accel.sh@20 -- # IFS=: 00:06:15.741 22:51:48 -- accel/accel.sh@20 -- # read -r var val 00:06:15.741 22:51:48 -- accel/accel.sh@21 -- # val= 00:06:15.741 22:51:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.741 22:51:48 -- accel/accel.sh@20 -- # IFS=: 00:06:15.741 22:51:48 -- accel/accel.sh@20 -- # read -r var val 00:06:15.741 22:51:48 -- accel/accel.sh@21 -- # val= 00:06:15.741 22:51:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.741 22:51:48 -- accel/accel.sh@20 -- # IFS=: 00:06:15.741 22:51:48 -- accel/accel.sh@20 -- # read -r var val 00:06:15.741 22:51:48 -- accel/accel.sh@21 -- # val=dualcast 00:06:15.741 22:51:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.741 22:51:48 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:06:15.741 22:51:48 -- accel/accel.sh@20 -- # IFS=: 00:06:15.741 22:51:48 -- 
accel/accel.sh@20 -- # read -r var val 00:06:15.741 22:51:48 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:15.741 22:51:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.741 22:51:48 -- accel/accel.sh@20 -- # IFS=: 00:06:15.741 22:51:48 -- accel/accel.sh@20 -- # read -r var val 00:06:15.741 22:51:48 -- accel/accel.sh@21 -- # val= 00:06:15.741 22:51:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.741 22:51:48 -- accel/accel.sh@20 -- # IFS=: 00:06:15.741 22:51:48 -- accel/accel.sh@20 -- # read -r var val 00:06:15.741 22:51:48 -- accel/accel.sh@21 -- # val=software 00:06:15.741 22:51:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.741 22:51:48 -- accel/accel.sh@23 -- # accel_module=software 00:06:15.741 22:51:48 -- accel/accel.sh@20 -- # IFS=: 00:06:15.741 22:51:48 -- accel/accel.sh@20 -- # read -r var val 00:06:15.741 22:51:48 -- accel/accel.sh@21 -- # val=32 00:06:15.741 22:51:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.741 22:51:48 -- accel/accel.sh@20 -- # IFS=: 00:06:15.741 22:51:48 -- accel/accel.sh@20 -- # read -r var val 00:06:15.741 22:51:48 -- accel/accel.sh@21 -- # val=32 00:06:15.741 22:51:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.741 22:51:48 -- accel/accel.sh@20 -- # IFS=: 00:06:15.741 22:51:48 -- accel/accel.sh@20 -- # read -r var val 00:06:15.741 22:51:48 -- accel/accel.sh@21 -- # val=1 00:06:15.741 22:51:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.741 22:51:48 -- accel/accel.sh@20 -- # IFS=: 00:06:15.741 22:51:48 -- accel/accel.sh@20 -- # read -r var val 00:06:15.741 22:51:48 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:15.741 22:51:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.741 22:51:48 -- accel/accel.sh@20 -- # IFS=: 00:06:15.741 22:51:48 -- accel/accel.sh@20 -- # read -r var val 00:06:15.741 22:51:48 -- accel/accel.sh@21 -- # val=Yes 00:06:15.741 22:51:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.741 22:51:48 -- accel/accel.sh@20 -- # IFS=: 00:06:15.741 22:51:48 -- accel/accel.sh@20 
-- # read -r var val 00:06:15.741 22:51:48 -- accel/accel.sh@21 -- # val= 00:06:15.741 22:51:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.741 22:51:48 -- accel/accel.sh@20 -- # IFS=: 00:06:15.741 22:51:48 -- accel/accel.sh@20 -- # read -r var val 00:06:15.741 22:51:48 -- accel/accel.sh@21 -- # val= 00:06:15.741 22:51:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.741 22:51:48 -- accel/accel.sh@20 -- # IFS=: 00:06:15.741 22:51:48 -- accel/accel.sh@20 -- # read -r var val 00:06:17.119 22:51:49 -- accel/accel.sh@21 -- # val= 00:06:17.119 22:51:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.119 22:51:49 -- accel/accel.sh@20 -- # IFS=: 00:06:17.119 22:51:49 -- accel/accel.sh@20 -- # read -r var val 00:06:17.119 22:51:49 -- accel/accel.sh@21 -- # val= 00:06:17.119 22:51:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.119 22:51:49 -- accel/accel.sh@20 -- # IFS=: 00:06:17.119 22:51:49 -- accel/accel.sh@20 -- # read -r var val 00:06:17.119 22:51:49 -- accel/accel.sh@21 -- # val= 00:06:17.119 22:51:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.119 22:51:49 -- accel/accel.sh@20 -- # IFS=: 00:06:17.119 22:51:49 -- accel/accel.sh@20 -- # read -r var val 00:06:17.119 22:51:49 -- accel/accel.sh@21 -- # val= 00:06:17.119 22:51:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.119 22:51:49 -- accel/accel.sh@20 -- # IFS=: 00:06:17.119 22:51:49 -- accel/accel.sh@20 -- # read -r var val 00:06:17.119 22:51:49 -- accel/accel.sh@21 -- # val= 00:06:17.119 22:51:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.119 22:51:49 -- accel/accel.sh@20 -- # IFS=: 00:06:17.119 22:51:49 -- accel/accel.sh@20 -- # read -r var val 00:06:17.119 22:51:49 -- accel/accel.sh@21 -- # val= 00:06:17.119 22:51:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.119 22:51:49 -- accel/accel.sh@20 -- # IFS=: 00:06:17.119 22:51:49 -- accel/accel.sh@20 -- # read -r var val 00:06:17.119 22:51:49 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:17.119 22:51:49 -- 
accel/accel.sh@28 -- # [[ -n dualcast ]] 00:06:17.119 22:51:49 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:17.119 00:06:17.119 real 0m2.591s 00:06:17.119 user 0m2.341s 00:06:17.119 sys 0m0.259s 00:06:17.119 22:51:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:17.119 22:51:49 -- common/autotest_common.sh@10 -- # set +x 00:06:17.119 ************************************ 00:06:17.119 END TEST accel_dualcast 00:06:17.119 ************************************ 00:06:17.119 22:51:49 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:17.119 22:51:49 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:17.119 22:51:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:17.119 22:51:49 -- common/autotest_common.sh@10 -- # set +x 00:06:17.119 ************************************ 00:06:17.119 START TEST accel_compare 00:06:17.119 ************************************ 00:06:17.119 22:51:49 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compare -y 00:06:17.119 22:51:49 -- accel/accel.sh@16 -- # local accel_opc 00:06:17.119 22:51:49 -- accel/accel.sh@17 -- # local accel_module 00:06:17.119 22:51:49 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:06:17.119 22:51:49 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:17.119 22:51:49 -- accel/accel.sh@12 -- # build_accel_config 00:06:17.119 22:51:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:17.119 22:51:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:17.119 22:51:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:17.119 22:51:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:17.119 22:51:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:17.119 22:51:49 -- accel/accel.sh@41 -- # local IFS=, 00:06:17.119 22:51:49 -- accel/accel.sh@42 -- # jq -r . 
00:06:17.119 [2024-07-24 22:51:49.246062] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:17.119 [2024-07-24 22:51:49.246128] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3039213 ] 00:06:17.119 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.119 [2024-07-24 22:51:49.315036] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.119 [2024-07-24 22:51:49.349825] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.087 22:51:50 -- accel/accel.sh@18 -- # out=' 00:06:18.087 SPDK Configuration: 00:06:18.087 Core mask: 0x1 00:06:18.087 00:06:18.087 Accel Perf Configuration: 00:06:18.087 Workload Type: compare 00:06:18.087 Transfer size: 4096 bytes 00:06:18.087 Vector count 1 00:06:18.087 Module: software 00:06:18.087 Queue depth: 32 00:06:18.087 Allocate depth: 32 00:06:18.087 # threads/core: 1 00:06:18.087 Run time: 1 seconds 00:06:18.087 Verify: Yes 00:06:18.087 00:06:18.087 Running for 1 seconds... 
00:06:18.087 00:06:18.087 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:18.087 ------------------------------------------------------------------------------------ 00:06:18.087 0,0 636128/s 2484 MiB/s 0 0 00:06:18.087 ==================================================================================== 00:06:18.087 Total 636128/s 2484 MiB/s 0 0' 00:06:18.087 22:51:50 -- accel/accel.sh@20 -- # IFS=: 00:06:18.087 22:51:50 -- accel/accel.sh@20 -- # read -r var val 00:06:18.087 22:51:50 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:18.087 22:51:50 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:18.347 22:51:50 -- accel/accel.sh@12 -- # build_accel_config 00:06:18.347 22:51:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:18.347 22:51:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:18.347 22:51:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:18.347 22:51:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:18.347 22:51:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:18.347 22:51:50 -- accel/accel.sh@41 -- # local IFS=, 00:06:18.347 22:51:50 -- accel/accel.sh@42 -- # jq -r . 00:06:18.347 [2024-07-24 22:51:50.539416] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:06:18.347 [2024-07-24 22:51:50.539483] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3039473 ] 00:06:18.347 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.347 [2024-07-24 22:51:50.607660] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.347 [2024-07-24 22:51:50.642489] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.347 22:51:50 -- accel/accel.sh@21 -- # val= 00:06:18.347 22:51:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.347 22:51:50 -- accel/accel.sh@20 -- # IFS=: 00:06:18.347 22:51:50 -- accel/accel.sh@20 -- # read -r var val 00:06:18.347 22:51:50 -- accel/accel.sh@21 -- # val= 00:06:18.347 22:51:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.347 22:51:50 -- accel/accel.sh@20 -- # IFS=: 00:06:18.347 22:51:50 -- accel/accel.sh@20 -- # read -r var val 00:06:18.347 22:51:50 -- accel/accel.sh@21 -- # val=0x1 00:06:18.347 22:51:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.347 22:51:50 -- accel/accel.sh@20 -- # IFS=: 00:06:18.347 22:51:50 -- accel/accel.sh@20 -- # read -r var val 00:06:18.347 22:51:50 -- accel/accel.sh@21 -- # val= 00:06:18.347 22:51:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.347 22:51:50 -- accel/accel.sh@20 -- # IFS=: 00:06:18.347 22:51:50 -- accel/accel.sh@20 -- # read -r var val 00:06:18.347 22:51:50 -- accel/accel.sh@21 -- # val= 00:06:18.347 22:51:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.347 22:51:50 -- accel/accel.sh@20 -- # IFS=: 00:06:18.347 22:51:50 -- accel/accel.sh@20 -- # read -r var val 00:06:18.347 22:51:50 -- accel/accel.sh@21 -- # val=compare 00:06:18.347 22:51:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.347 22:51:50 -- accel/accel.sh@24 -- # accel_opc=compare 00:06:18.347 22:51:50 -- accel/accel.sh@20 -- # IFS=: 00:06:18.347 22:51:50 -- 
accel/accel.sh@20 -- # read -r var val 00:06:18.347 22:51:50 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:18.347 22:51:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.347 22:51:50 -- accel/accel.sh@20 -- # IFS=: 00:06:18.347 22:51:50 -- accel/accel.sh@20 -- # read -r var val 00:06:18.347 22:51:50 -- accel/accel.sh@21 -- # val= 00:06:18.347 22:51:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.347 22:51:50 -- accel/accel.sh@20 -- # IFS=: 00:06:18.347 22:51:50 -- accel/accel.sh@20 -- # read -r var val 00:06:18.347 22:51:50 -- accel/accel.sh@21 -- # val=software 00:06:18.347 22:51:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.347 22:51:50 -- accel/accel.sh@23 -- # accel_module=software 00:06:18.347 22:51:50 -- accel/accel.sh@20 -- # IFS=: 00:06:18.347 22:51:50 -- accel/accel.sh@20 -- # read -r var val 00:06:18.347 22:51:50 -- accel/accel.sh@21 -- # val=32 00:06:18.347 22:51:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.347 22:51:50 -- accel/accel.sh@20 -- # IFS=: 00:06:18.347 22:51:50 -- accel/accel.sh@20 -- # read -r var val 00:06:18.347 22:51:50 -- accel/accel.sh@21 -- # val=32 00:06:18.347 22:51:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.347 22:51:50 -- accel/accel.sh@20 -- # IFS=: 00:06:18.347 22:51:50 -- accel/accel.sh@20 -- # read -r var val 00:06:18.347 22:51:50 -- accel/accel.sh@21 -- # val=1 00:06:18.347 22:51:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.347 22:51:50 -- accel/accel.sh@20 -- # IFS=: 00:06:18.347 22:51:50 -- accel/accel.sh@20 -- # read -r var val 00:06:18.347 22:51:50 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:18.347 22:51:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.347 22:51:50 -- accel/accel.sh@20 -- # IFS=: 00:06:18.347 22:51:50 -- accel/accel.sh@20 -- # read -r var val 00:06:18.347 22:51:50 -- accel/accel.sh@21 -- # val=Yes 00:06:18.347 22:51:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.347 22:51:50 -- accel/accel.sh@20 -- # IFS=: 00:06:18.347 22:51:50 -- accel/accel.sh@20 
-- # read -r var val 00:06:18.347 22:51:50 -- accel/accel.sh@21 -- # val= 00:06:18.347 22:51:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.347 22:51:50 -- accel/accel.sh@20 -- # IFS=: 00:06:18.347 22:51:50 -- accel/accel.sh@20 -- # read -r var val 00:06:18.347 22:51:50 -- accel/accel.sh@21 -- # val= 00:06:18.347 22:51:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.347 22:51:50 -- accel/accel.sh@20 -- # IFS=: 00:06:18.347 22:51:50 -- accel/accel.sh@20 -- # read -r var val 00:06:19.726 22:51:51 -- accel/accel.sh@21 -- # val= 00:06:19.726 22:51:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.726 22:51:51 -- accel/accel.sh@20 -- # IFS=: 00:06:19.726 22:51:51 -- accel/accel.sh@20 -- # read -r var val 00:06:19.726 22:51:51 -- accel/accel.sh@21 -- # val= 00:06:19.726 22:51:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.726 22:51:51 -- accel/accel.sh@20 -- # IFS=: 00:06:19.726 22:51:51 -- accel/accel.sh@20 -- # read -r var val 00:06:19.726 22:51:51 -- accel/accel.sh@21 -- # val= 00:06:19.726 22:51:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.726 22:51:51 -- accel/accel.sh@20 -- # IFS=: 00:06:19.726 22:51:51 -- accel/accel.sh@20 -- # read -r var val 00:06:19.726 22:51:51 -- accel/accel.sh@21 -- # val= 00:06:19.726 22:51:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.726 22:51:51 -- accel/accel.sh@20 -- # IFS=: 00:06:19.726 22:51:51 -- accel/accel.sh@20 -- # read -r var val 00:06:19.726 22:51:51 -- accel/accel.sh@21 -- # val= 00:06:19.726 22:51:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.726 22:51:51 -- accel/accel.sh@20 -- # IFS=: 00:06:19.726 22:51:51 -- accel/accel.sh@20 -- # read -r var val 00:06:19.726 22:51:51 -- accel/accel.sh@21 -- # val= 00:06:19.726 22:51:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.726 22:51:51 -- accel/accel.sh@20 -- # IFS=: 00:06:19.726 22:51:51 -- accel/accel.sh@20 -- # read -r var val 00:06:19.726 22:51:51 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:19.726 22:51:51 -- 
accel/accel.sh@28 -- # [[ -n compare ]] 00:06:19.726 22:51:51 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:19.726 00:06:19.726 real 0m2.592s 00:06:19.726 user 0m2.351s 00:06:19.726 sys 0m0.249s 00:06:19.726 22:51:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:19.726 22:51:51 -- common/autotest_common.sh@10 -- # set +x 00:06:19.726 ************************************ 00:06:19.726 END TEST accel_compare 00:06:19.726 ************************************ 00:06:19.726 22:51:51 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:19.726 22:51:51 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:19.726 22:51:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:19.726 22:51:51 -- common/autotest_common.sh@10 -- # set +x 00:06:19.726 ************************************ 00:06:19.726 START TEST accel_xor 00:06:19.726 ************************************ 00:06:19.726 22:51:51 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y 00:06:19.726 22:51:51 -- accel/accel.sh@16 -- # local accel_opc 00:06:19.726 22:51:51 -- accel/accel.sh@17 -- # local accel_module 00:06:19.726 22:51:51 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:06:19.726 22:51:51 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:19.726 22:51:51 -- accel/accel.sh@12 -- # build_accel_config 00:06:19.726 22:51:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:19.726 22:51:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:19.726 22:51:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:19.726 22:51:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:19.726 22:51:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:19.726 22:51:51 -- accel/accel.sh@41 -- # local IFS=, 00:06:19.726 22:51:51 -- accel/accel.sh@42 -- # jq -r . 
00:06:19.726 [2024-07-24 22:51:51.886579] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:19.726 [2024-07-24 22:51:51.886649] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3039658 ] 00:06:19.726 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.726 [2024-07-24 22:51:51.956373] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.726 [2024-07-24 22:51:51.991054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.106 22:51:53 -- accel/accel.sh@18 -- # out=' 00:06:21.106 SPDK Configuration: 00:06:21.106 Core mask: 0x1 00:06:21.106 00:06:21.106 Accel Perf Configuration: 00:06:21.106 Workload Type: xor 00:06:21.106 Source buffers: 2 00:06:21.106 Transfer size: 4096 bytes 00:06:21.106 Vector count 1 00:06:21.106 Module: software 00:06:21.106 Queue depth: 32 00:06:21.106 Allocate depth: 32 00:06:21.106 # threads/core: 1 00:06:21.106 Run time: 1 seconds 00:06:21.106 Verify: Yes 00:06:21.106 00:06:21.106 Running for 1 seconds... 
00:06:21.106 00:06:21.106 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:21.106 ------------------------------------------------------------------------------------ 00:06:21.106 0,0 511680/s 1998 MiB/s 0 0 00:06:21.106 ==================================================================================== 00:06:21.106 Total 511680/s 1998 MiB/s 0 0' 00:06:21.106 22:51:53 -- accel/accel.sh@20 -- # IFS=: 00:06:21.106 22:51:53 -- accel/accel.sh@20 -- # read -r var val 00:06:21.106 22:51:53 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:21.106 22:51:53 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:21.106 22:51:53 -- accel/accel.sh@12 -- # build_accel_config 00:06:21.106 22:51:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:21.106 22:51:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:21.106 22:51:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:21.106 22:51:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:21.106 22:51:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:21.106 22:51:53 -- accel/accel.sh@41 -- # local IFS=, 00:06:21.106 22:51:53 -- accel/accel.sh@42 -- # jq -r . 00:06:21.106 [2024-07-24 22:51:53.180166] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:06:21.106 [2024-07-24 22:51:53.180234] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3039805 ] 00:06:21.106 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.106 [2024-07-24 22:51:53.250308] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.106 [2024-07-24 22:51:53.284742] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.106 22:51:53 -- accel/accel.sh@21 -- # val= 00:06:21.106 22:51:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.106 22:51:53 -- accel/accel.sh@20 -- # IFS=: 00:06:21.106 22:51:53 -- accel/accel.sh@20 -- # read -r var val 00:06:21.106 22:51:53 -- accel/accel.sh@21 -- # val= 00:06:21.106 22:51:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.106 22:51:53 -- accel/accel.sh@20 -- # IFS=: 00:06:21.106 22:51:53 -- accel/accel.sh@20 -- # read -r var val 00:06:21.106 22:51:53 -- accel/accel.sh@21 -- # val=0x1 00:06:21.106 22:51:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.106 22:51:53 -- accel/accel.sh@20 -- # IFS=: 00:06:21.106 22:51:53 -- accel/accel.sh@20 -- # read -r var val 00:06:21.106 22:51:53 -- accel/accel.sh@21 -- # val= 00:06:21.106 22:51:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.106 22:51:53 -- accel/accel.sh@20 -- # IFS=: 00:06:21.106 22:51:53 -- accel/accel.sh@20 -- # read -r var val 00:06:21.106 22:51:53 -- accel/accel.sh@21 -- # val= 00:06:21.106 22:51:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.106 22:51:53 -- accel/accel.sh@20 -- # IFS=: 00:06:21.106 22:51:53 -- accel/accel.sh@20 -- # read -r var val 00:06:21.106 22:51:53 -- accel/accel.sh@21 -- # val=xor 00:06:21.106 22:51:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.106 22:51:53 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:21.106 22:51:53 -- accel/accel.sh@20 -- # IFS=: 00:06:21.106 22:51:53 -- 
accel/accel.sh@20 -- # read -r var val 00:06:21.106 22:51:53 -- accel/accel.sh@21 -- # val=2 00:06:21.106 22:51:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.106 22:51:53 -- accel/accel.sh@20 -- # IFS=: 00:06:21.106 22:51:53 -- accel/accel.sh@20 -- # read -r var val 00:06:21.106 22:51:53 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:21.106 22:51:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.106 22:51:53 -- accel/accel.sh@20 -- # IFS=: 00:06:21.106 22:51:53 -- accel/accel.sh@20 -- # read -r var val 00:06:21.106 22:51:53 -- accel/accel.sh@21 -- # val= 00:06:21.106 22:51:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.106 22:51:53 -- accel/accel.sh@20 -- # IFS=: 00:06:21.106 22:51:53 -- accel/accel.sh@20 -- # read -r var val 00:06:21.106 22:51:53 -- accel/accel.sh@21 -- # val=software 00:06:21.106 22:51:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.106 22:51:53 -- accel/accel.sh@23 -- # accel_module=software 00:06:21.106 22:51:53 -- accel/accel.sh@20 -- # IFS=: 00:06:21.106 22:51:53 -- accel/accel.sh@20 -- # read -r var val 00:06:21.106 22:51:53 -- accel/accel.sh@21 -- # val=32 00:06:21.106 22:51:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.106 22:51:53 -- accel/accel.sh@20 -- # IFS=: 00:06:21.106 22:51:53 -- accel/accel.sh@20 -- # read -r var val 00:06:21.106 22:51:53 -- accel/accel.sh@21 -- # val=32 00:06:21.106 22:51:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.106 22:51:53 -- accel/accel.sh@20 -- # IFS=: 00:06:21.106 22:51:53 -- accel/accel.sh@20 -- # read -r var val 00:06:21.106 22:51:53 -- accel/accel.sh@21 -- # val=1 00:06:21.106 22:51:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.106 22:51:53 -- accel/accel.sh@20 -- # IFS=: 00:06:21.106 22:51:53 -- accel/accel.sh@20 -- # read -r var val 00:06:21.106 22:51:53 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:21.106 22:51:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.106 22:51:53 -- accel/accel.sh@20 -- # IFS=: 00:06:21.106 22:51:53 -- accel/accel.sh@20 -- 
# read -r var val 00:06:21.106 22:51:53 -- accel/accel.sh@21 -- # val=Yes 00:06:21.106 22:51:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.106 22:51:53 -- accel/accel.sh@20 -- # IFS=: 00:06:21.106 22:51:53 -- accel/accel.sh@20 -- # read -r var val 00:06:21.106 22:51:53 -- accel/accel.sh@21 -- # val= 00:06:21.106 22:51:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.106 22:51:53 -- accel/accel.sh@20 -- # IFS=: 00:06:21.106 22:51:53 -- accel/accel.sh@20 -- # read -r var val 00:06:21.106 22:51:53 -- accel/accel.sh@21 -- # val= 00:06:21.106 22:51:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.106 22:51:53 -- accel/accel.sh@20 -- # IFS=: 00:06:21.106 22:51:53 -- accel/accel.sh@20 -- # read -r var val 00:06:22.044 22:51:54 -- accel/accel.sh@21 -- # val= 00:06:22.044 22:51:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.044 22:51:54 -- accel/accel.sh@20 -- # IFS=: 00:06:22.044 22:51:54 -- accel/accel.sh@20 -- # read -r var val 00:06:22.044 22:51:54 -- accel/accel.sh@21 -- # val= 00:06:22.044 22:51:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.044 22:51:54 -- accel/accel.sh@20 -- # IFS=: 00:06:22.044 22:51:54 -- accel/accel.sh@20 -- # read -r var val 00:06:22.044 22:51:54 -- accel/accel.sh@21 -- # val= 00:06:22.044 22:51:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.044 22:51:54 -- accel/accel.sh@20 -- # IFS=: 00:06:22.044 22:51:54 -- accel/accel.sh@20 -- # read -r var val 00:06:22.044 22:51:54 -- accel/accel.sh@21 -- # val= 00:06:22.044 22:51:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.044 22:51:54 -- accel/accel.sh@20 -- # IFS=: 00:06:22.044 22:51:54 -- accel/accel.sh@20 -- # read -r var val 00:06:22.044 22:51:54 -- accel/accel.sh@21 -- # val= 00:06:22.044 22:51:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.044 22:51:54 -- accel/accel.sh@20 -- # IFS=: 00:06:22.044 22:51:54 -- accel/accel.sh@20 -- # read -r var val 00:06:22.044 22:51:54 -- accel/accel.sh@21 -- # val= 00:06:22.044 22:51:54 -- accel/accel.sh@22 -- # case 
"$var" in 00:06:22.044 22:51:54 -- accel/accel.sh@20 -- # IFS=: 00:06:22.044 22:51:54 -- accel/accel.sh@20 -- # read -r var val 00:06:22.044 22:51:54 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:22.044 22:51:54 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:22.044 22:51:54 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:22.044 00:06:22.044 real 0m2.593s 00:06:22.044 user 0m2.351s 00:06:22.044 sys 0m0.252s 00:06:22.044 22:51:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:22.044 22:51:54 -- common/autotest_common.sh@10 -- # set +x 00:06:22.044 ************************************ 00:06:22.044 END TEST accel_xor 00:06:22.044 ************************************ 00:06:22.304 22:51:54 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:22.304 22:51:54 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:22.304 22:51:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:22.304 22:51:54 -- common/autotest_common.sh@10 -- # set +x 00:06:22.304 ************************************ 00:06:22.304 START TEST accel_xor 00:06:22.304 ************************************ 00:06:22.304 22:51:54 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y -x 3 00:06:22.304 22:51:54 -- accel/accel.sh@16 -- # local accel_opc 00:06:22.304 22:51:54 -- accel/accel.sh@17 -- # local accel_module 00:06:22.304 22:51:54 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:06:22.304 22:51:54 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:22.304 22:51:54 -- accel/accel.sh@12 -- # build_accel_config 00:06:22.304 22:51:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:22.304 22:51:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:22.304 22:51:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:22.304 22:51:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:22.304 22:51:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 
00:06:22.304 22:51:54 -- accel/accel.sh@41 -- # local IFS=, 00:06:22.304 22:51:54 -- accel/accel.sh@42 -- # jq -r . 00:06:22.304 [2024-07-24 22:51:54.532072] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:22.304 [2024-07-24 22:51:54.532141] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3040067 ] 00:06:22.304 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.304 [2024-07-24 22:51:54.602719] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.304 [2024-07-24 22:51:54.637338] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.684 22:51:55 -- accel/accel.sh@18 -- # out=' 00:06:23.684 SPDK Configuration: 00:06:23.684 Core mask: 0x1 00:06:23.684 00:06:23.684 Accel Perf Configuration: 00:06:23.684 Workload Type: xor 00:06:23.684 Source buffers: 3 00:06:23.684 Transfer size: 4096 bytes 00:06:23.684 Vector count 1 00:06:23.684 Module: software 00:06:23.684 Queue depth: 32 00:06:23.684 Allocate depth: 32 00:06:23.684 # threads/core: 1 00:06:23.684 Run time: 1 seconds 00:06:23.684 Verify: Yes 00:06:23.684 00:06:23.684 Running for 1 seconds... 
00:06:23.684 00:06:23.684 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:23.684 ------------------------------------------------------------------------------------ 00:06:23.684 0,0 480384/s 1876 MiB/s 0 0 00:06:23.684 ==================================================================================== 00:06:23.684 Total 480384/s 1876 MiB/s 0 0' 00:06:23.684 22:51:55 -- accel/accel.sh@20 -- # IFS=: 00:06:23.684 22:51:55 -- accel/accel.sh@20 -- # read -r var val 00:06:23.684 22:51:55 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:23.684 22:51:55 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:23.684 22:51:55 -- accel/accel.sh@12 -- # build_accel_config 00:06:23.684 22:51:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:23.684 22:51:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:23.684 22:51:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:23.684 22:51:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:23.684 22:51:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:23.684 22:51:55 -- accel/accel.sh@41 -- # local IFS=, 00:06:23.684 22:51:55 -- accel/accel.sh@42 -- # jq -r . 00:06:23.684 [2024-07-24 22:51:55.830217] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:06:23.684 [2024-07-24 22:51:55.830295] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3040338 ] 00:06:23.684 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.684 [2024-07-24 22:51:55.902935] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.684 [2024-07-24 22:51:55.936667] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.684 22:51:55 -- accel/accel.sh@21 -- # val= 00:06:23.684 22:51:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.684 22:51:55 -- accel/accel.sh@20 -- # IFS=: 00:06:23.684 22:51:55 -- accel/accel.sh@20 -- # read -r var val 00:06:23.684 22:51:55 -- accel/accel.sh@21 -- # val= 00:06:23.684 22:51:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.684 22:51:55 -- accel/accel.sh@20 -- # IFS=: 00:06:23.684 22:51:55 -- accel/accel.sh@20 -- # read -r var val 00:06:23.684 22:51:55 -- accel/accel.sh@21 -- # val=0x1 00:06:23.684 22:51:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.684 22:51:55 -- accel/accel.sh@20 -- # IFS=: 00:06:23.684 22:51:55 -- accel/accel.sh@20 -- # read -r var val 00:06:23.684 22:51:55 -- accel/accel.sh@21 -- # val= 00:06:23.684 22:51:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.684 22:51:55 -- accel/accel.sh@20 -- # IFS=: 00:06:23.684 22:51:55 -- accel/accel.sh@20 -- # read -r var val 00:06:23.684 22:51:55 -- accel/accel.sh@21 -- # val= 00:06:23.684 22:51:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.684 22:51:55 -- accel/accel.sh@20 -- # IFS=: 00:06:23.684 22:51:55 -- accel/accel.sh@20 -- # read -r var val 00:06:23.684 22:51:55 -- accel/accel.sh@21 -- # val=xor 00:06:23.684 22:51:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.684 22:51:55 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:23.684 22:51:55 -- accel/accel.sh@20 -- # IFS=: 00:06:23.684 22:51:55 -- 
accel/accel.sh@20 -- # read -r var val 00:06:23.684 22:51:55 -- accel/accel.sh@21 -- # val=3 00:06:23.684 22:51:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.684 22:51:55 -- accel/accel.sh@20 -- # IFS=: 00:06:23.684 22:51:55 -- accel/accel.sh@20 -- # read -r var val 00:06:23.684 22:51:55 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:23.684 22:51:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.684 22:51:55 -- accel/accel.sh@20 -- # IFS=: 00:06:23.684 22:51:55 -- accel/accel.sh@20 -- # read -r var val 00:06:23.684 22:51:55 -- accel/accel.sh@21 -- # val= 00:06:23.684 22:51:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.684 22:51:55 -- accel/accel.sh@20 -- # IFS=: 00:06:23.684 22:51:55 -- accel/accel.sh@20 -- # read -r var val 00:06:23.684 22:51:55 -- accel/accel.sh@21 -- # val=software 00:06:23.684 22:51:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.684 22:51:55 -- accel/accel.sh@23 -- # accel_module=software 00:06:23.684 22:51:55 -- accel/accel.sh@20 -- # IFS=: 00:06:23.684 22:51:55 -- accel/accel.sh@20 -- # read -r var val 00:06:23.684 22:51:55 -- accel/accel.sh@21 -- # val=32 00:06:23.684 22:51:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.684 22:51:55 -- accel/accel.sh@20 -- # IFS=: 00:06:23.684 22:51:55 -- accel/accel.sh@20 -- # read -r var val 00:06:23.684 22:51:55 -- accel/accel.sh@21 -- # val=32 00:06:23.684 22:51:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.684 22:51:55 -- accel/accel.sh@20 -- # IFS=: 00:06:23.684 22:51:55 -- accel/accel.sh@20 -- # read -r var val 00:06:23.684 22:51:55 -- accel/accel.sh@21 -- # val=1 00:06:23.684 22:51:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.684 22:51:55 -- accel/accel.sh@20 -- # IFS=: 00:06:23.684 22:51:55 -- accel/accel.sh@20 -- # read -r var val 00:06:23.684 22:51:55 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:23.684 22:51:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.684 22:51:55 -- accel/accel.sh@20 -- # IFS=: 00:06:23.684 22:51:55 -- accel/accel.sh@20 -- 
# read -r var val 00:06:23.684 22:51:55 -- accel/accel.sh@21 -- # val=Yes 00:06:23.684 22:51:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.684 22:51:55 -- accel/accel.sh@20 -- # IFS=: 00:06:23.684 22:51:55 -- accel/accel.sh@20 -- # read -r var val 00:06:23.684 22:51:55 -- accel/accel.sh@21 -- # val= 00:06:23.684 22:51:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.684 22:51:55 -- accel/accel.sh@20 -- # IFS=: 00:06:23.684 22:51:55 -- accel/accel.sh@20 -- # read -r var val 00:06:23.684 22:51:55 -- accel/accel.sh@21 -- # val= 00:06:23.684 22:51:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.684 22:51:55 -- accel/accel.sh@20 -- # IFS=: 00:06:23.684 22:51:55 -- accel/accel.sh@20 -- # read -r var val 00:06:25.093 22:51:57 -- accel/accel.sh@21 -- # val= 00:06:25.093 22:51:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.093 22:51:57 -- accel/accel.sh@20 -- # IFS=: 00:06:25.093 22:51:57 -- accel/accel.sh@20 -- # read -r var val 00:06:25.093 22:51:57 -- accel/accel.sh@21 -- # val= 00:06:25.093 22:51:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.093 22:51:57 -- accel/accel.sh@20 -- # IFS=: 00:06:25.093 22:51:57 -- accel/accel.sh@20 -- # read -r var val 00:06:25.093 22:51:57 -- accel/accel.sh@21 -- # val= 00:06:25.093 22:51:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.093 22:51:57 -- accel/accel.sh@20 -- # IFS=: 00:06:25.093 22:51:57 -- accel/accel.sh@20 -- # read -r var val 00:06:25.093 22:51:57 -- accel/accel.sh@21 -- # val= 00:06:25.093 22:51:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.094 22:51:57 -- accel/accel.sh@20 -- # IFS=: 00:06:25.094 22:51:57 -- accel/accel.sh@20 -- # read -r var val 00:06:25.094 22:51:57 -- accel/accel.sh@21 -- # val= 00:06:25.094 22:51:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.094 22:51:57 -- accel/accel.sh@20 -- # IFS=: 00:06:25.094 22:51:57 -- accel/accel.sh@20 -- # read -r var val 00:06:25.094 22:51:57 -- accel/accel.sh@21 -- # val= 00:06:25.094 22:51:57 -- accel/accel.sh@22 -- # case 
"$var" in 00:06:25.094 22:51:57 -- accel/accel.sh@20 -- # IFS=: 00:06:25.094 22:51:57 -- accel/accel.sh@20 -- # read -r var val 00:06:25.094 22:51:57 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:25.094 22:51:57 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:25.094 22:51:57 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:25.094 00:06:25.094 real 0m2.605s 00:06:25.094 user 0m2.347s 00:06:25.094 sys 0m0.266s 00:06:25.094 22:51:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:25.094 22:51:57 -- common/autotest_common.sh@10 -- # set +x 00:06:25.094 ************************************ 00:06:25.094 END TEST accel_xor 00:06:25.094 ************************************ 00:06:25.094 22:51:57 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:25.094 22:51:57 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:06:25.094 22:51:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:25.094 22:51:57 -- common/autotest_common.sh@10 -- # set +x 00:06:25.094 ************************************ 00:06:25.094 START TEST accel_dif_verify 00:06:25.094 ************************************ 00:06:25.094 22:51:57 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_verify 00:06:25.094 22:51:57 -- accel/accel.sh@16 -- # local accel_opc 00:06:25.094 22:51:57 -- accel/accel.sh@17 -- # local accel_module 00:06:25.094 22:51:57 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:06:25.094 22:51:57 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:25.094 22:51:57 -- accel/accel.sh@12 -- # build_accel_config 00:06:25.094 22:51:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:25.094 22:51:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:25.094 22:51:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:25.094 22:51:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:25.094 22:51:57 -- accel/accel.sh@37 -- # [[ -n 
'' ]] 00:06:25.094 22:51:57 -- accel/accel.sh@41 -- # local IFS=, 00:06:25.094 22:51:57 -- accel/accel.sh@42 -- # jq -r . 00:06:25.094 [2024-07-24 22:51:57.182550] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:25.094 [2024-07-24 22:51:57.182620] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3040617 ] 00:06:25.094 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.094 [2024-07-24 22:51:57.254305] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.094 [2024-07-24 22:51:57.288851] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.032 22:51:58 -- accel/accel.sh@18 -- # out=' 00:06:26.032 SPDK Configuration: 00:06:26.032 Core mask: 0x1 00:06:26.032 00:06:26.032 Accel Perf Configuration: 00:06:26.032 Workload Type: dif_verify 00:06:26.032 Vector size: 4096 bytes 00:06:26.032 Transfer size: 4096 bytes 00:06:26.032 Block size: 512 bytes 00:06:26.032 Metadata size: 8 bytes 00:06:26.032 Vector count 1 00:06:26.032 Module: software 00:06:26.032 Queue depth: 32 00:06:26.032 Allocate depth: 32 00:06:26.032 # threads/core: 1 00:06:26.032 Run time: 1 seconds 00:06:26.032 Verify: No 00:06:26.033 00:06:26.033 Running for 1 seconds... 
00:06:26.033 00:06:26.033 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:26.033 ------------------------------------------------------------------------------------ 00:06:26.033 0,0 135232/s 536 MiB/s 0 0 00:06:26.033 ==================================================================================== 00:06:26.033 Total 135232/s 528 MiB/s 0 0' 00:06:26.033 22:51:58 -- accel/accel.sh@20 -- # IFS=: 00:06:26.033 22:51:58 -- accel/accel.sh@20 -- # read -r var val 00:06:26.033 22:51:58 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:26.033 22:51:58 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:26.033 22:51:58 -- accel/accel.sh@12 -- # build_accel_config 00:06:26.033 22:51:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:26.033 22:51:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:26.033 22:51:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:26.033 22:51:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:26.033 22:51:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:26.033 22:51:58 -- accel/accel.sh@41 -- # local IFS=, 00:06:26.033 22:51:58 -- accel/accel.sh@42 -- # jq -r . 00:06:26.293 [2024-07-24 22:51:58.479029] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:06:26.293 [2024-07-24 22:51:58.479098] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3040883 ] 00:06:26.293 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.293 [2024-07-24 22:51:58.548996] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.293 [2024-07-24 22:51:58.582810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.293 22:51:58 -- accel/accel.sh@21 -- # val= 00:06:26.293 22:51:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.293 22:51:58 -- accel/accel.sh@20 -- # IFS=: 00:06:26.293 22:51:58 -- accel/accel.sh@20 -- # read -r var val 00:06:26.293 22:51:58 -- accel/accel.sh@21 -- # val= 00:06:26.293 22:51:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.293 22:51:58 -- accel/accel.sh@20 -- # IFS=: 00:06:26.293 22:51:58 -- accel/accel.sh@20 -- # read -r var val 00:06:26.293 22:51:58 -- accel/accel.sh@21 -- # val=0x1 00:06:26.293 22:51:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.293 22:51:58 -- accel/accel.sh@20 -- # IFS=: 00:06:26.293 22:51:58 -- accel/accel.sh@20 -- # read -r var val 00:06:26.293 22:51:58 -- accel/accel.sh@21 -- # val= 00:06:26.293 22:51:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.293 22:51:58 -- accel/accel.sh@20 -- # IFS=: 00:06:26.293 22:51:58 -- accel/accel.sh@20 -- # read -r var val 00:06:26.293 22:51:58 -- accel/accel.sh@21 -- # val= 00:06:26.293 22:51:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.293 22:51:58 -- accel/accel.sh@20 -- # IFS=: 00:06:26.293 22:51:58 -- accel/accel.sh@20 -- # read -r var val 00:06:26.293 22:51:58 -- accel/accel.sh@21 -- # val=dif_verify 00:06:26.293 22:51:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.293 22:51:58 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:06:26.293 22:51:58 -- accel/accel.sh@20 -- # IFS=: 00:06:26.293 22:51:58 -- 
accel/accel.sh@20 -- # read -r var val 00:06:26.293 22:51:58 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:26.293 22:51:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.293 22:51:58 -- accel/accel.sh@20 -- # IFS=: 00:06:26.293 22:51:58 -- accel/accel.sh@20 -- # read -r var val 00:06:26.293 22:51:58 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:26.293 22:51:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.293 22:51:58 -- accel/accel.sh@20 -- # IFS=: 00:06:26.293 22:51:58 -- accel/accel.sh@20 -- # read -r var val 00:06:26.293 22:51:58 -- accel/accel.sh@21 -- # val='512 bytes' 00:06:26.293 22:51:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.293 22:51:58 -- accel/accel.sh@20 -- # IFS=: 00:06:26.293 22:51:58 -- accel/accel.sh@20 -- # read -r var val 00:06:26.293 22:51:58 -- accel/accel.sh@21 -- # val='8 bytes' 00:06:26.293 22:51:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.293 22:51:58 -- accel/accel.sh@20 -- # IFS=: 00:06:26.293 22:51:58 -- accel/accel.sh@20 -- # read -r var val 00:06:26.293 22:51:58 -- accel/accel.sh@21 -- # val= 00:06:26.293 22:51:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.293 22:51:58 -- accel/accel.sh@20 -- # IFS=: 00:06:26.293 22:51:58 -- accel/accel.sh@20 -- # read -r var val 00:06:26.293 22:51:58 -- accel/accel.sh@21 -- # val=software 00:06:26.293 22:51:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.293 22:51:58 -- accel/accel.sh@23 -- # accel_module=software 00:06:26.293 22:51:58 -- accel/accel.sh@20 -- # IFS=: 00:06:26.293 22:51:58 -- accel/accel.sh@20 -- # read -r var val 00:06:26.293 22:51:58 -- accel/accel.sh@21 -- # val=32 00:06:26.293 22:51:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.293 22:51:58 -- accel/accel.sh@20 -- # IFS=: 00:06:26.293 22:51:58 -- accel/accel.sh@20 -- # read -r var val 00:06:26.293 22:51:58 -- accel/accel.sh@21 -- # val=32 00:06:26.293 22:51:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.293 22:51:58 -- accel/accel.sh@20 -- # IFS=: 00:06:26.293 22:51:58 -- 
accel/accel.sh@20 -- # read -r var val 00:06:26.293 22:51:58 -- accel/accel.sh@21 -- # val=1 00:06:26.293 22:51:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.293 22:51:58 -- accel/accel.sh@20 -- # IFS=: 00:06:26.293 22:51:58 -- accel/accel.sh@20 -- # read -r var val 00:06:26.293 22:51:58 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:26.293 22:51:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.293 22:51:58 -- accel/accel.sh@20 -- # IFS=: 00:06:26.293 22:51:58 -- accel/accel.sh@20 -- # read -r var val 00:06:26.293 22:51:58 -- accel/accel.sh@21 -- # val=No 00:06:26.293 22:51:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.293 22:51:58 -- accel/accel.sh@20 -- # IFS=: 00:06:26.293 22:51:58 -- accel/accel.sh@20 -- # read -r var val 00:06:26.293 22:51:58 -- accel/accel.sh@21 -- # val= 00:06:26.293 22:51:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.293 22:51:58 -- accel/accel.sh@20 -- # IFS=: 00:06:26.293 22:51:58 -- accel/accel.sh@20 -- # read -r var val 00:06:26.293 22:51:58 -- accel/accel.sh@21 -- # val= 00:06:26.293 22:51:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.293 22:51:58 -- accel/accel.sh@20 -- # IFS=: 00:06:26.293 22:51:58 -- accel/accel.sh@20 -- # read -r var val 00:06:27.672 22:51:59 -- accel/accel.sh@21 -- # val= 00:06:27.672 22:51:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.672 22:51:59 -- accel/accel.sh@20 -- # IFS=: 00:06:27.672 22:51:59 -- accel/accel.sh@20 -- # read -r var val 00:06:27.672 22:51:59 -- accel/accel.sh@21 -- # val= 00:06:27.673 22:51:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.673 22:51:59 -- accel/accel.sh@20 -- # IFS=: 00:06:27.673 22:51:59 -- accel/accel.sh@20 -- # read -r var val 00:06:27.673 22:51:59 -- accel/accel.sh@21 -- # val= 00:06:27.673 22:51:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.673 22:51:59 -- accel/accel.sh@20 -- # IFS=: 00:06:27.673 22:51:59 -- accel/accel.sh@20 -- # read -r var val 00:06:27.673 22:51:59 -- accel/accel.sh@21 -- # val= 00:06:27.673 22:51:59 
-- accel/accel.sh@22 -- # case "$var" in 00:06:27.673 22:51:59 -- accel/accel.sh@20 -- # IFS=: 00:06:27.673 22:51:59 -- accel/accel.sh@20 -- # read -r var val 00:06:27.673 22:51:59 -- accel/accel.sh@21 -- # val= 00:06:27.673 22:51:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.673 22:51:59 -- accel/accel.sh@20 -- # IFS=: 00:06:27.673 22:51:59 -- accel/accel.sh@20 -- # read -r var val 00:06:27.673 22:51:59 -- accel/accel.sh@21 -- # val= 00:06:27.673 22:51:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.673 22:51:59 -- accel/accel.sh@20 -- # IFS=: 00:06:27.673 22:51:59 -- accel/accel.sh@20 -- # read -r var val 00:06:27.673 22:51:59 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:27.673 22:51:59 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:06:27.673 22:51:59 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:27.673 00:06:27.673 real 0m2.600s 00:06:27.673 user 0m2.357s 00:06:27.673 sys 0m0.253s 00:06:27.673 22:51:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.673 22:51:59 -- common/autotest_common.sh@10 -- # set +x 00:06:27.673 ************************************ 00:06:27.673 END TEST accel_dif_verify 00:06:27.673 ************************************ 00:06:27.673 22:51:59 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:27.673 22:51:59 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:06:27.673 22:51:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:27.673 22:51:59 -- common/autotest_common.sh@10 -- # set +x 00:06:27.673 ************************************ 00:06:27.673 START TEST accel_dif_generate 00:06:27.673 ************************************ 00:06:27.673 22:51:59 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate 00:06:27.673 22:51:59 -- accel/accel.sh@16 -- # local accel_opc 00:06:27.673 22:51:59 -- accel/accel.sh@17 -- # local accel_module 00:06:27.673 22:51:59 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 
00:06:27.673 22:51:59 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:27.673 22:51:59 -- accel/accel.sh@12 -- # build_accel_config 00:06:27.673 22:51:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:27.673 22:51:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:27.673 22:51:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:27.673 22:51:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:27.673 22:51:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:27.673 22:51:59 -- accel/accel.sh@41 -- # local IFS=, 00:06:27.673 22:51:59 -- accel/accel.sh@42 -- # jq -r . 00:06:27.673 [2024-07-24 22:51:59.825150] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:27.673 [2024-07-24 22:51:59.825237] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3041171 ] 00:06:27.673 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.673 [2024-07-24 22:51:59.894275] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.673 [2024-07-24 22:51:59.928917] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.052 22:52:01 -- accel/accel.sh@18 -- # out=' 00:06:29.052 SPDK Configuration: 00:06:29.052 Core mask: 0x1 00:06:29.052 00:06:29.052 Accel Perf Configuration: 00:06:29.052 Workload Type: dif_generate 00:06:29.052 Vector size: 4096 bytes 00:06:29.052 Transfer size: 4096 bytes 00:06:29.052 Block size: 512 bytes 00:06:29.052 Metadata size: 8 bytes 00:06:29.052 Vector count 1 00:06:29.052 Module: software 00:06:29.052 Queue depth: 32 00:06:29.052 Allocate depth: 32 00:06:29.052 # threads/core: 1 00:06:29.052 Run time: 1 seconds 00:06:29.052 Verify: No 00:06:29.052 00:06:29.052 Running for 1 seconds... 
00:06:29.052 00:06:29.052 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:29.052 ------------------------------------------------------------------------------------ 00:06:29.052 0,0 164768/s 653 MiB/s 0 0 00:06:29.052 ==================================================================================== 00:06:29.052 Total 164768/s 643 MiB/s 0 0' 00:06:29.052 22:52:01 -- accel/accel.sh@20 -- # IFS=: 00:06:29.052 22:52:01 -- accel/accel.sh@20 -- # read -r var val 00:06:29.052 22:52:01 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:29.052 22:52:01 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:29.052 22:52:01 -- accel/accel.sh@12 -- # build_accel_config 00:06:29.052 22:52:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:29.052 22:52:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.052 22:52:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.052 22:52:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:29.052 22:52:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:29.052 22:52:01 -- accel/accel.sh@41 -- # local IFS=, 00:06:29.052 22:52:01 -- accel/accel.sh@42 -- # jq -r . 00:06:29.052 [2024-07-24 22:52:01.119244] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:06:29.052 [2024-07-24 22:52:01.119312] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3041348 ] 00:06:29.052 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.052 [2024-07-24 22:52:01.189290] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.052 [2024-07-24 22:52:01.223182] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.052 22:52:01 -- accel/accel.sh@21 -- # val= 00:06:29.052 22:52:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.052 22:52:01 -- accel/accel.sh@20 -- # IFS=: 00:06:29.052 22:52:01 -- accel/accel.sh@20 -- # read -r var val 00:06:29.052 22:52:01 -- accel/accel.sh@21 -- # val= 00:06:29.052 22:52:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.052 22:52:01 -- accel/accel.sh@20 -- # IFS=: 00:06:29.052 22:52:01 -- accel/accel.sh@20 -- # read -r var val 00:06:29.052 22:52:01 -- accel/accel.sh@21 -- # val=0x1 00:06:29.052 22:52:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.052 22:52:01 -- accel/accel.sh@20 -- # IFS=: 00:06:29.052 22:52:01 -- accel/accel.sh@20 -- # read -r var val 00:06:29.052 22:52:01 -- accel/accel.sh@21 -- # val= 00:06:29.052 22:52:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.052 22:52:01 -- accel/accel.sh@20 -- # IFS=: 00:06:29.053 22:52:01 -- accel/accel.sh@20 -- # read -r var val 00:06:29.053 22:52:01 -- accel/accel.sh@21 -- # val= 00:06:29.053 22:52:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.053 22:52:01 -- accel/accel.sh@20 -- # IFS=: 00:06:29.053 22:52:01 -- accel/accel.sh@20 -- # read -r var val 00:06:29.053 22:52:01 -- accel/accel.sh@21 -- # val=dif_generate 00:06:29.053 22:52:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.053 22:52:01 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:06:29.053 22:52:01 -- accel/accel.sh@20 -- # IFS=: 00:06:29.053 22:52:01 
-- accel/accel.sh@20 -- # read -r var val 00:06:29.053 22:52:01 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:29.053 22:52:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.053 22:52:01 -- accel/accel.sh@20 -- # IFS=: 00:06:29.053 22:52:01 -- accel/accel.sh@20 -- # read -r var val 00:06:29.053 22:52:01 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:29.053 22:52:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.053 22:52:01 -- accel/accel.sh@20 -- # IFS=: 00:06:29.053 22:52:01 -- accel/accel.sh@20 -- # read -r var val 00:06:29.053 22:52:01 -- accel/accel.sh@21 -- # val='512 bytes' 00:06:29.053 22:52:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.053 22:52:01 -- accel/accel.sh@20 -- # IFS=: 00:06:29.053 22:52:01 -- accel/accel.sh@20 -- # read -r var val 00:06:29.053 22:52:01 -- accel/accel.sh@21 -- # val='8 bytes' 00:06:29.053 22:52:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.053 22:52:01 -- accel/accel.sh@20 -- # IFS=: 00:06:29.053 22:52:01 -- accel/accel.sh@20 -- # read -r var val 00:06:29.053 22:52:01 -- accel/accel.sh@21 -- # val= 00:06:29.053 22:52:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.053 22:52:01 -- accel/accel.sh@20 -- # IFS=: 00:06:29.053 22:52:01 -- accel/accel.sh@20 -- # read -r var val 00:06:29.053 22:52:01 -- accel/accel.sh@21 -- # val=software 00:06:29.053 22:52:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.053 22:52:01 -- accel/accel.sh@23 -- # accel_module=software 00:06:29.053 22:52:01 -- accel/accel.sh@20 -- # IFS=: 00:06:29.053 22:52:01 -- accel/accel.sh@20 -- # read -r var val 00:06:29.053 22:52:01 -- accel/accel.sh@21 -- # val=32 00:06:29.053 22:52:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.053 22:52:01 -- accel/accel.sh@20 -- # IFS=: 00:06:29.053 22:52:01 -- accel/accel.sh@20 -- # read -r var val 00:06:29.053 22:52:01 -- accel/accel.sh@21 -- # val=32 00:06:29.053 22:52:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.053 22:52:01 -- accel/accel.sh@20 -- # IFS=: 00:06:29.053 22:52:01 
-- accel/accel.sh@20 -- # read -r var val 00:06:29.053 22:52:01 -- accel/accel.sh@21 -- # val=1 00:06:29.053 22:52:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.053 22:52:01 -- accel/accel.sh@20 -- # IFS=: 00:06:29.053 22:52:01 -- accel/accel.sh@20 -- # read -r var val 00:06:29.053 22:52:01 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:29.053 22:52:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.053 22:52:01 -- accel/accel.sh@20 -- # IFS=: 00:06:29.053 22:52:01 -- accel/accel.sh@20 -- # read -r var val 00:06:29.053 22:52:01 -- accel/accel.sh@21 -- # val=No 00:06:29.053 22:52:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.053 22:52:01 -- accel/accel.sh@20 -- # IFS=: 00:06:29.053 22:52:01 -- accel/accel.sh@20 -- # read -r var val 00:06:29.053 22:52:01 -- accel/accel.sh@21 -- # val= 00:06:29.053 22:52:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.053 22:52:01 -- accel/accel.sh@20 -- # IFS=: 00:06:29.053 22:52:01 -- accel/accel.sh@20 -- # read -r var val 00:06:29.053 22:52:01 -- accel/accel.sh@21 -- # val= 00:06:29.053 22:52:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.053 22:52:01 -- accel/accel.sh@20 -- # IFS=: 00:06:29.053 22:52:01 -- accel/accel.sh@20 -- # read -r var val 00:06:29.991 22:52:02 -- accel/accel.sh@21 -- # val= 00:06:29.991 22:52:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.991 22:52:02 -- accel/accel.sh@20 -- # IFS=: 00:06:29.991 22:52:02 -- accel/accel.sh@20 -- # read -r var val 00:06:29.991 22:52:02 -- accel/accel.sh@21 -- # val= 00:06:29.991 22:52:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.991 22:52:02 -- accel/accel.sh@20 -- # IFS=: 00:06:29.991 22:52:02 -- accel/accel.sh@20 -- # read -r var val 00:06:29.991 22:52:02 -- accel/accel.sh@21 -- # val= 00:06:29.991 22:52:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.991 22:52:02 -- accel/accel.sh@20 -- # IFS=: 00:06:29.991 22:52:02 -- accel/accel.sh@20 -- # read -r var val 00:06:29.991 22:52:02 -- accel/accel.sh@21 -- # val= 00:06:29.991 
22:52:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.991 22:52:02 -- accel/accel.sh@20 -- # IFS=: 00:06:29.991 22:52:02 -- accel/accel.sh@20 -- # read -r var val 00:06:29.991 22:52:02 -- accel/accel.sh@21 -- # val= 00:06:29.991 22:52:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.991 22:52:02 -- accel/accel.sh@20 -- # IFS=: 00:06:29.991 22:52:02 -- accel/accel.sh@20 -- # read -r var val 00:06:29.991 22:52:02 -- accel/accel.sh@21 -- # val= 00:06:29.991 22:52:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.991 22:52:02 -- accel/accel.sh@20 -- # IFS=: 00:06:29.991 22:52:02 -- accel/accel.sh@20 -- # read -r var val 00:06:29.991 22:52:02 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:29.991 22:52:02 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:06:29.991 22:52:02 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:29.991 00:06:29.991 real 0m2.596s 00:06:29.991 user 0m2.346s 00:06:29.991 sys 0m0.260s 00:06:29.991 22:52:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:29.991 22:52:02 -- common/autotest_common.sh@10 -- # set +x 00:06:29.991 ************************************ 00:06:29.992 END TEST accel_dif_generate 00:06:29.992 ************************************ 00:06:30.251 22:52:02 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:30.251 22:52:02 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:06:30.251 22:52:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:30.251 22:52:02 -- common/autotest_common.sh@10 -- # set +x 00:06:30.251 ************************************ 00:06:30.251 START TEST accel_dif_generate_copy 00:06:30.251 ************************************ 00:06:30.251 22:52:02 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate_copy 00:06:30.251 22:52:02 -- accel/accel.sh@16 -- # local accel_opc 00:06:30.251 22:52:02 -- accel/accel.sh@17 -- # local accel_module 00:06:30.251 22:52:02 -- accel/accel.sh@18 -- # 
accel_perf -t 1 -w dif_generate_copy 00:06:30.251 22:52:02 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:30.251 22:52:02 -- accel/accel.sh@12 -- # build_accel_config 00:06:30.251 22:52:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:30.251 22:52:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:30.251 22:52:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:30.251 22:52:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:30.251 22:52:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:30.251 22:52:02 -- accel/accel.sh@41 -- # local IFS=, 00:06:30.251 22:52:02 -- accel/accel.sh@42 -- # jq -r . 00:06:30.251 [2024-07-24 22:52:02.472582] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:30.251 [2024-07-24 22:52:02.472670] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3041535 ] 00:06:30.251 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.251 [2024-07-24 22:52:02.542813] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.251 [2024-07-24 22:52:02.577484] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.630 22:52:03 -- accel/accel.sh@18 -- # out=' 00:06:31.630 SPDK Configuration: 00:06:31.630 Core mask: 0x1 00:06:31.630 00:06:31.630 Accel Perf Configuration: 00:06:31.630 Workload Type: dif_generate_copy 00:06:31.630 Vector size: 4096 bytes 00:06:31.630 Transfer size: 4096 bytes 00:06:31.630 Vector count 1 00:06:31.630 Module: software 00:06:31.630 Queue depth: 32 00:06:31.630 Allocate depth: 32 00:06:31.630 # threads/core: 1 00:06:31.630 Run time: 1 seconds 00:06:31.630 Verify: No 00:06:31.630 00:06:31.630 Running for 1 seconds... 
00:06:31.630 00:06:31.630 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:31.630 ------------------------------------------------------------------------------------ 00:06:31.630 0,0 130944/s 519 MiB/s 0 0 00:06:31.630 ==================================================================================== 00:06:31.630 Total 130944/s 511 MiB/s 0 0' 00:06:31.630 22:52:03 -- accel/accel.sh@20 -- # IFS=: 00:06:31.630 22:52:03 -- accel/accel.sh@20 -- # read -r var val 00:06:31.630 22:52:03 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:31.630 22:52:03 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:31.630 22:52:03 -- accel/accel.sh@12 -- # build_accel_config 00:06:31.630 22:52:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:31.630 22:52:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:31.630 22:52:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:31.630 22:52:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:31.630 22:52:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:31.630 22:52:03 -- accel/accel.sh@41 -- # local IFS=, 00:06:31.630 22:52:03 -- accel/accel.sh@42 -- # jq -r . 00:06:31.630 [2024-07-24 22:52:03.769343] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:06:31.630 [2024-07-24 22:52:03.769411] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3041739 ] 00:06:31.630 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.630 [2024-07-24 22:52:03.838667] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.630 [2024-07-24 22:52:03.872623] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.630 22:52:03 -- accel/accel.sh@21 -- # val= 00:06:31.630 22:52:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.630 22:52:03 -- accel/accel.sh@20 -- # IFS=: 00:06:31.630 22:52:03 -- accel/accel.sh@20 -- # read -r var val 00:06:31.630 22:52:03 -- accel/accel.sh@21 -- # val= 00:06:31.630 22:52:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.630 22:52:03 -- accel/accel.sh@20 -- # IFS=: 00:06:31.630 22:52:03 -- accel/accel.sh@20 -- # read -r var val 00:06:31.630 22:52:03 -- accel/accel.sh@21 -- # val=0x1 00:06:31.630 22:52:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.630 22:52:03 -- accel/accel.sh@20 -- # IFS=: 00:06:31.630 22:52:03 -- accel/accel.sh@20 -- # read -r var val 00:06:31.630 22:52:03 -- accel/accel.sh@21 -- # val= 00:06:31.630 22:52:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.630 22:52:03 -- accel/accel.sh@20 -- # IFS=: 00:06:31.630 22:52:03 -- accel/accel.sh@20 -- # read -r var val 00:06:31.630 22:52:03 -- accel/accel.sh@21 -- # val= 00:06:31.630 22:52:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.630 22:52:03 -- accel/accel.sh@20 -- # IFS=: 00:06:31.630 22:52:03 -- accel/accel.sh@20 -- # read -r var val 00:06:31.630 22:52:03 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:06:31.630 22:52:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.630 22:52:03 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:06:31.630 22:52:03 -- accel/accel.sh@20 -- # IFS=: 00:06:31.630 
22:52:03 -- accel/accel.sh@20 -- # read -r var val 00:06:31.630 22:52:03 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:31.630 22:52:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.630 22:52:03 -- accel/accel.sh@20 -- # IFS=: 00:06:31.630 22:52:03 -- accel/accel.sh@20 -- # read -r var val 00:06:31.630 22:52:03 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:31.630 22:52:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.630 22:52:03 -- accel/accel.sh@20 -- # IFS=: 00:06:31.630 22:52:03 -- accel/accel.sh@20 -- # read -r var val 00:06:31.630 22:52:03 -- accel/accel.sh@21 -- # val= 00:06:31.630 22:52:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.630 22:52:03 -- accel/accel.sh@20 -- # IFS=: 00:06:31.630 22:52:03 -- accel/accel.sh@20 -- # read -r var val 00:06:31.630 22:52:03 -- accel/accel.sh@21 -- # val=software 00:06:31.630 22:52:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.630 22:52:03 -- accel/accel.sh@23 -- # accel_module=software 00:06:31.630 22:52:03 -- accel/accel.sh@20 -- # IFS=: 00:06:31.630 22:52:03 -- accel/accel.sh@20 -- # read -r var val 00:06:31.630 22:52:03 -- accel/accel.sh@21 -- # val=32 00:06:31.630 22:52:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.630 22:52:03 -- accel/accel.sh@20 -- # IFS=: 00:06:31.630 22:52:03 -- accel/accel.sh@20 -- # read -r var val 00:06:31.630 22:52:03 -- accel/accel.sh@21 -- # val=32 00:06:31.630 22:52:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.630 22:52:03 -- accel/accel.sh@20 -- # IFS=: 00:06:31.630 22:52:03 -- accel/accel.sh@20 -- # read -r var val 00:06:31.630 22:52:03 -- accel/accel.sh@21 -- # val=1 00:06:31.630 22:52:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.630 22:52:03 -- accel/accel.sh@20 -- # IFS=: 00:06:31.630 22:52:03 -- accel/accel.sh@20 -- # read -r var val 00:06:31.630 22:52:03 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:31.630 22:52:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.630 22:52:03 -- accel/accel.sh@20 -- # IFS=: 00:06:31.630 22:52:03 
-- accel/accel.sh@20 -- # read -r var val 00:06:31.630 22:52:03 -- accel/accel.sh@21 -- # val=No 00:06:31.630 22:52:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.630 22:52:03 -- accel/accel.sh@20 -- # IFS=: 00:06:31.630 22:52:03 -- accel/accel.sh@20 -- # read -r var val 00:06:31.630 22:52:03 -- accel/accel.sh@21 -- # val= 00:06:31.630 22:52:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.630 22:52:03 -- accel/accel.sh@20 -- # IFS=: 00:06:31.630 22:52:03 -- accel/accel.sh@20 -- # read -r var val 00:06:31.630 22:52:03 -- accel/accel.sh@21 -- # val= 00:06:31.630 22:52:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.630 22:52:03 -- accel/accel.sh@20 -- # IFS=: 00:06:31.630 22:52:03 -- accel/accel.sh@20 -- # read -r var val 00:06:33.010 22:52:05 -- accel/accel.sh@21 -- # val= 00:06:33.010 22:52:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.010 22:52:05 -- accel/accel.sh@20 -- # IFS=: 00:06:33.010 22:52:05 -- accel/accel.sh@20 -- # read -r var val 00:06:33.010 22:52:05 -- accel/accel.sh@21 -- # val= 00:06:33.010 22:52:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.010 22:52:05 -- accel/accel.sh@20 -- # IFS=: 00:06:33.010 22:52:05 -- accel/accel.sh@20 -- # read -r var val 00:06:33.010 22:52:05 -- accel/accel.sh@21 -- # val= 00:06:33.010 22:52:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.010 22:52:05 -- accel/accel.sh@20 -- # IFS=: 00:06:33.010 22:52:05 -- accel/accel.sh@20 -- # read -r var val 00:06:33.010 22:52:05 -- accel/accel.sh@21 -- # val= 00:06:33.010 22:52:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.010 22:52:05 -- accel/accel.sh@20 -- # IFS=: 00:06:33.010 22:52:05 -- accel/accel.sh@20 -- # read -r var val 00:06:33.010 22:52:05 -- accel/accel.sh@21 -- # val= 00:06:33.010 22:52:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.010 22:52:05 -- accel/accel.sh@20 -- # IFS=: 00:06:33.010 22:52:05 -- accel/accel.sh@20 -- # read -r var val 00:06:33.010 22:52:05 -- accel/accel.sh@21 -- # val= 00:06:33.010 22:52:05 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:33.010 22:52:05 -- accel/accel.sh@20 -- # IFS=: 00:06:33.010 22:52:05 -- accel/accel.sh@20 -- # read -r var val 00:06:33.010 22:52:05 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:33.010 22:52:05 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:06:33.010 22:52:05 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:33.010 00:06:33.010 real 0m2.599s 00:06:33.010 user 0m2.359s 00:06:33.010 sys 0m0.248s 00:06:33.010 22:52:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:33.010 22:52:05 -- common/autotest_common.sh@10 -- # set +x 00:06:33.010 ************************************ 00:06:33.010 END TEST accel_dif_generate_copy 00:06:33.010 ************************************ 00:06:33.010 22:52:05 -- accel/accel.sh@107 -- # [[ y == y ]] 00:06:33.010 22:52:05 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:33.010 22:52:05 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:06:33.010 22:52:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:33.010 22:52:05 -- common/autotest_common.sh@10 -- # set +x 00:06:33.010 ************************************ 00:06:33.010 START TEST accel_comp 00:06:33.010 ************************************ 00:06:33.010 22:52:05 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:33.010 22:52:05 -- accel/accel.sh@16 -- # local accel_opc 00:06:33.010 22:52:05 -- accel/accel.sh@17 -- # local accel_module 00:06:33.010 22:52:05 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:33.010 22:52:05 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 
00:06:33.010 22:52:05 -- accel/accel.sh@12 -- # build_accel_config 00:06:33.010 22:52:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:33.010 22:52:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:33.010 22:52:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:33.010 22:52:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:33.010 22:52:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:33.010 22:52:05 -- accel/accel.sh@41 -- # local IFS=, 00:06:33.010 22:52:05 -- accel/accel.sh@42 -- # jq -r . 00:06:33.010 [2024-07-24 22:52:05.116360] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:33.010 [2024-07-24 22:52:05.116427] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3042023 ] 00:06:33.010 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.010 [2024-07-24 22:52:05.185255] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.010 [2024-07-24 22:52:05.219957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.390 22:52:06 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:34.390 00:06:34.390 SPDK Configuration: 00:06:34.390 Core mask: 0x1 00:06:34.390 00:06:34.390 Accel Perf Configuration: 00:06:34.390 Workload Type: compress 00:06:34.390 Transfer size: 4096 bytes 00:06:34.390 Vector count 1 00:06:34.390 Module: software 00:06:34.390 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:34.390 Queue depth: 32 00:06:34.390 Allocate depth: 32 00:06:34.390 # threads/core: 1 00:06:34.390 Run time: 1 seconds 00:06:34.390 Verify: No 00:06:34.390 00:06:34.390 Running for 1 seconds... 
00:06:34.390 00:06:34.390 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:34.390 ------------------------------------------------------------------------------------ 00:06:34.390 0,0 65472/s 272 MiB/s 0 0 00:06:34.390 ==================================================================================== 00:06:34.390 Total 65472/s 255 MiB/s 0 0' 00:06:34.390 22:52:06 -- accel/accel.sh@20 -- # IFS=: 00:06:34.390 22:52:06 -- accel/accel.sh@20 -- # read -r var val 00:06:34.390 22:52:06 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:34.390 22:52:06 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:34.390 22:52:06 -- accel/accel.sh@12 -- # build_accel_config 00:06:34.390 22:52:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:34.390 22:52:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.390 22:52:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.390 22:52:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:34.390 22:52:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:34.390 22:52:06 -- accel/accel.sh@41 -- # local IFS=, 00:06:34.390 22:52:06 -- accel/accel.sh@42 -- # jq -r . 00:06:34.390 [2024-07-24 22:52:06.414433] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:06:34.390 [2024-07-24 22:52:06.414516] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3042287 ] 00:06:34.390 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.390 [2024-07-24 22:52:06.484610] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.390 [2024-07-24 22:52:06.518327] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.390 22:52:06 -- accel/accel.sh@21 -- # val= 00:06:34.390 22:52:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.390 22:52:06 -- accel/accel.sh@20 -- # IFS=: 00:06:34.390 22:52:06 -- accel/accel.sh@20 -- # read -r var val 00:06:34.390 22:52:06 -- accel/accel.sh@21 -- # val= 00:06:34.390 22:52:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.390 22:52:06 -- accel/accel.sh@20 -- # IFS=: 00:06:34.390 22:52:06 -- accel/accel.sh@20 -- # read -r var val 00:06:34.390 22:52:06 -- accel/accel.sh@21 -- # val= 00:06:34.390 22:52:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.390 22:52:06 -- accel/accel.sh@20 -- # IFS=: 00:06:34.390 22:52:06 -- accel/accel.sh@20 -- # read -r var val 00:06:34.390 22:52:06 -- accel/accel.sh@21 -- # val=0x1 00:06:34.390 22:52:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.390 22:52:06 -- accel/accel.sh@20 -- # IFS=: 00:06:34.390 22:52:06 -- accel/accel.sh@20 -- # read -r var val 00:06:34.390 22:52:06 -- accel/accel.sh@21 -- # val= 00:06:34.390 22:52:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.390 22:52:06 -- accel/accel.sh@20 -- # IFS=: 00:06:34.390 22:52:06 -- accel/accel.sh@20 -- # read -r var val 00:06:34.390 22:52:06 -- accel/accel.sh@21 -- # val= 00:06:34.390 22:52:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.390 22:52:06 -- accel/accel.sh@20 -- # IFS=: 00:06:34.390 22:52:06 -- accel/accel.sh@20 -- # read -r var val 00:06:34.390 22:52:06 -- accel/accel.sh@21 
-- # val=compress 00:06:34.390 22:52:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.390 22:52:06 -- accel/accel.sh@24 -- # accel_opc=compress 00:06:34.390 22:52:06 -- accel/accel.sh@20 -- # IFS=: 00:06:34.390 22:52:06 -- accel/accel.sh@20 -- # read -r var val 00:06:34.390 22:52:06 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:34.390 22:52:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.390 22:52:06 -- accel/accel.sh@20 -- # IFS=: 00:06:34.390 22:52:06 -- accel/accel.sh@20 -- # read -r var val 00:06:34.390 22:52:06 -- accel/accel.sh@21 -- # val= 00:06:34.390 22:52:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.390 22:52:06 -- accel/accel.sh@20 -- # IFS=: 00:06:34.390 22:52:06 -- accel/accel.sh@20 -- # read -r var val 00:06:34.390 22:52:06 -- accel/accel.sh@21 -- # val=software 00:06:34.390 22:52:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.390 22:52:06 -- accel/accel.sh@23 -- # accel_module=software 00:06:34.391 22:52:06 -- accel/accel.sh@20 -- # IFS=: 00:06:34.391 22:52:06 -- accel/accel.sh@20 -- # read -r var val 00:06:34.391 22:52:06 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:34.391 22:52:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.391 22:52:06 -- accel/accel.sh@20 -- # IFS=: 00:06:34.391 22:52:06 -- accel/accel.sh@20 -- # read -r var val 00:06:34.391 22:52:06 -- accel/accel.sh@21 -- # val=32 00:06:34.391 22:52:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.391 22:52:06 -- accel/accel.sh@20 -- # IFS=: 00:06:34.391 22:52:06 -- accel/accel.sh@20 -- # read -r var val 00:06:34.391 22:52:06 -- accel/accel.sh@21 -- # val=32 00:06:34.391 22:52:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.391 22:52:06 -- accel/accel.sh@20 -- # IFS=: 00:06:34.391 22:52:06 -- accel/accel.sh@20 -- # read -r var val 00:06:34.391 22:52:06 -- accel/accel.sh@21 -- # val=1 00:06:34.391 22:52:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.391 22:52:06 -- accel/accel.sh@20 -- # IFS=: 
00:06:34.391 22:52:06 -- accel/accel.sh@20 -- # read -r var val 00:06:34.391 22:52:06 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:34.391 22:52:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.391 22:52:06 -- accel/accel.sh@20 -- # IFS=: 00:06:34.391 22:52:06 -- accel/accel.sh@20 -- # read -r var val 00:06:34.391 22:52:06 -- accel/accel.sh@21 -- # val=No 00:06:34.391 22:52:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.391 22:52:06 -- accel/accel.sh@20 -- # IFS=: 00:06:34.391 22:52:06 -- accel/accel.sh@20 -- # read -r var val 00:06:34.391 22:52:06 -- accel/accel.sh@21 -- # val= 00:06:34.391 22:52:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.391 22:52:06 -- accel/accel.sh@20 -- # IFS=: 00:06:34.391 22:52:06 -- accel/accel.sh@20 -- # read -r var val 00:06:34.391 22:52:06 -- accel/accel.sh@21 -- # val= 00:06:34.391 22:52:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.391 22:52:06 -- accel/accel.sh@20 -- # IFS=: 00:06:34.391 22:52:06 -- accel/accel.sh@20 -- # read -r var val 00:06:35.369 22:52:07 -- accel/accel.sh@21 -- # val= 00:06:35.369 22:52:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.369 22:52:07 -- accel/accel.sh@20 -- # IFS=: 00:06:35.369 22:52:07 -- accel/accel.sh@20 -- # read -r var val 00:06:35.369 22:52:07 -- accel/accel.sh@21 -- # val= 00:06:35.369 22:52:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.369 22:52:07 -- accel/accel.sh@20 -- # IFS=: 00:06:35.369 22:52:07 -- accel/accel.sh@20 -- # read -r var val 00:06:35.369 22:52:07 -- accel/accel.sh@21 -- # val= 00:06:35.369 22:52:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.369 22:52:07 -- accel/accel.sh@20 -- # IFS=: 00:06:35.369 22:52:07 -- accel/accel.sh@20 -- # read -r var val 00:06:35.369 22:52:07 -- accel/accel.sh@21 -- # val= 00:06:35.369 22:52:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.369 22:52:07 -- accel/accel.sh@20 -- # IFS=: 00:06:35.369 22:52:07 -- accel/accel.sh@20 -- # read -r var val 00:06:35.369 22:52:07 -- accel/accel.sh@21 -- # 
val= 00:06:35.369 22:52:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.369 22:52:07 -- accel/accel.sh@20 -- # IFS=: 00:06:35.369 22:52:07 -- accel/accel.sh@20 -- # read -r var val 00:06:35.369 22:52:07 -- accel/accel.sh@21 -- # val= 00:06:35.369 22:52:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.369 22:52:07 -- accel/accel.sh@20 -- # IFS=: 00:06:35.369 22:52:07 -- accel/accel.sh@20 -- # read -r var val 00:06:35.369 22:52:07 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:35.369 22:52:07 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:06:35.369 22:52:07 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:35.369 00:06:35.369 real 0m2.602s 00:06:35.369 user 0m2.341s 00:06:35.369 sys 0m0.271s 00:06:35.369 22:52:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:35.369 22:52:07 -- common/autotest_common.sh@10 -- # set +x 00:06:35.369 ************************************ 00:06:35.369 END TEST accel_comp 00:06:35.369 ************************************ 00:06:35.369 22:52:07 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:35.369 22:52:07 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:35.369 22:52:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:35.369 22:52:07 -- common/autotest_common.sh@10 -- # set +x 00:06:35.369 ************************************ 00:06:35.369 START TEST accel_decomp 00:06:35.369 ************************************ 00:06:35.369 22:52:07 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:35.369 22:52:07 -- accel/accel.sh@16 -- # local accel_opc 00:06:35.369 22:52:07 -- accel/accel.sh@17 -- # local accel_module 00:06:35.369 22:52:07 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:35.369 22:52:07 -- 
accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:35.369 22:52:07 -- accel/accel.sh@12 -- # build_accel_config 00:06:35.369 22:52:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:35.369 22:52:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:35.369 22:52:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:35.369 22:52:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:35.369 22:52:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:35.369 22:52:07 -- accel/accel.sh@41 -- # local IFS=, 00:06:35.369 22:52:07 -- accel/accel.sh@42 -- # jq -r . 00:06:35.369 [2024-07-24 22:52:07.765511] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:35.369 [2024-07-24 22:52:07.765582] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3042568 ] 00:06:35.629 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.629 [2024-07-24 22:52:07.835346] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.629 [2024-07-24 22:52:07.870091] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.009 22:52:09 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:37.009 00:06:37.009 SPDK Configuration: 00:06:37.009 Core mask: 0x1 00:06:37.009 00:06:37.009 Accel Perf Configuration: 00:06:37.009 Workload Type: decompress 00:06:37.009 Transfer size: 4096 bytes 00:06:37.009 Vector count 1 00:06:37.009 Module: software 00:06:37.009 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:37.009 Queue depth: 32 00:06:37.009 Allocate depth: 32 00:06:37.009 # threads/core: 1 00:06:37.009 Run time: 1 seconds 00:06:37.009 Verify: Yes 00:06:37.009 00:06:37.009 Running for 1 seconds... 
00:06:37.009 00:06:37.009 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:37.009 ------------------------------------------------------------------------------------ 00:06:37.009 0,0 87616/s 161 MiB/s 0 0 00:06:37.009 ==================================================================================== 00:06:37.009 Total 87616/s 342 MiB/s 0 0' 00:06:37.009 22:52:09 -- accel/accel.sh@20 -- # IFS=: 00:06:37.009 22:52:09 -- accel/accel.sh@20 -- # read -r var val 00:06:37.009 22:52:09 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:37.009 22:52:09 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:37.009 22:52:09 -- accel/accel.sh@12 -- # build_accel_config 00:06:37.009 22:52:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:37.009 22:52:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.009 22:52:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.009 22:52:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:37.009 22:52:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:37.009 22:52:09 -- accel/accel.sh@41 -- # local IFS=, 00:06:37.009 22:52:09 -- accel/accel.sh@42 -- # jq -r . 00:06:37.009 [2024-07-24 22:52:09.065697] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:06:37.009 [2024-07-24 22:52:09.065770] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3042840 ] 00:06:37.009 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.009 [2024-07-24 22:52:09.136887] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.009 [2024-07-24 22:52:09.170652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.009 22:52:09 -- accel/accel.sh@21 -- # val= 00:06:37.009 22:52:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.009 22:52:09 -- accel/accel.sh@20 -- # IFS=: 00:06:37.009 22:52:09 -- accel/accel.sh@20 -- # read -r var val 00:06:37.009 22:52:09 -- accel/accel.sh@21 -- # val= 00:06:37.009 22:52:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.010 22:52:09 -- accel/accel.sh@20 -- # IFS=: 00:06:37.010 22:52:09 -- accel/accel.sh@20 -- # read -r var val 00:06:37.010 22:52:09 -- accel/accel.sh@21 -- # val= 00:06:37.010 22:52:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.010 22:52:09 -- accel/accel.sh@20 -- # IFS=: 00:06:37.010 22:52:09 -- accel/accel.sh@20 -- # read -r var val 00:06:37.010 22:52:09 -- accel/accel.sh@21 -- # val=0x1 00:06:37.010 22:52:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.010 22:52:09 -- accel/accel.sh@20 -- # IFS=: 00:06:37.010 22:52:09 -- accel/accel.sh@20 -- # read -r var val 00:06:37.010 22:52:09 -- accel/accel.sh@21 -- # val= 00:06:37.010 22:52:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.010 22:52:09 -- accel/accel.sh@20 -- # IFS=: 00:06:37.010 22:52:09 -- accel/accel.sh@20 -- # read -r var val 00:06:37.010 22:52:09 -- accel/accel.sh@21 -- # val= 00:06:37.010 22:52:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.010 22:52:09 -- accel/accel.sh@20 -- # IFS=: 00:06:37.010 22:52:09 -- accel/accel.sh@20 -- # read -r var val 00:06:37.010 22:52:09 -- accel/accel.sh@21 
-- # val=decompress 00:06:37.010 22:52:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.010 22:52:09 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:37.010 22:52:09 -- accel/accel.sh@20 -- # IFS=: 00:06:37.010 22:52:09 -- accel/accel.sh@20 -- # read -r var val 00:06:37.010 22:52:09 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:37.010 22:52:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.010 22:52:09 -- accel/accel.sh@20 -- # IFS=: 00:06:37.010 22:52:09 -- accel/accel.sh@20 -- # read -r var val 00:06:37.010 22:52:09 -- accel/accel.sh@21 -- # val= 00:06:37.010 22:52:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.010 22:52:09 -- accel/accel.sh@20 -- # IFS=: 00:06:37.010 22:52:09 -- accel/accel.sh@20 -- # read -r var val 00:06:37.010 22:52:09 -- accel/accel.sh@21 -- # val=software 00:06:37.010 22:52:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.010 22:52:09 -- accel/accel.sh@23 -- # accel_module=software 00:06:37.010 22:52:09 -- accel/accel.sh@20 -- # IFS=: 00:06:37.010 22:52:09 -- accel/accel.sh@20 -- # read -r var val 00:06:37.010 22:52:09 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:37.010 22:52:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.010 22:52:09 -- accel/accel.sh@20 -- # IFS=: 00:06:37.010 22:52:09 -- accel/accel.sh@20 -- # read -r var val 00:06:37.010 22:52:09 -- accel/accel.sh@21 -- # val=32 00:06:37.010 22:52:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.010 22:52:09 -- accel/accel.sh@20 -- # IFS=: 00:06:37.010 22:52:09 -- accel/accel.sh@20 -- # read -r var val 00:06:37.010 22:52:09 -- accel/accel.sh@21 -- # val=32 00:06:37.010 22:52:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.010 22:52:09 -- accel/accel.sh@20 -- # IFS=: 00:06:37.010 22:52:09 -- accel/accel.sh@20 -- # read -r var val 00:06:37.010 22:52:09 -- accel/accel.sh@21 -- # val=1 00:06:37.010 22:52:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.010 22:52:09 -- accel/accel.sh@20 -- # 
IFS=: 00:06:37.010 22:52:09 -- accel/accel.sh@20 -- # read -r var val 00:06:37.010 22:52:09 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:37.010 22:52:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.010 22:52:09 -- accel/accel.sh@20 -- # IFS=: 00:06:37.010 22:52:09 -- accel/accel.sh@20 -- # read -r var val 00:06:37.010 22:52:09 -- accel/accel.sh@21 -- # val=Yes 00:06:37.010 22:52:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.010 22:52:09 -- accel/accel.sh@20 -- # IFS=: 00:06:37.010 22:52:09 -- accel/accel.sh@20 -- # read -r var val 00:06:37.010 22:52:09 -- accel/accel.sh@21 -- # val= 00:06:37.010 22:52:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.010 22:52:09 -- accel/accel.sh@20 -- # IFS=: 00:06:37.010 22:52:09 -- accel/accel.sh@20 -- # read -r var val 00:06:37.010 22:52:09 -- accel/accel.sh@21 -- # val= 00:06:37.010 22:52:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.010 22:52:09 -- accel/accel.sh@20 -- # IFS=: 00:06:37.010 22:52:09 -- accel/accel.sh@20 -- # read -r var val 00:06:37.949 22:52:10 -- accel/accel.sh@21 -- # val= 00:06:37.949 22:52:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.949 22:52:10 -- accel/accel.sh@20 -- # IFS=: 00:06:37.949 22:52:10 -- accel/accel.sh@20 -- # read -r var val 00:06:37.949 22:52:10 -- accel/accel.sh@21 -- # val= 00:06:37.949 22:52:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.949 22:52:10 -- accel/accel.sh@20 -- # IFS=: 00:06:37.949 22:52:10 -- accel/accel.sh@20 -- # read -r var val 00:06:37.949 22:52:10 -- accel/accel.sh@21 -- # val= 00:06:37.949 22:52:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.949 22:52:10 -- accel/accel.sh@20 -- # IFS=: 00:06:37.949 22:52:10 -- accel/accel.sh@20 -- # read -r var val 00:06:37.949 22:52:10 -- accel/accel.sh@21 -- # val= 00:06:37.949 22:52:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.949 22:52:10 -- accel/accel.sh@20 -- # IFS=: 00:06:37.949 22:52:10 -- accel/accel.sh@20 -- # read -r var val 00:06:37.949 22:52:10 -- accel/accel.sh@21 
-- # val= 00:06:37.949 22:52:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.949 22:52:10 -- accel/accel.sh@20 -- # IFS=: 00:06:37.949 22:52:10 -- accel/accel.sh@20 -- # read -r var val 00:06:37.949 22:52:10 -- accel/accel.sh@21 -- # val= 00:06:37.949 22:52:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.949 22:52:10 -- accel/accel.sh@20 -- # IFS=: 00:06:37.949 22:52:10 -- accel/accel.sh@20 -- # read -r var val 00:06:37.949 22:52:10 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:37.950 22:52:10 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:37.950 22:52:10 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:37.950 00:06:37.950 real 0m2.607s 00:06:37.950 user 0m2.357s 00:06:37.950 sys 0m0.259s 00:06:37.950 22:52:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:37.950 22:52:10 -- common/autotest_common.sh@10 -- # set +x 00:06:37.950 ************************************ 00:06:37.950 END TEST accel_decomp 00:06:37.950 ************************************ 00:06:38.209 22:52:10 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:38.209 22:52:10 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:06:38.209 22:52:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:38.209 22:52:10 -- common/autotest_common.sh@10 -- # set +x 00:06:38.209 ************************************ 00:06:38.209 START TEST accel_decmop_full 00:06:38.209 ************************************ 00:06:38.209 22:52:10 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:38.209 22:52:10 -- accel/accel.sh@16 -- # local accel_opc 00:06:38.209 22:52:10 -- accel/accel.sh@17 -- # local accel_module 00:06:38.209 22:52:10 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 
-y -o 0 00:06:38.209 22:52:10 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:38.209 22:52:10 -- accel/accel.sh@12 -- # build_accel_config 00:06:38.209 22:52:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:38.209 22:52:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:38.209 22:52:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:38.209 22:52:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:38.209 22:52:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:38.209 22:52:10 -- accel/accel.sh@41 -- # local IFS=, 00:06:38.209 22:52:10 -- accel/accel.sh@42 -- # jq -r . 00:06:38.209 [2024-07-24 22:52:10.413614] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:38.209 [2024-07-24 22:52:10.413683] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3043066 ] 00:06:38.209 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.209 [2024-07-24 22:52:10.483386] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.209 [2024-07-24 22:52:10.518666] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.589 22:52:11 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:06:39.589 00:06:39.589 SPDK Configuration: 00:06:39.589 Core mask: 0x1 00:06:39.589 00:06:39.589 Accel Perf Configuration: 00:06:39.589 Workload Type: decompress 00:06:39.589 Transfer size: 111250 bytes 00:06:39.589 Vector count 1 00:06:39.589 Module: software 00:06:39.589 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:39.589 Queue depth: 32 00:06:39.589 Allocate depth: 32 00:06:39.589 # threads/core: 1 00:06:39.589 Run time: 1 seconds 00:06:39.589 Verify: Yes 00:06:39.589 00:06:39.589 Running for 1 seconds... 00:06:39.589 00:06:39.589 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:39.589 ------------------------------------------------------------------------------------ 00:06:39.589 0,0 5856/s 241 MiB/s 0 0 00:06:39.589 ==================================================================================== 00:06:39.589 Total 5856/s 621 MiB/s 0 0' 00:06:39.589 22:52:11 -- accel/accel.sh@20 -- # IFS=: 00:06:39.589 22:52:11 -- accel/accel.sh@20 -- # read -r var val 00:06:39.589 22:52:11 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:39.589 22:52:11 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:39.589 22:52:11 -- accel/accel.sh@12 -- # build_accel_config 00:06:39.589 22:52:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:39.589 22:52:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:39.589 22:52:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:39.589 22:52:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:39.589 22:52:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:39.589 22:52:11 -- accel/accel.sh@41 -- # local IFS=, 00:06:39.589 22:52:11 -- accel/accel.sh@42 -- # jq -r . 
00:06:39.589 [2024-07-24 22:52:11.725065] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:39.589 [2024-07-24 22:52:11.725131] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3043214 ] 00:06:39.589 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.589 [2024-07-24 22:52:11.795983] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.589 [2024-07-24 22:52:11.830135] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.589 22:52:11 -- accel/accel.sh@21 -- # val= 00:06:39.589 22:52:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.589 22:52:11 -- accel/accel.sh@20 -- # IFS=: 00:06:39.589 22:52:11 -- accel/accel.sh@20 -- # read -r var val 00:06:39.589 22:52:11 -- accel/accel.sh@21 -- # val= 00:06:39.589 22:52:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.589 22:52:11 -- accel/accel.sh@20 -- # IFS=: 00:06:39.589 22:52:11 -- accel/accel.sh@20 -- # read -r var val 00:06:39.589 22:52:11 -- accel/accel.sh@21 -- # val= 00:06:39.589 22:52:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.589 22:52:11 -- accel/accel.sh@20 -- # IFS=: 00:06:39.589 22:52:11 -- accel/accel.sh@20 -- # read -r var val 00:06:39.589 22:52:11 -- accel/accel.sh@21 -- # val=0x1 00:06:39.589 22:52:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.589 22:52:11 -- accel/accel.sh@20 -- # IFS=: 00:06:39.589 22:52:11 -- accel/accel.sh@20 -- # read -r var val 00:06:39.589 22:52:11 -- accel/accel.sh@21 -- # val= 00:06:39.589 22:52:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.589 22:52:11 -- accel/accel.sh@20 -- # IFS=: 00:06:39.589 22:52:11 -- accel/accel.sh@20 -- # read -r var val 00:06:39.589 22:52:11 -- accel/accel.sh@21 -- # val= 00:06:39.589 22:52:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.589 22:52:11 -- 
accel/accel.sh@20 -- # IFS=: 00:06:39.589 22:52:11 -- accel/accel.sh@20 -- # read -r var val 00:06:39.589 22:52:11 -- accel/accel.sh@21 -- # val=decompress 00:06:39.589 22:52:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.589 22:52:11 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:39.589 22:52:11 -- accel/accel.sh@20 -- # IFS=: 00:06:39.589 22:52:11 -- accel/accel.sh@20 -- # read -r var val 00:06:39.589 22:52:11 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:39.589 22:52:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.589 22:52:11 -- accel/accel.sh@20 -- # IFS=: 00:06:39.589 22:52:11 -- accel/accel.sh@20 -- # read -r var val 00:06:39.589 22:52:11 -- accel/accel.sh@21 -- # val= 00:06:39.589 22:52:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.589 22:52:11 -- accel/accel.sh@20 -- # IFS=: 00:06:39.589 22:52:11 -- accel/accel.sh@20 -- # read -r var val 00:06:39.589 22:52:11 -- accel/accel.sh@21 -- # val=software 00:06:39.589 22:52:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.589 22:52:11 -- accel/accel.sh@23 -- # accel_module=software 00:06:39.589 22:52:11 -- accel/accel.sh@20 -- # IFS=: 00:06:39.589 22:52:11 -- accel/accel.sh@20 -- # read -r var val 00:06:39.589 22:52:11 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:39.589 22:52:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.589 22:52:11 -- accel/accel.sh@20 -- # IFS=: 00:06:39.589 22:52:11 -- accel/accel.sh@20 -- # read -r var val 00:06:39.589 22:52:11 -- accel/accel.sh@21 -- # val=32 00:06:39.589 22:52:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.589 22:52:11 -- accel/accel.sh@20 -- # IFS=: 00:06:39.589 22:52:11 -- accel/accel.sh@20 -- # read -r var val 00:06:39.589 22:52:11 -- accel/accel.sh@21 -- # val=32 00:06:39.589 22:52:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.589 22:52:11 -- accel/accel.sh@20 -- # IFS=: 00:06:39.589 22:52:11 -- accel/accel.sh@20 -- # read -r var val 00:06:39.589 22:52:11 -- 
accel/accel.sh@21 -- # val=1 00:06:39.589 22:52:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.589 22:52:11 -- accel/accel.sh@20 -- # IFS=: 00:06:39.589 22:52:11 -- accel/accel.sh@20 -- # read -r var val 00:06:39.589 22:52:11 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:39.589 22:52:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.589 22:52:11 -- accel/accel.sh@20 -- # IFS=: 00:06:39.589 22:52:11 -- accel/accel.sh@20 -- # read -r var val 00:06:39.589 22:52:11 -- accel/accel.sh@21 -- # val=Yes 00:06:39.589 22:52:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.589 22:52:11 -- accel/accel.sh@20 -- # IFS=: 00:06:39.589 22:52:11 -- accel/accel.sh@20 -- # read -r var val 00:06:39.589 22:52:11 -- accel/accel.sh@21 -- # val= 00:06:39.589 22:52:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.589 22:52:11 -- accel/accel.sh@20 -- # IFS=: 00:06:39.589 22:52:11 -- accel/accel.sh@20 -- # read -r var val 00:06:39.589 22:52:11 -- accel/accel.sh@21 -- # val= 00:06:39.589 22:52:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.589 22:52:11 -- accel/accel.sh@20 -- # IFS=: 00:06:39.589 22:52:11 -- accel/accel.sh@20 -- # read -r var val 00:06:40.968 22:52:13 -- accel/accel.sh@21 -- # val= 00:06:40.968 22:52:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.968 22:52:13 -- accel/accel.sh@20 -- # IFS=: 00:06:40.968 22:52:13 -- accel/accel.sh@20 -- # read -r var val 00:06:40.968 22:52:13 -- accel/accel.sh@21 -- # val= 00:06:40.968 22:52:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.968 22:52:13 -- accel/accel.sh@20 -- # IFS=: 00:06:40.968 22:52:13 -- accel/accel.sh@20 -- # read -r var val 00:06:40.968 22:52:13 -- accel/accel.sh@21 -- # val= 00:06:40.968 22:52:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.968 22:52:13 -- accel/accel.sh@20 -- # IFS=: 00:06:40.968 22:52:13 -- accel/accel.sh@20 -- # read -r var val 00:06:40.968 22:52:13 -- accel/accel.sh@21 -- # val= 00:06:40.968 22:52:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.968 22:52:13 
-- accel/accel.sh@20 -- # IFS=: 00:06:40.968 22:52:13 -- accel/accel.sh@20 -- # read -r var val 00:06:40.968 22:52:13 -- accel/accel.sh@21 -- # val= 00:06:40.968 22:52:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.968 22:52:13 -- accel/accel.sh@20 -- # IFS=: 00:06:40.968 22:52:13 -- accel/accel.sh@20 -- # read -r var val 00:06:40.968 22:52:13 -- accel/accel.sh@21 -- # val= 00:06:40.968 22:52:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.968 22:52:13 -- accel/accel.sh@20 -- # IFS=: 00:06:40.968 22:52:13 -- accel/accel.sh@20 -- # read -r var val 00:06:40.968 22:52:13 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:40.968 22:52:13 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:40.968 22:52:13 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:40.968 00:06:40.968 real 0m2.622s 00:06:40.968 user 0m2.375s 00:06:40.968 sys 0m0.254s 00:06:40.969 22:52:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:40.969 22:52:13 -- common/autotest_common.sh@10 -- # set +x 00:06:40.969 ************************************ 00:06:40.969 END TEST accel_decmop_full 00:06:40.969 ************************************ 00:06:40.969 22:52:13 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:40.969 22:52:13 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:06:40.969 22:52:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:40.969 22:52:13 -- common/autotest_common.sh@10 -- # set +x 00:06:40.969 ************************************ 00:06:40.969 START TEST accel_decomp_mcore 00:06:40.969 ************************************ 00:06:40.969 22:52:13 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:40.969 22:52:13 -- accel/accel.sh@16 -- # local accel_opc 00:06:40.969 22:52:13 -- accel/accel.sh@17 -- # local 
accel_module 00:06:40.969 22:52:13 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:40.969 22:52:13 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:40.969 22:52:13 -- accel/accel.sh@12 -- # build_accel_config 00:06:40.969 22:52:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:40.969 22:52:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.969 22:52:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.969 22:52:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:40.969 22:52:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:40.969 22:52:13 -- accel/accel.sh@41 -- # local IFS=, 00:06:40.969 22:52:13 -- accel/accel.sh@42 -- # jq -r . 00:06:40.969 [2024-07-24 22:52:13.091872] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:40.969 [2024-07-24 22:52:13.091945] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3043433 ] 00:06:40.969 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.969 [2024-07-24 22:52:13.162909] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:40.969 [2024-07-24 22:52:13.200494] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:40.969 [2024-07-24 22:52:13.200585] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:40.969 [2024-07-24 22:52:13.200689] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:40.969 [2024-07-24 22:52:13.200691] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.347 22:52:14 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:06:42.347 00:06:42.347 SPDK Configuration: 00:06:42.347 Core mask: 0xf 00:06:42.347 00:06:42.347 Accel Perf Configuration: 00:06:42.347 Workload Type: decompress 00:06:42.347 Transfer size: 4096 bytes 00:06:42.347 Vector count 1 00:06:42.347 Module: software 00:06:42.347 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:42.347 Queue depth: 32 00:06:42.347 Allocate depth: 32 00:06:42.347 # threads/core: 1 00:06:42.347 Run time: 1 seconds 00:06:42.347 Verify: Yes 00:06:42.347 00:06:42.347 Running for 1 seconds... 00:06:42.347 00:06:42.347 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:42.347 ------------------------------------------------------------------------------------ 00:06:42.347 0,0 74304/s 136 MiB/s 0 0 00:06:42.347 3,0 74656/s 137 MiB/s 0 0 00:06:42.347 2,0 74336/s 137 MiB/s 0 0 00:06:42.347 1,0 74432/s 137 MiB/s 0 0 00:06:42.347 ==================================================================================== 00:06:42.347 Total 297728/s 1163 MiB/s 0 0' 00:06:42.347 22:52:14 -- accel/accel.sh@20 -- # IFS=: 00:06:42.347 22:52:14 -- accel/accel.sh@20 -- # read -r var val 00:06:42.347 22:52:14 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:42.347 22:52:14 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:42.347 22:52:14 -- accel/accel.sh@12 -- # build_accel_config 00:06:42.347 22:52:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:42.347 22:52:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.347 22:52:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.347 22:52:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:42.347 22:52:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:42.347 22:52:14 -- accel/accel.sh@41 -- # local IFS=, 00:06:42.347 22:52:14 -- 
accel/accel.sh@42 -- # jq -r . 00:06:42.347 [2024-07-24 22:52:14.404105] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:42.347 [2024-07-24 22:52:14.404179] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3043696 ] 00:06:42.347 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.347 [2024-07-24 22:52:14.474416] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:42.347 [2024-07-24 22:52:14.510831] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:42.347 [2024-07-24 22:52:14.510931] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:42.347 [2024-07-24 22:52:14.511029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:42.347 [2024-07-24 22:52:14.511031] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.347 22:52:14 -- accel/accel.sh@21 -- # val= 00:06:42.347 22:52:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.347 22:52:14 -- accel/accel.sh@20 -- # IFS=: 00:06:42.347 22:52:14 -- accel/accel.sh@20 -- # read -r var val 00:06:42.347 22:52:14 -- accel/accel.sh@21 -- # val= 00:06:42.347 22:52:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.347 22:52:14 -- accel/accel.sh@20 -- # IFS=: 00:06:42.347 22:52:14 -- accel/accel.sh@20 -- # read -r var val 00:06:42.347 22:52:14 -- accel/accel.sh@21 -- # val= 00:06:42.347 22:52:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.347 22:52:14 -- accel/accel.sh@20 -- # IFS=: 00:06:42.347 22:52:14 -- accel/accel.sh@20 -- # read -r var val 00:06:42.347 22:52:14 -- accel/accel.sh@21 -- # val=0xf 00:06:42.347 22:52:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.347 22:52:14 -- accel/accel.sh@20 -- # IFS=: 00:06:42.347 22:52:14 -- accel/accel.sh@20 -- # read -r var val 00:06:42.347 22:52:14 -- 
accel/accel.sh@21 -- # val= 00:06:42.347 22:52:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.347 22:52:14 -- accel/accel.sh@20 -- # IFS=: 00:06:42.347 22:52:14 -- accel/accel.sh@20 -- # read -r var val 00:06:42.347 22:52:14 -- accel/accel.sh@21 -- # val= 00:06:42.347 22:52:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.347 22:52:14 -- accel/accel.sh@20 -- # IFS=: 00:06:42.347 22:52:14 -- accel/accel.sh@20 -- # read -r var val 00:06:42.347 22:52:14 -- accel/accel.sh@21 -- # val=decompress 00:06:42.347 22:52:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.347 22:52:14 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:42.347 22:52:14 -- accel/accel.sh@20 -- # IFS=: 00:06:42.347 22:52:14 -- accel/accel.sh@20 -- # read -r var val 00:06:42.348 22:52:14 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:42.348 22:52:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.348 22:52:14 -- accel/accel.sh@20 -- # IFS=: 00:06:42.348 22:52:14 -- accel/accel.sh@20 -- # read -r var val 00:06:42.348 22:52:14 -- accel/accel.sh@21 -- # val= 00:06:42.348 22:52:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.348 22:52:14 -- accel/accel.sh@20 -- # IFS=: 00:06:42.348 22:52:14 -- accel/accel.sh@20 -- # read -r var val 00:06:42.348 22:52:14 -- accel/accel.sh@21 -- # val=software 00:06:42.348 22:52:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.348 22:52:14 -- accel/accel.sh@23 -- # accel_module=software 00:06:42.348 22:52:14 -- accel/accel.sh@20 -- # IFS=: 00:06:42.348 22:52:14 -- accel/accel.sh@20 -- # read -r var val 00:06:42.348 22:52:14 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:42.348 22:52:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.348 22:52:14 -- accel/accel.sh@20 -- # IFS=: 00:06:42.348 22:52:14 -- accel/accel.sh@20 -- # read -r var val 00:06:42.348 22:52:14 -- accel/accel.sh@21 -- # val=32 00:06:42.348 22:52:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.348 22:52:14 -- 
accel/accel.sh@20 -- # IFS=: 00:06:42.348 22:52:14 -- accel/accel.sh@20 -- # read -r var val 00:06:42.348 22:52:14 -- accel/accel.sh@21 -- # val=32 00:06:42.348 22:52:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.348 22:52:14 -- accel/accel.sh@20 -- # IFS=: 00:06:42.348 22:52:14 -- accel/accel.sh@20 -- # read -r var val 00:06:42.348 22:52:14 -- accel/accel.sh@21 -- # val=1 00:06:42.348 22:52:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.348 22:52:14 -- accel/accel.sh@20 -- # IFS=: 00:06:42.348 22:52:14 -- accel/accel.sh@20 -- # read -r var val 00:06:42.348 22:52:14 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:42.348 22:52:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.348 22:52:14 -- accel/accel.sh@20 -- # IFS=: 00:06:42.348 22:52:14 -- accel/accel.sh@20 -- # read -r var val 00:06:42.348 22:52:14 -- accel/accel.sh@21 -- # val=Yes 00:06:42.348 22:52:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.348 22:52:14 -- accel/accel.sh@20 -- # IFS=: 00:06:42.348 22:52:14 -- accel/accel.sh@20 -- # read -r var val 00:06:42.348 22:52:14 -- accel/accel.sh@21 -- # val= 00:06:42.348 22:52:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.348 22:52:14 -- accel/accel.sh@20 -- # IFS=: 00:06:42.348 22:52:14 -- accel/accel.sh@20 -- # read -r var val 00:06:42.348 22:52:14 -- accel/accel.sh@21 -- # val= 00:06:42.348 22:52:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.348 22:52:14 -- accel/accel.sh@20 -- # IFS=: 00:06:42.348 22:52:14 -- accel/accel.sh@20 -- # read -r var val 00:06:43.286 22:52:15 -- accel/accel.sh@21 -- # val= 00:06:43.286 22:52:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.286 22:52:15 -- accel/accel.sh@20 -- # IFS=: 00:06:43.286 22:52:15 -- accel/accel.sh@20 -- # read -r var val 00:06:43.286 22:52:15 -- accel/accel.sh@21 -- # val= 00:06:43.286 22:52:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.286 22:52:15 -- accel/accel.sh@20 -- # IFS=: 00:06:43.286 22:52:15 -- accel/accel.sh@20 -- # read -r var val 00:06:43.286 
22:52:15 -- accel/accel.sh@21 -- # val= 00:06:43.286 22:52:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.286 22:52:15 -- accel/accel.sh@20 -- # IFS=: 00:06:43.286 22:52:15 -- accel/accel.sh@20 -- # read -r var val 00:06:43.286 22:52:15 -- accel/accel.sh@21 -- # val= 00:06:43.286 22:52:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.286 22:52:15 -- accel/accel.sh@20 -- # IFS=: 00:06:43.286 22:52:15 -- accel/accel.sh@20 -- # read -r var val 00:06:43.286 22:52:15 -- accel/accel.sh@21 -- # val= 00:06:43.286 22:52:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.286 22:52:15 -- accel/accel.sh@20 -- # IFS=: 00:06:43.286 22:52:15 -- accel/accel.sh@20 -- # read -r var val 00:06:43.286 22:52:15 -- accel/accel.sh@21 -- # val= 00:06:43.286 22:52:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.286 22:52:15 -- accel/accel.sh@20 -- # IFS=: 00:06:43.286 22:52:15 -- accel/accel.sh@20 -- # read -r var val 00:06:43.286 22:52:15 -- accel/accel.sh@21 -- # val= 00:06:43.286 22:52:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.286 22:52:15 -- accel/accel.sh@20 -- # IFS=: 00:06:43.286 22:52:15 -- accel/accel.sh@20 -- # read -r var val 00:06:43.286 22:52:15 -- accel/accel.sh@21 -- # val= 00:06:43.286 22:52:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.286 22:52:15 -- accel/accel.sh@20 -- # IFS=: 00:06:43.286 22:52:15 -- accel/accel.sh@20 -- # read -r var val 00:06:43.286 22:52:15 -- accel/accel.sh@21 -- # val= 00:06:43.286 22:52:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.286 22:52:15 -- accel/accel.sh@20 -- # IFS=: 00:06:43.286 22:52:15 -- accel/accel.sh@20 -- # read -r var val 00:06:43.286 22:52:15 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:43.286 22:52:15 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:43.286 22:52:15 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:43.286 00:06:43.286 real 0m2.630s 00:06:43.286 user 0m9.012s 00:06:43.286 sys 0m0.285s 00:06:43.286 22:52:15 -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:06:43.286 22:52:15 -- common/autotest_common.sh@10 -- # set +x 00:06:43.286 ************************************ 00:06:43.286 END TEST accel_decomp_mcore 00:06:43.286 ************************************ 00:06:43.545 22:52:15 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:43.545 22:52:15 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:06:43.545 22:52:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:43.545 22:52:15 -- common/autotest_common.sh@10 -- # set +x 00:06:43.545 ************************************ 00:06:43.545 START TEST accel_decomp_full_mcore 00:06:43.545 ************************************ 00:06:43.545 22:52:15 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:43.545 22:52:15 -- accel/accel.sh@16 -- # local accel_opc 00:06:43.545 22:52:15 -- accel/accel.sh@17 -- # local accel_module 00:06:43.545 22:52:15 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:43.545 22:52:15 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:43.545 22:52:15 -- accel/accel.sh@12 -- # build_accel_config 00:06:43.545 22:52:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:43.545 22:52:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:43.545 22:52:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:43.545 22:52:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:43.545 22:52:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:43.545 22:52:15 -- accel/accel.sh@41 -- # local IFS=, 00:06:43.545 22:52:15 -- accel/accel.sh@42 -- # jq -r . 
00:06:43.545 [2024-07-24 22:52:15.771590] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:43.545 [2024-07-24 22:52:15.771658] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3043980 ] 00:06:43.545 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.545 [2024-07-24 22:52:15.842464] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:43.545 [2024-07-24 22:52:15.879571] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:43.545 [2024-07-24 22:52:15.879668] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:43.545 [2024-07-24 22:52:15.879764] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:43.545 [2024-07-24 22:52:15.879767] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.923 22:52:17 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:44.923 00:06:44.923 SPDK Configuration: 00:06:44.923 Core mask: 0xf 00:06:44.923 00:06:44.923 Accel Perf Configuration: 00:06:44.923 Workload Type: decompress 00:06:44.923 Transfer size: 111250 bytes 00:06:44.923 Vector count 1 00:06:44.923 Module: software 00:06:44.923 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:44.923 Queue depth: 32 00:06:44.923 Allocate depth: 32 00:06:44.923 # threads/core: 1 00:06:44.923 Run time: 1 seconds 00:06:44.923 Verify: Yes 00:06:44.923 00:06:44.923 Running for 1 seconds... 
00:06:44.923 00:06:44.923 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:44.923 ------------------------------------------------------------------------------------ 00:06:44.923 0,0 5728/s 236 MiB/s 0 0 00:06:44.923 3,0 5728/s 236 MiB/s 0 0 00:06:44.923 2,0 5728/s 236 MiB/s 0 0 00:06:44.923 1,0 5728/s 236 MiB/s 0 0 00:06:44.923 ==================================================================================== 00:06:44.923 Total 22912/s 2430 MiB/s 0 0' 00:06:44.923 22:52:17 -- accel/accel.sh@20 -- # IFS=: 00:06:44.923 22:52:17 -- accel/accel.sh@20 -- # read -r var val 00:06:44.923 22:52:17 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:44.923 22:52:17 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:44.923 22:52:17 -- accel/accel.sh@12 -- # build_accel_config 00:06:44.923 22:52:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:44.923 22:52:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:44.923 22:52:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:44.923 22:52:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:44.923 22:52:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:44.923 22:52:17 -- accel/accel.sh@41 -- # local IFS=, 00:06:44.923 22:52:17 -- accel/accel.sh@42 -- # jq -r . 00:06:44.923 [2024-07-24 22:52:17.089345] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:06:44.923 [2024-07-24 22:52:17.089411] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3044256 ] 00:06:44.923 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.923 [2024-07-24 22:52:17.158315] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:44.923 [2024-07-24 22:52:17.194629] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:44.923 [2024-07-24 22:52:17.194731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:44.923 [2024-07-24 22:52:17.194785] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:44.923 [2024-07-24 22:52:17.194787] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.923 22:52:17 -- accel/accel.sh@21 -- # val= 00:06:44.923 22:52:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.923 22:52:17 -- accel/accel.sh@20 -- # IFS=: 00:06:44.923 22:52:17 -- accel/accel.sh@20 -- # read -r var val 00:06:44.923 22:52:17 -- accel/accel.sh@21 -- # val= 00:06:44.923 22:52:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.923 22:52:17 -- accel/accel.sh@20 -- # IFS=: 00:06:44.923 22:52:17 -- accel/accel.sh@20 -- # read -r var val 00:06:44.923 22:52:17 -- accel/accel.sh@21 -- # val= 00:06:44.923 22:52:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.923 22:52:17 -- accel/accel.sh@20 -- # IFS=: 00:06:44.923 22:52:17 -- accel/accel.sh@20 -- # read -r var val 00:06:44.923 22:52:17 -- accel/accel.sh@21 -- # val=0xf 00:06:44.923 22:52:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.923 22:52:17 -- accel/accel.sh@20 -- # IFS=: 00:06:44.923 22:52:17 -- accel/accel.sh@20 -- # read -r var val 00:06:44.923 22:52:17 -- accel/accel.sh@21 -- # val= 00:06:44.923 22:52:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.923 22:52:17 -- accel/accel.sh@20 -- # IFS=: 00:06:44.923 22:52:17 
-- accel/accel.sh@20 -- # read -r var val 00:06:44.923 22:52:17 -- accel/accel.sh@21 -- # val= 00:06:44.923 22:52:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.923 22:52:17 -- accel/accel.sh@20 -- # IFS=: 00:06:44.923 22:52:17 -- accel/accel.sh@20 -- # read -r var val 00:06:44.923 22:52:17 -- accel/accel.sh@21 -- # val=decompress 00:06:44.923 22:52:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.923 22:52:17 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:44.923 22:52:17 -- accel/accel.sh@20 -- # IFS=: 00:06:44.923 22:52:17 -- accel/accel.sh@20 -- # read -r var val 00:06:44.923 22:52:17 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:44.923 22:52:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.923 22:52:17 -- accel/accel.sh@20 -- # IFS=: 00:06:44.923 22:52:17 -- accel/accel.sh@20 -- # read -r var val 00:06:44.923 22:52:17 -- accel/accel.sh@21 -- # val= 00:06:44.923 22:52:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.923 22:52:17 -- accel/accel.sh@20 -- # IFS=: 00:06:44.923 22:52:17 -- accel/accel.sh@20 -- # read -r var val 00:06:44.923 22:52:17 -- accel/accel.sh@21 -- # val=software 00:06:44.923 22:52:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.923 22:52:17 -- accel/accel.sh@23 -- # accel_module=software 00:06:44.923 22:52:17 -- accel/accel.sh@20 -- # IFS=: 00:06:44.923 22:52:17 -- accel/accel.sh@20 -- # read -r var val 00:06:44.923 22:52:17 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:44.923 22:52:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.923 22:52:17 -- accel/accel.sh@20 -- # IFS=: 00:06:44.923 22:52:17 -- accel/accel.sh@20 -- # read -r var val 00:06:44.923 22:52:17 -- accel/accel.sh@21 -- # val=32 00:06:44.923 22:52:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.923 22:52:17 -- accel/accel.sh@20 -- # IFS=: 00:06:44.923 22:52:17 -- accel/accel.sh@20 -- # read -r var val 00:06:44.923 22:52:17 -- accel/accel.sh@21 -- # val=32 00:06:44.923 22:52:17 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:44.923 22:52:17 -- accel/accel.sh@20 -- # IFS=: 00:06:44.923 22:52:17 -- accel/accel.sh@20 -- # read -r var val 00:06:44.923 22:52:17 -- accel/accel.sh@21 -- # val=1 00:06:44.923 22:52:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.923 22:52:17 -- accel/accel.sh@20 -- # IFS=: 00:06:44.923 22:52:17 -- accel/accel.sh@20 -- # read -r var val 00:06:44.923 22:52:17 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:44.923 22:52:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.923 22:52:17 -- accel/accel.sh@20 -- # IFS=: 00:06:44.923 22:52:17 -- accel/accel.sh@20 -- # read -r var val 00:06:44.923 22:52:17 -- accel/accel.sh@21 -- # val=Yes 00:06:44.923 22:52:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.923 22:52:17 -- accel/accel.sh@20 -- # IFS=: 00:06:44.923 22:52:17 -- accel/accel.sh@20 -- # read -r var val 00:06:44.923 22:52:17 -- accel/accel.sh@21 -- # val= 00:06:44.923 22:52:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.923 22:52:17 -- accel/accel.sh@20 -- # IFS=: 00:06:44.923 22:52:17 -- accel/accel.sh@20 -- # read -r var val 00:06:44.923 22:52:17 -- accel/accel.sh@21 -- # val= 00:06:44.923 22:52:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.923 22:52:17 -- accel/accel.sh@20 -- # IFS=: 00:06:44.923 22:52:17 -- accel/accel.sh@20 -- # read -r var val 00:06:46.301 22:52:18 -- accel/accel.sh@21 -- # val= 00:06:46.301 22:52:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.301 22:52:18 -- accel/accel.sh@20 -- # IFS=: 00:06:46.301 22:52:18 -- accel/accel.sh@20 -- # read -r var val 00:06:46.301 22:52:18 -- accel/accel.sh@21 -- # val= 00:06:46.301 22:52:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.301 22:52:18 -- accel/accel.sh@20 -- # IFS=: 00:06:46.301 22:52:18 -- accel/accel.sh@20 -- # read -r var val 00:06:46.301 22:52:18 -- accel/accel.sh@21 -- # val= 00:06:46.301 22:52:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.301 22:52:18 -- accel/accel.sh@20 -- # IFS=: 00:06:46.301 
22:52:18 -- accel/accel.sh@20 -- # read -r var val 00:06:46.301 22:52:18 -- accel/accel.sh@21 -- # val= 00:06:46.301 22:52:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.301 22:52:18 -- accel/accel.sh@20 -- # IFS=: 00:06:46.301 22:52:18 -- accel/accel.sh@20 -- # read -r var val 00:06:46.301 22:52:18 -- accel/accel.sh@21 -- # val= 00:06:46.301 22:52:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.301 22:52:18 -- accel/accel.sh@20 -- # IFS=: 00:06:46.301 22:52:18 -- accel/accel.sh@20 -- # read -r var val 00:06:46.301 22:52:18 -- accel/accel.sh@21 -- # val= 00:06:46.301 22:52:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.301 22:52:18 -- accel/accel.sh@20 -- # IFS=: 00:06:46.301 22:52:18 -- accel/accel.sh@20 -- # read -r var val 00:06:46.301 22:52:18 -- accel/accel.sh@21 -- # val= 00:06:46.301 22:52:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.301 22:52:18 -- accel/accel.sh@20 -- # IFS=: 00:06:46.301 22:52:18 -- accel/accel.sh@20 -- # read -r var val 00:06:46.301 22:52:18 -- accel/accel.sh@21 -- # val= 00:06:46.301 22:52:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.301 22:52:18 -- accel/accel.sh@20 -- # IFS=: 00:06:46.301 22:52:18 -- accel/accel.sh@20 -- # read -r var val 00:06:46.301 22:52:18 -- accel/accel.sh@21 -- # val= 00:06:46.301 22:52:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.301 22:52:18 -- accel/accel.sh@20 -- # IFS=: 00:06:46.302 22:52:18 -- accel/accel.sh@20 -- # read -r var val 00:06:46.302 22:52:18 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:46.302 22:52:18 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:46.302 22:52:18 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:46.302 00:06:46.302 real 0m2.644s 00:06:46.302 user 0m9.095s 00:06:46.302 sys 0m0.266s 00:06:46.302 22:52:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:46.302 22:52:18 -- common/autotest_common.sh@10 -- # set +x 00:06:46.302 ************************************ 00:06:46.302 END TEST 
accel_decomp_full_mcore 00:06:46.302 ************************************ 00:06:46.302 22:52:18 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:46.302 22:52:18 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:06:46.302 22:52:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:46.302 22:52:18 -- common/autotest_common.sh@10 -- # set +x 00:06:46.302 ************************************ 00:06:46.302 START TEST accel_decomp_mthread 00:06:46.302 ************************************ 00:06:46.302 22:52:18 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:46.302 22:52:18 -- accel/accel.sh@16 -- # local accel_opc 00:06:46.302 22:52:18 -- accel/accel.sh@17 -- # local accel_module 00:06:46.302 22:52:18 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:46.302 22:52:18 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:46.302 22:52:18 -- accel/accel.sh@12 -- # build_accel_config 00:06:46.302 22:52:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:46.302 22:52:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.302 22:52:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.302 22:52:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:46.302 22:52:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:46.302 22:52:18 -- accel/accel.sh@41 -- # local IFS=, 00:06:46.302 22:52:18 -- accel/accel.sh@42 -- # jq -r . 00:06:46.302 [2024-07-24 22:52:18.461766] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:06:46.302 [2024-07-24 22:52:18.461833] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3044538 ] 00:06:46.302 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.302 [2024-07-24 22:52:18.532412] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.302 [2024-07-24 22:52:18.566837] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.679 22:52:19 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:47.679 00:06:47.679 SPDK Configuration: 00:06:47.679 Core mask: 0x1 00:06:47.679 00:06:47.679 Accel Perf Configuration: 00:06:47.679 Workload Type: decompress 00:06:47.679 Transfer size: 4096 bytes 00:06:47.679 Vector count 1 00:06:47.679 Module: software 00:06:47.679 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:47.679 Queue depth: 32 00:06:47.679 Allocate depth: 32 00:06:47.679 # threads/core: 2 00:06:47.679 Run time: 1 seconds 00:06:47.679 Verify: Yes 00:06:47.679 00:06:47.679 Running for 1 seconds... 
00:06:47.679 00:06:47.679 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:47.679 ------------------------------------------------------------------------------------ 00:06:47.679 0,1 44384/s 81 MiB/s 0 0 00:06:47.679 0,0 44256/s 81 MiB/s 0 0 00:06:47.679 ==================================================================================== 00:06:47.679 Total 88640/s 346 MiB/s 0 0' 00:06:47.679 22:52:19 -- accel/accel.sh@20 -- # IFS=: 00:06:47.679 22:52:19 -- accel/accel.sh@20 -- # read -r var val 00:06:47.679 22:52:19 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:47.679 22:52:19 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:47.679 22:52:19 -- accel/accel.sh@12 -- # build_accel_config 00:06:47.679 22:52:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:47.679 22:52:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.679 22:52:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.679 22:52:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:47.679 22:52:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:47.679 22:52:19 -- accel/accel.sh@41 -- # local IFS=, 00:06:47.679 22:52:19 -- accel/accel.sh@42 -- # jq -r . 00:06:47.679 [2024-07-24 22:52:19.766546] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:06:47.679 [2024-07-24 22:52:19.766614] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3044774 ] 00:06:47.679 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.679 [2024-07-24 22:52:19.837596] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.679 [2024-07-24 22:52:19.871439] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.680 22:52:19 -- accel/accel.sh@21 -- # val= 00:06:47.680 22:52:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.680 22:52:19 -- accel/accel.sh@20 -- # IFS=: 00:06:47.680 22:52:19 -- accel/accel.sh@20 -- # read -r var val 00:06:47.680 22:52:19 -- accel/accel.sh@21 -- # val= 00:06:47.680 22:52:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.680 22:52:19 -- accel/accel.sh@20 -- # IFS=: 00:06:47.680 22:52:19 -- accel/accel.sh@20 -- # read -r var val 00:06:47.680 22:52:19 -- accel/accel.sh@21 -- # val= 00:06:47.680 22:52:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.680 22:52:19 -- accel/accel.sh@20 -- # IFS=: 00:06:47.680 22:52:19 -- accel/accel.sh@20 -- # read -r var val 00:06:47.680 22:52:19 -- accel/accel.sh@21 -- # val=0x1 00:06:47.680 22:52:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.680 22:52:19 -- accel/accel.sh@20 -- # IFS=: 00:06:47.680 22:52:19 -- accel/accel.sh@20 -- # read -r var val 00:06:47.680 22:52:19 -- accel/accel.sh@21 -- # val= 00:06:47.680 22:52:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.680 22:52:19 -- accel/accel.sh@20 -- # IFS=: 00:06:47.680 22:52:19 -- accel/accel.sh@20 -- # read -r var val 00:06:47.680 22:52:19 -- accel/accel.sh@21 -- # val= 00:06:47.680 22:52:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.680 22:52:19 -- accel/accel.sh@20 -- # IFS=: 00:06:47.680 22:52:19 -- accel/accel.sh@20 -- # read -r var val 00:06:47.680 22:52:19 -- accel/accel.sh@21 
-- # val=decompress 00:06:47.680 22:52:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.680 22:52:19 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:47.680 22:52:19 -- accel/accel.sh@20 -- # IFS=: 00:06:47.680 22:52:19 -- accel/accel.sh@20 -- # read -r var val 00:06:47.680 22:52:19 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:47.680 22:52:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.680 22:52:19 -- accel/accel.sh@20 -- # IFS=: 00:06:47.680 22:52:19 -- accel/accel.sh@20 -- # read -r var val 00:06:47.680 22:52:19 -- accel/accel.sh@21 -- # val= 00:06:47.680 22:52:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.680 22:52:19 -- accel/accel.sh@20 -- # IFS=: 00:06:47.680 22:52:19 -- accel/accel.sh@20 -- # read -r var val 00:06:47.680 22:52:19 -- accel/accel.sh@21 -- # val=software 00:06:47.680 22:52:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.680 22:52:19 -- accel/accel.sh@23 -- # accel_module=software 00:06:47.680 22:52:19 -- accel/accel.sh@20 -- # IFS=: 00:06:47.680 22:52:19 -- accel/accel.sh@20 -- # read -r var val 00:06:47.680 22:52:19 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:47.680 22:52:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.680 22:52:19 -- accel/accel.sh@20 -- # IFS=: 00:06:47.680 22:52:19 -- accel/accel.sh@20 -- # read -r var val 00:06:47.680 22:52:19 -- accel/accel.sh@21 -- # val=32 00:06:47.680 22:52:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.680 22:52:19 -- accel/accel.sh@20 -- # IFS=: 00:06:47.680 22:52:19 -- accel/accel.sh@20 -- # read -r var val 00:06:47.680 22:52:19 -- accel/accel.sh@21 -- # val=32 00:06:47.680 22:52:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.680 22:52:19 -- accel/accel.sh@20 -- # IFS=: 00:06:47.680 22:52:19 -- accel/accel.sh@20 -- # read -r var val 00:06:47.680 22:52:19 -- accel/accel.sh@21 -- # val=2 00:06:47.680 22:52:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.680 22:52:19 -- accel/accel.sh@20 -- # 
IFS=: 00:06:47.680 22:52:19 -- accel/accel.sh@20 -- # read -r var val 00:06:47.680 22:52:19 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:47.680 22:52:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.680 22:52:19 -- accel/accel.sh@20 -- # IFS=: 00:06:47.680 22:52:19 -- accel/accel.sh@20 -- # read -r var val 00:06:47.680 22:52:19 -- accel/accel.sh@21 -- # val=Yes 00:06:47.680 22:52:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.680 22:52:19 -- accel/accel.sh@20 -- # IFS=: 00:06:47.680 22:52:19 -- accel/accel.sh@20 -- # read -r var val 00:06:47.680 22:52:19 -- accel/accel.sh@21 -- # val= 00:06:47.680 22:52:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.680 22:52:19 -- accel/accel.sh@20 -- # IFS=: 00:06:47.680 22:52:19 -- accel/accel.sh@20 -- # read -r var val 00:06:47.680 22:52:19 -- accel/accel.sh@21 -- # val= 00:06:47.680 22:52:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.680 22:52:19 -- accel/accel.sh@20 -- # IFS=: 00:06:47.680 22:52:19 -- accel/accel.sh@20 -- # read -r var val 00:06:48.629 22:52:21 -- accel/accel.sh@21 -- # val= 00:06:48.629 22:52:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.629 22:52:21 -- accel/accel.sh@20 -- # IFS=: 00:06:48.629 22:52:21 -- accel/accel.sh@20 -- # read -r var val 00:06:48.629 22:52:21 -- accel/accel.sh@21 -- # val= 00:06:48.629 22:52:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.629 22:52:21 -- accel/accel.sh@20 -- # IFS=: 00:06:48.629 22:52:21 -- accel/accel.sh@20 -- # read -r var val 00:06:48.629 22:52:21 -- accel/accel.sh@21 -- # val= 00:06:48.629 22:52:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.629 22:52:21 -- accel/accel.sh@20 -- # IFS=: 00:06:48.629 22:52:21 -- accel/accel.sh@20 -- # read -r var val 00:06:48.629 22:52:21 -- accel/accel.sh@21 -- # val= 00:06:48.629 22:52:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.629 22:52:21 -- accel/accel.sh@20 -- # IFS=: 00:06:48.629 22:52:21 -- accel/accel.sh@20 -- # read -r var val 00:06:48.629 22:52:21 -- accel/accel.sh@21 
-- # val= 00:06:48.629 22:52:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.629 22:52:21 -- accel/accel.sh@20 -- # IFS=: 00:06:48.629 22:52:21 -- accel/accel.sh@20 -- # read -r var val 00:06:48.629 22:52:21 -- accel/accel.sh@21 -- # val= 00:06:48.629 22:52:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.629 22:52:21 -- accel/accel.sh@20 -- # IFS=: 00:06:48.629 22:52:21 -- accel/accel.sh@20 -- # read -r var val 00:06:48.629 22:52:21 -- accel/accel.sh@21 -- # val= 00:06:48.629 22:52:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.629 22:52:21 -- accel/accel.sh@20 -- # IFS=: 00:06:48.629 22:52:21 -- accel/accel.sh@20 -- # read -r var val 00:06:48.629 22:52:21 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:48.629 22:52:21 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:48.629 22:52:21 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:48.629 00:06:48.629 real 0m2.616s 00:06:48.629 user 0m2.374s 00:06:48.629 sys 0m0.250s 00:06:48.629 22:52:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:48.629 22:52:21 -- common/autotest_common.sh@10 -- # set +x 00:06:48.629 ************************************ 00:06:48.629 END TEST accel_decomp_mthread 00:06:48.629 ************************************ 00:06:48.900 22:52:21 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:48.900 22:52:21 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:06:48.900 22:52:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:48.900 22:52:21 -- common/autotest_common.sh@10 -- # set +x 00:06:48.900 ************************************ 00:06:48.900 START TEST accel_deomp_full_mthread 00:06:48.900 ************************************ 00:06:48.900 22:52:21 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 
00:06:48.900 22:52:21 -- accel/accel.sh@16 -- # local accel_opc 00:06:48.900 22:52:21 -- accel/accel.sh@17 -- # local accel_module 00:06:48.900 22:52:21 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:48.900 22:52:21 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:48.900 22:52:21 -- accel/accel.sh@12 -- # build_accel_config 00:06:48.900 22:52:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:48.900 22:52:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.900 22:52:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.900 22:52:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:48.900 22:52:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:48.900 22:52:21 -- accel/accel.sh@41 -- # local IFS=, 00:06:48.900 22:52:21 -- accel/accel.sh@42 -- # jq -r . 00:06:48.900 [2024-07-24 22:52:21.123790] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:48.900 [2024-07-24 22:52:21.123855] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3044971 ] 00:06:48.900 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.900 [2024-07-24 22:52:21.194705] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.900 [2024-07-24 22:52:21.230145] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.278 22:52:22 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:06:50.278 00:06:50.278 SPDK Configuration: 00:06:50.278 Core mask: 0x1 00:06:50.278 00:06:50.278 Accel Perf Configuration: 00:06:50.278 Workload Type: decompress 00:06:50.278 Transfer size: 111250 bytes 00:06:50.278 Vector count 1 00:06:50.278 Module: software 00:06:50.278 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:50.278 Queue depth: 32 00:06:50.278 Allocate depth: 32 00:06:50.278 # threads/core: 2 00:06:50.278 Run time: 1 seconds 00:06:50.278 Verify: Yes 00:06:50.278 00:06:50.278 Running for 1 seconds... 00:06:50.278 00:06:50.278 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:50.278 ------------------------------------------------------------------------------------ 00:06:50.278 0,1 2944/s 121 MiB/s 0 0 00:06:50.278 0,0 2944/s 121 MiB/s 0 0 00:06:50.278 ==================================================================================== 00:06:50.278 Total 5888/s 624 MiB/s 0 0' 00:06:50.278 22:52:22 -- accel/accel.sh@20 -- # IFS=: 00:06:50.278 22:52:22 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:50.278 22:52:22 -- accel/accel.sh@20 -- # read -r var val 00:06:50.278 22:52:22 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:50.278 22:52:22 -- accel/accel.sh@12 -- # build_accel_config 00:06:50.278 22:52:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:50.278 22:52:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.278 22:52:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.278 22:52:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:50.278 22:52:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:50.278 22:52:22 -- accel/accel.sh@41 -- # local IFS=, 00:06:50.278 22:52:22 -- accel/accel.sh@42 -- # jq -r . 
00:06:50.278 [2024-07-24 22:52:22.447903] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:50.278 [2024-07-24 22:52:22.447970] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3045130 ] 00:06:50.278 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.278 [2024-07-24 22:52:22.517429] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.278 [2024-07-24 22:52:22.552038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.278 22:52:22 -- accel/accel.sh@21 -- # val= 00:06:50.278 22:52:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.278 22:52:22 -- accel/accel.sh@20 -- # IFS=: 00:06:50.278 22:52:22 -- accel/accel.sh@20 -- # read -r var val 00:06:50.278 22:52:22 -- accel/accel.sh@21 -- # val= 00:06:50.278 22:52:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.278 22:52:22 -- accel/accel.sh@20 -- # IFS=: 00:06:50.278 22:52:22 -- accel/accel.sh@20 -- # read -r var val 00:06:50.278 22:52:22 -- accel/accel.sh@21 -- # val= 00:06:50.278 22:52:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.278 22:52:22 -- accel/accel.sh@20 -- # IFS=: 00:06:50.278 22:52:22 -- accel/accel.sh@20 -- # read -r var val 00:06:50.278 22:52:22 -- accel/accel.sh@21 -- # val=0x1 00:06:50.278 22:52:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.278 22:52:22 -- accel/accel.sh@20 -- # IFS=: 00:06:50.278 22:52:22 -- accel/accel.sh@20 -- # read -r var val 00:06:50.278 22:52:22 -- accel/accel.sh@21 -- # val= 00:06:50.278 22:52:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.278 22:52:22 -- accel/accel.sh@20 -- # IFS=: 00:06:50.278 22:52:22 -- accel/accel.sh@20 -- # read -r var val 00:06:50.278 22:52:22 -- accel/accel.sh@21 -- # val= 00:06:50.278 22:52:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.278 22:52:22 -- 
accel/accel.sh@20 -- # IFS=: 00:06:50.278 22:52:22 -- accel/accel.sh@20 -- # read -r var val 00:06:50.278 22:52:22 -- accel/accel.sh@21 -- # val=decompress 00:06:50.278 22:52:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.278 22:52:22 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:50.278 22:52:22 -- accel/accel.sh@20 -- # IFS=: 00:06:50.278 22:52:22 -- accel/accel.sh@20 -- # read -r var val 00:06:50.278 22:52:22 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:50.278 22:52:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.278 22:52:22 -- accel/accel.sh@20 -- # IFS=: 00:06:50.278 22:52:22 -- accel/accel.sh@20 -- # read -r var val 00:06:50.278 22:52:22 -- accel/accel.sh@21 -- # val= 00:06:50.278 22:52:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.278 22:52:22 -- accel/accel.sh@20 -- # IFS=: 00:06:50.278 22:52:22 -- accel/accel.sh@20 -- # read -r var val 00:06:50.278 22:52:22 -- accel/accel.sh@21 -- # val=software 00:06:50.278 22:52:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.278 22:52:22 -- accel/accel.sh@23 -- # accel_module=software 00:06:50.278 22:52:22 -- accel/accel.sh@20 -- # IFS=: 00:06:50.278 22:52:22 -- accel/accel.sh@20 -- # read -r var val 00:06:50.278 22:52:22 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:50.278 22:52:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.278 22:52:22 -- accel/accel.sh@20 -- # IFS=: 00:06:50.278 22:52:22 -- accel/accel.sh@20 -- # read -r var val 00:06:50.278 22:52:22 -- accel/accel.sh@21 -- # val=32 00:06:50.278 22:52:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.278 22:52:22 -- accel/accel.sh@20 -- # IFS=: 00:06:50.278 22:52:22 -- accel/accel.sh@20 -- # read -r var val 00:06:50.278 22:52:22 -- accel/accel.sh@21 -- # val=32 00:06:50.278 22:52:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.278 22:52:22 -- accel/accel.sh@20 -- # IFS=: 00:06:50.278 22:52:22 -- accel/accel.sh@20 -- # read -r var val 00:06:50.278 22:52:22 -- 
accel/accel.sh@21 -- # val=2 00:06:50.278 22:52:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.278 22:52:22 -- accel/accel.sh@20 -- # IFS=: 00:06:50.278 22:52:22 -- accel/accel.sh@20 -- # read -r var val 00:06:50.278 22:52:22 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:50.278 22:52:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.278 22:52:22 -- accel/accel.sh@20 -- # IFS=: 00:06:50.278 22:52:22 -- accel/accel.sh@20 -- # read -r var val 00:06:50.278 22:52:22 -- accel/accel.sh@21 -- # val=Yes 00:06:50.278 22:52:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.278 22:52:22 -- accel/accel.sh@20 -- # IFS=: 00:06:50.278 22:52:22 -- accel/accel.sh@20 -- # read -r var val 00:06:50.278 22:52:22 -- accel/accel.sh@21 -- # val= 00:06:50.278 22:52:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.278 22:52:22 -- accel/accel.sh@20 -- # IFS=: 00:06:50.278 22:52:22 -- accel/accel.sh@20 -- # read -r var val 00:06:50.278 22:52:22 -- accel/accel.sh@21 -- # val= 00:06:50.278 22:52:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.278 22:52:22 -- accel/accel.sh@20 -- # IFS=: 00:06:50.278 22:52:22 -- accel/accel.sh@20 -- # read -r var val 00:06:51.655 22:52:23 -- accel/accel.sh@21 -- # val= 00:06:51.655 22:52:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.655 22:52:23 -- accel/accel.sh@20 -- # IFS=: 00:06:51.655 22:52:23 -- accel/accel.sh@20 -- # read -r var val 00:06:51.655 22:52:23 -- accel/accel.sh@21 -- # val= 00:06:51.655 22:52:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.655 22:52:23 -- accel/accel.sh@20 -- # IFS=: 00:06:51.655 22:52:23 -- accel/accel.sh@20 -- # read -r var val 00:06:51.655 22:52:23 -- accel/accel.sh@21 -- # val= 00:06:51.655 22:52:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.655 22:52:23 -- accel/accel.sh@20 -- # IFS=: 00:06:51.655 22:52:23 -- accel/accel.sh@20 -- # read -r var val 00:06:51.655 22:52:23 -- accel/accel.sh@21 -- # val= 00:06:51.655 22:52:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.655 22:52:23 
-- accel/accel.sh@20 -- # IFS=: 00:06:51.655 22:52:23 -- accel/accel.sh@20 -- # read -r var val 00:06:51.655 22:52:23 -- accel/accel.sh@21 -- # val= 00:06:51.655 22:52:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.655 22:52:23 -- accel/accel.sh@20 -- # IFS=: 00:06:51.655 22:52:23 -- accel/accel.sh@20 -- # read -r var val 00:06:51.655 22:52:23 -- accel/accel.sh@21 -- # val= 00:06:51.655 22:52:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.655 22:52:23 -- accel/accel.sh@20 -- # IFS=: 00:06:51.655 22:52:23 -- accel/accel.sh@20 -- # read -r var val 00:06:51.655 22:52:23 -- accel/accel.sh@21 -- # val= 00:06:51.655 22:52:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.655 22:52:23 -- accel/accel.sh@20 -- # IFS=: 00:06:51.655 22:52:23 -- accel/accel.sh@20 -- # read -r var val 00:06:51.655 22:52:23 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:51.655 22:52:23 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:51.655 22:52:23 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:51.655 00:06:51.655 real 0m2.653s 00:06:51.656 user 0m2.397s 00:06:51.656 sys 0m0.265s 00:06:51.656 22:52:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:51.656 22:52:23 -- common/autotest_common.sh@10 -- # set +x 00:06:51.656 ************************************ 00:06:51.656 END TEST accel_deomp_full_mthread 00:06:51.656 ************************************ 00:06:51.656 22:52:23 -- accel/accel.sh@116 -- # [[ n == y ]] 00:06:51.656 22:52:23 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:51.656 22:52:23 -- accel/accel.sh@129 -- # build_accel_config 00:06:51.656 22:52:23 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:06:51.656 22:52:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:51.656 22:52:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:51.656 22:52:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.656 22:52:23 
-- common/autotest_common.sh@10 -- # set +x 00:06:51.656 22:52:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.656 22:52:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:51.656 22:52:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:51.656 22:52:23 -- accel/accel.sh@41 -- # local IFS=, 00:06:51.656 22:52:23 -- accel/accel.sh@42 -- # jq -r . 00:06:51.656 ************************************ 00:06:51.656 START TEST accel_dif_functional_tests 00:06:51.656 ************************************ 00:06:51.656 22:52:23 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:51.656 [2024-07-24 22:52:23.842900] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:51.656 [2024-07-24 22:52:23.842953] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3045391 ] 00:06:51.656 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.656 [2024-07-24 22:52:23.914495] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:51.656 [2024-07-24 22:52:23.951813] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:51.656 [2024-07-24 22:52:23.951912] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:51.656 [2024-07-24 22:52:23.951914] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.656 00:06:51.656 00:06:51.656 CUnit - A unit testing framework for C - Version 2.1-3 00:06:51.656 http://cunit.sourceforge.net/ 00:06:51.656 00:06:51.656 00:06:51.656 Suite: accel_dif 00:06:51.656 Test: verify: DIF generated, GUARD check ...passed 00:06:51.656 Test: verify: DIF generated, APPTAG check ...passed 00:06:51.656 Test: verify: DIF generated, REFTAG check ...passed 00:06:51.656 Test: verify: DIF not generated, GUARD check ...[2024-07-24 22:52:24.013897] dif.c: 
777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:51.656 [2024-07-24 22:52:24.013943] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:51.656 passed 00:06:51.656 Test: verify: DIF not generated, APPTAG check ...[2024-07-24 22:52:24.013974] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:51.656 [2024-07-24 22:52:24.013990] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:51.656 passed 00:06:51.656 Test: verify: DIF not generated, REFTAG check ...[2024-07-24 22:52:24.014010] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:51.656 [2024-07-24 22:52:24.014026] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:51.656 passed 00:06:51.656 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:51.656 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-24 22:52:24.014068] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:51.656 passed 00:06:51.656 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:51.656 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:51.656 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:51.656 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-24 22:52:24.014169] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:51.656 passed 00:06:51.656 Test: generate copy: DIF generated, GUARD check ...passed 00:06:51.656 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:51.656 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:51.656 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:51.656 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 
00:06:51.656 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:51.656 Test: generate copy: iovecs-len validate ...[2024-07-24 22:52:24.014332] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:06:51.656 passed 00:06:51.656 Test: generate copy: buffer alignment validate ...passed 00:06:51.656 00:06:51.656 Run Summary: Type Total Ran Passed Failed Inactive 00:06:51.656 suites 1 1 n/a 0 0 00:06:51.656 tests 20 20 20 0 0 00:06:51.656 asserts 204 204 204 0 n/a 00:06:51.656 00:06:51.656 Elapsed time = 0.000 seconds 00:06:51.914 00:06:51.914 real 0m0.370s 00:06:51.914 user 0m0.547s 00:06:51.914 sys 0m0.162s 00:06:51.914 22:52:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:51.914 22:52:24 -- common/autotest_common.sh@10 -- # set +x 00:06:51.914 ************************************ 00:06:51.914 END TEST accel_dif_functional_tests 00:06:51.914 ************************************ 00:06:51.914 00:06:51.915 real 0m55.829s 00:06:51.915 user 1m3.451s 00:06:51.915 sys 0m7.109s 00:06:51.915 22:52:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:51.915 22:52:24 -- common/autotest_common.sh@10 -- # set +x 00:06:51.915 ************************************ 00:06:51.915 END TEST accel 00:06:51.915 ************************************ 00:06:51.915 22:52:24 -- spdk/autotest.sh@190 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:51.915 22:52:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:51.915 22:52:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:51.915 22:52:24 -- common/autotest_common.sh@10 -- # set +x 00:06:51.915 ************************************ 00:06:51.915 START TEST accel_rpc 00:06:51.915 ************************************ 00:06:51.915 22:52:24 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 
00:06:52.219 * Looking for test storage... 00:06:52.219 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:52.219 22:52:24 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:52.219 22:52:24 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=3045684 00:06:52.219 22:52:24 -- accel/accel_rpc.sh@15 -- # waitforlisten 3045684 00:06:52.219 22:52:24 -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:52.219 22:52:24 -- common/autotest_common.sh@819 -- # '[' -z 3045684 ']' 00:06:52.219 22:52:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.219 22:52:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:52.219 22:52:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:52.219 22:52:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:52.219 22:52:24 -- common/autotest_common.sh@10 -- # set +x 00:06:52.219 [2024-07-24 22:52:24.416302] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:06:52.219 [2024-07-24 22:52:24.416358] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3045684 ] 00:06:52.219 EAL: No free 2048 kB hugepages reported on node 1 00:06:52.219 [2024-07-24 22:52:24.486071] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.219 [2024-07-24 22:52:24.523597] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:52.219 [2024-07-24 22:52:24.523731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.793 22:52:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:52.793 22:52:25 -- common/autotest_common.sh@852 -- # return 0 00:06:52.793 22:52:25 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:52.793 22:52:25 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:52.793 22:52:25 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:52.793 22:52:25 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:52.793 22:52:25 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:52.793 22:52:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:52.793 22:52:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:52.793 22:52:25 -- common/autotest_common.sh@10 -- # set +x 00:06:52.793 ************************************ 00:06:52.793 START TEST accel_assign_opcode 00:06:52.793 ************************************ 00:06:52.793 22:52:25 -- common/autotest_common.sh@1104 -- # accel_assign_opcode_test_suite 00:06:52.793 22:52:25 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:52.793 22:52:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:52.793 22:52:25 -- common/autotest_common.sh@10 -- # set +x 00:06:52.793 [2024-07-24 22:52:25.209754] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation 
copy will be assigned to module incorrect 00:06:52.793 22:52:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:52.793 22:52:25 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:52.793 22:52:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:52.793 22:52:25 -- common/autotest_common.sh@10 -- # set +x 00:06:52.793 [2024-07-24 22:52:25.217766] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:52.793 22:52:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:52.793 22:52:25 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:52.793 22:52:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:52.793 22:52:25 -- common/autotest_common.sh@10 -- # set +x 00:06:53.053 22:52:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:53.053 22:52:25 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:53.053 22:52:25 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:53.053 22:52:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:53.053 22:52:25 -- common/autotest_common.sh@10 -- # set +x 00:06:53.053 22:52:25 -- accel/accel_rpc.sh@42 -- # grep software 00:06:53.053 22:52:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:53.053 software 00:06:53.053 00:06:53.053 real 0m0.221s 00:06:53.053 user 0m0.051s 00:06:53.053 sys 0m0.009s 00:06:53.053 22:52:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:53.053 22:52:25 -- common/autotest_common.sh@10 -- # set +x 00:06:53.053 ************************************ 00:06:53.053 END TEST accel_assign_opcode 00:06:53.053 ************************************ 00:06:53.053 22:52:25 -- accel/accel_rpc.sh@55 -- # killprocess 3045684 00:06:53.053 22:52:25 -- common/autotest_common.sh@926 -- # '[' -z 3045684 ']' 00:06:53.053 22:52:25 -- common/autotest_common.sh@930 -- # kill -0 3045684 00:06:53.053 22:52:25 -- common/autotest_common.sh@931 -- # uname 00:06:53.053 
22:52:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:53.053 22:52:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3045684 00:06:53.312 22:52:25 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:53.313 22:52:25 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:53.313 22:52:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3045684' 00:06:53.313 killing process with pid 3045684 00:06:53.313 22:52:25 -- common/autotest_common.sh@945 -- # kill 3045684 00:06:53.313 22:52:25 -- common/autotest_common.sh@950 -- # wait 3045684 00:06:53.572 00:06:53.572 real 0m1.553s 00:06:53.572 user 0m1.573s 00:06:53.572 sys 0m0.468s 00:06:53.572 22:52:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:53.572 22:52:25 -- common/autotest_common.sh@10 -- # set +x 00:06:53.572 ************************************ 00:06:53.572 END TEST accel_rpc 00:06:53.572 ************************************ 00:06:53.572 22:52:25 -- spdk/autotest.sh@191 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:53.572 22:52:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:53.572 22:52:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:53.572 22:52:25 -- common/autotest_common.sh@10 -- # set +x 00:06:53.572 ************************************ 00:06:53.572 START TEST app_cmdline 00:06:53.572 ************************************ 00:06:53.572 22:52:25 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:53.572 * Looking for test storage... 
00:06:53.572 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:53.572 22:52:25 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:53.572 22:52:25 -- app/cmdline.sh@17 -- # spdk_tgt_pid=3046044 00:06:53.572 22:52:25 -- app/cmdline.sh@18 -- # waitforlisten 3046044 00:06:53.572 22:52:25 -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:53.572 22:52:25 -- common/autotest_common.sh@819 -- # '[' -z 3046044 ']' 00:06:53.572 22:52:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.572 22:52:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:53.572 22:52:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.572 22:52:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:53.572 22:52:25 -- common/autotest_common.sh@10 -- # set +x 00:06:53.832 [2024-07-24 22:52:26.022865] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:06:53.832 [2024-07-24 22:52:26.022921] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3046044 ] 00:06:53.832 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.832 [2024-07-24 22:52:26.095636] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.832 [2024-07-24 22:52:26.132610] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:53.832 [2024-07-24 22:52:26.132729] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.399 22:52:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:54.400 22:52:26 -- common/autotest_common.sh@852 -- # return 0 00:06:54.400 22:52:26 -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:54.658 { 00:06:54.658 "version": "SPDK v24.01.1-pre git sha1 dbef7efac", 00:06:54.658 "fields": { 00:06:54.658 "major": 24, 00:06:54.658 "minor": 1, 00:06:54.658 "patch": 1, 00:06:54.658 "suffix": "-pre", 00:06:54.658 "commit": "dbef7efac" 00:06:54.658 } 00:06:54.658 } 00:06:54.658 22:52:26 -- app/cmdline.sh@22 -- # expected_methods=() 00:06:54.658 22:52:26 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:54.658 22:52:26 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:54.658 22:52:26 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:54.658 22:52:26 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:54.658 22:52:26 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:54.658 22:52:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:54.658 22:52:26 -- app/cmdline.sh@26 -- # sort 00:06:54.658 22:52:26 -- common/autotest_common.sh@10 -- # set +x 00:06:54.658 22:52:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:54.658 
22:52:27 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:54.658 22:52:27 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:54.658 22:52:27 -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:54.658 22:52:27 -- common/autotest_common.sh@640 -- # local es=0 00:06:54.658 22:52:27 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:54.658 22:52:27 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:54.658 22:52:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:54.658 22:52:27 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:54.659 22:52:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:54.659 22:52:27 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:54.659 22:52:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:54.659 22:52:27 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:54.659 22:52:27 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:54.659 22:52:27 -- common/autotest_common.sh@643 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:54.918 request: 00:06:54.918 { 00:06:54.918 "method": "env_dpdk_get_mem_stats", 00:06:54.918 "req_id": 1 00:06:54.918 } 00:06:54.918 Got JSON-RPC error response 00:06:54.918 response: 00:06:54.918 { 00:06:54.918 "code": -32601, 00:06:54.918 "message": "Method not found" 00:06:54.918 } 00:06:54.918 22:52:27 -- common/autotest_common.sh@643 
-- # es=1 00:06:54.918 22:52:27 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:54.918 22:52:27 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:54.918 22:52:27 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:54.918 22:52:27 -- app/cmdline.sh@1 -- # killprocess 3046044 00:06:54.918 22:52:27 -- common/autotest_common.sh@926 -- # '[' -z 3046044 ']' 00:06:54.918 22:52:27 -- common/autotest_common.sh@930 -- # kill -0 3046044 00:06:54.918 22:52:27 -- common/autotest_common.sh@931 -- # uname 00:06:54.918 22:52:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:54.918 22:52:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3046044 00:06:54.918 22:52:27 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:54.918 22:52:27 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:54.918 22:52:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3046044' 00:06:54.918 killing process with pid 3046044 00:06:54.918 22:52:27 -- common/autotest_common.sh@945 -- # kill 3046044 00:06:54.918 22:52:27 -- common/autotest_common.sh@950 -- # wait 3046044 00:06:55.178 00:06:55.178 real 0m1.655s 00:06:55.178 user 0m1.893s 00:06:55.178 sys 0m0.502s 00:06:55.178 22:52:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.178 22:52:27 -- common/autotest_common.sh@10 -- # set +x 00:06:55.178 ************************************ 00:06:55.178 END TEST app_cmdline 00:06:55.178 ************************************ 00:06:55.178 22:52:27 -- spdk/autotest.sh@192 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:55.178 22:52:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:55.178 22:52:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:55.178 22:52:27 -- common/autotest_common.sh@10 -- # set +x 00:06:55.178 ************************************ 00:06:55.178 START TEST version 00:06:55.178 
************************************ 00:06:55.178 22:52:27 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:55.437 * Looking for test storage... 00:06:55.437 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:55.437 22:52:27 -- app/version.sh@17 -- # get_header_version major 00:06:55.437 22:52:27 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:55.437 22:52:27 -- app/version.sh@14 -- # cut -f2 00:06:55.437 22:52:27 -- app/version.sh@14 -- # tr -d '"' 00:06:55.437 22:52:27 -- app/version.sh@17 -- # major=24 00:06:55.437 22:52:27 -- app/version.sh@18 -- # get_header_version minor 00:06:55.437 22:52:27 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:55.437 22:52:27 -- app/version.sh@14 -- # cut -f2 00:06:55.437 22:52:27 -- app/version.sh@14 -- # tr -d '"' 00:06:55.437 22:52:27 -- app/version.sh@18 -- # minor=1 00:06:55.437 22:52:27 -- app/version.sh@19 -- # get_header_version patch 00:06:55.437 22:52:27 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:55.437 22:52:27 -- app/version.sh@14 -- # cut -f2 00:06:55.437 22:52:27 -- app/version.sh@14 -- # tr -d '"' 00:06:55.437 22:52:27 -- app/version.sh@19 -- # patch=1 00:06:55.437 22:52:27 -- app/version.sh@20 -- # get_header_version suffix 00:06:55.437 22:52:27 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:55.437 22:52:27 -- app/version.sh@14 -- # cut -f2 00:06:55.437 22:52:27 -- app/version.sh@14 -- # tr -d '"' 00:06:55.437 22:52:27 -- app/version.sh@20 -- # suffix=-pre 00:06:55.437 22:52:27 -- 
app/version.sh@22 -- # version=24.1 00:06:55.437 22:52:27 -- app/version.sh@25 -- # (( patch != 0 )) 00:06:55.437 22:52:27 -- app/version.sh@25 -- # version=24.1.1 00:06:55.437 22:52:27 -- app/version.sh@28 -- # version=24.1.1rc0 00:06:55.437 22:52:27 -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:55.437 22:52:27 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:55.437 22:52:27 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:06:55.437 22:52:27 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:06:55.437 00:06:55.437 real 0m0.181s 00:06:55.437 user 0m0.092s 00:06:55.437 sys 0m0.135s 00:06:55.437 22:52:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.437 22:52:27 -- common/autotest_common.sh@10 -- # set +x 00:06:55.437 ************************************ 00:06:55.437 END TEST version 00:06:55.437 ************************************ 00:06:55.437 22:52:27 -- spdk/autotest.sh@194 -- # '[' 0 -eq 1 ']' 00:06:55.437 22:52:27 -- spdk/autotest.sh@204 -- # uname -s 00:06:55.437 22:52:27 -- spdk/autotest.sh@204 -- # [[ Linux == Linux ]] 00:06:55.437 22:52:27 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:06:55.437 22:52:27 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:06:55.437 22:52:27 -- spdk/autotest.sh@217 -- # '[' 0 -eq 1 ']' 00:06:55.437 22:52:27 -- spdk/autotest.sh@264 -- # '[' 0 -eq 1 ']' 00:06:55.437 22:52:27 -- spdk/autotest.sh@268 -- # timing_exit lib 00:06:55.438 22:52:27 -- common/autotest_common.sh@718 -- # xtrace_disable 00:06:55.438 22:52:27 -- common/autotest_common.sh@10 -- # set +x 00:06:55.438 22:52:27 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:06:55.438 22:52:27 -- spdk/autotest.sh@278 -- 
# '[' 0 -eq 1 ']' 00:06:55.438 22:52:27 -- spdk/autotest.sh@287 -- # '[' 1 -eq 1 ']' 00:06:55.438 22:52:27 -- spdk/autotest.sh@288 -- # export NET_TYPE 00:06:55.438 22:52:27 -- spdk/autotest.sh@291 -- # '[' tcp = rdma ']' 00:06:55.438 22:52:27 -- spdk/autotest.sh@294 -- # '[' tcp = tcp ']' 00:06:55.438 22:52:27 -- spdk/autotest.sh@295 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:55.438 22:52:27 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:06:55.438 22:52:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:55.438 22:52:27 -- common/autotest_common.sh@10 -- # set +x 00:06:55.438 ************************************ 00:06:55.438 START TEST nvmf_tcp 00:06:55.438 ************************************ 00:06:55.438 22:52:27 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:55.697 * Looking for test storage... 00:06:55.697 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:55.697 22:52:27 -- nvmf/nvmf.sh@10 -- # uname -s 00:06:55.697 22:52:27 -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:55.697 22:52:27 -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:55.697 22:52:27 -- nvmf/common.sh@7 -- # uname -s 00:06:55.697 22:52:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:55.697 22:52:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:55.697 22:52:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:55.697 22:52:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:55.697 22:52:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:55.697 22:52:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:55.697 22:52:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:55.697 22:52:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:55.697 22:52:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:55.697 22:52:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:55.697 22:52:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:06:55.698 22:52:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:06:55.698 22:52:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:55.698 22:52:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:55.698 22:52:27 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:55.698 22:52:27 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:55.698 22:52:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:55.698 22:52:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:55.698 22:52:27 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:55.698 22:52:27 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.698 22:52:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.698 22:52:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.698 22:52:27 -- paths/export.sh@5 -- # export PATH 00:06:55.698 22:52:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.698 22:52:27 -- nvmf/common.sh@46 -- # : 0 00:06:55.698 22:52:27 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:06:55.698 22:52:27 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:06:55.698 
22:52:27 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:06:55.698 22:52:27 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:55.698 22:52:27 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:55.698 22:52:27 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:06:55.698 22:52:27 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:06:55.698 22:52:27 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:06:55.698 22:52:27 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:55.698 22:52:27 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:06:55.698 22:52:27 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:06:55.698 22:52:27 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:55.698 22:52:27 -- common/autotest_common.sh@10 -- # set +x 00:06:55.698 22:52:28 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:06:55.698 22:52:28 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:55.698 22:52:28 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:06:55.698 22:52:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:55.698 22:52:28 -- common/autotest_common.sh@10 -- # set +x 00:06:55.698 ************************************ 00:06:55.698 START TEST nvmf_example 00:06:55.698 ************************************ 00:06:55.698 22:52:28 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:55.698 * Looking for test storage... 
00:06:55.698 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:55.698 22:52:28 -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:55.698 22:52:28 -- nvmf/common.sh@7 -- # uname -s 00:06:55.698 22:52:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:55.698 22:52:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:55.698 22:52:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:55.698 22:52:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:55.698 22:52:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:55.698 22:52:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:55.698 22:52:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:55.698 22:52:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:55.698 22:52:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:55.698 22:52:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:55.958 22:52:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:06:55.958 22:52:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:06:55.958 22:52:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:55.958 22:52:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:55.958 22:52:28 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:55.958 22:52:28 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:55.958 22:52:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:55.958 22:52:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:55.958 22:52:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:55.958 22:52:28 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.958 22:52:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.958 22:52:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.958 22:52:28 -- paths/export.sh@5 -- # export PATH 00:06:55.958 22:52:28 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.958 22:52:28 -- nvmf/common.sh@46 -- # : 0 00:06:55.958 22:52:28 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:06:55.958 22:52:28 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:06:55.958 22:52:28 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:06:55.958 22:52:28 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:55.958 22:52:28 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:55.958 22:52:28 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:06:55.958 22:52:28 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:06:55.958 22:52:28 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:06:55.958 22:52:28 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:06:55.958 22:52:28 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:06:55.958 22:52:28 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:06:55.958 22:52:28 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:06:55.958 22:52:28 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:06:55.958 22:52:28 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:06:55.958 22:52:28 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:06:55.958 22:52:28 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:06:55.958 22:52:28 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:55.958 22:52:28 -- common/autotest_common.sh@10 -- # set +x 00:06:55.958 22:52:28 -- 
target/nvmf_example.sh@41 -- # nvmftestinit 00:06:55.958 22:52:28 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:06:55.958 22:52:28 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:55.958 22:52:28 -- nvmf/common.sh@436 -- # prepare_net_devs 00:06:55.958 22:52:28 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:06:55.958 22:52:28 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:06:55.958 22:52:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:55.958 22:52:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:55.958 22:52:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:55.958 22:52:28 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:06:55.958 22:52:28 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:06:55.958 22:52:28 -- nvmf/common.sh@284 -- # xtrace_disable 00:06:55.958 22:52:28 -- common/autotest_common.sh@10 -- # set +x 00:07:02.531 22:52:34 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:02.531 22:52:34 -- nvmf/common.sh@290 -- # pci_devs=() 00:07:02.531 22:52:34 -- nvmf/common.sh@290 -- # local -a pci_devs 00:07:02.531 22:52:34 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:07:02.531 22:52:34 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:07:02.531 22:52:34 -- nvmf/common.sh@292 -- # pci_drivers=() 00:07:02.531 22:52:34 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:07:02.531 22:52:34 -- nvmf/common.sh@294 -- # net_devs=() 00:07:02.531 22:52:34 -- nvmf/common.sh@294 -- # local -ga net_devs 00:07:02.531 22:52:34 -- nvmf/common.sh@295 -- # e810=() 00:07:02.531 22:52:34 -- nvmf/common.sh@295 -- # local -ga e810 00:07:02.531 22:52:34 -- nvmf/common.sh@296 -- # x722=() 00:07:02.531 22:52:34 -- nvmf/common.sh@296 -- # local -ga x722 00:07:02.531 22:52:34 -- nvmf/common.sh@297 -- # mlx=() 00:07:02.531 22:52:34 -- nvmf/common.sh@297 -- # local -ga mlx 00:07:02.531 22:52:34 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:07:02.531 22:52:34 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:02.531 22:52:34 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:02.531 22:52:34 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:02.531 22:52:34 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:02.531 22:52:34 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:02.531 22:52:34 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:02.531 22:52:34 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:02.531 22:52:34 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:02.531 22:52:34 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:02.531 22:52:34 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:02.531 22:52:34 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:07:02.531 22:52:34 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:07:02.531 22:52:34 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:07:02.531 22:52:34 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:07:02.531 22:52:34 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:07:02.531 22:52:34 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:07:02.531 22:52:34 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:02.531 22:52:34 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:02.531 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:02.531 22:52:34 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:02.531 22:52:34 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:02.531 22:52:34 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:02.531 22:52:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:02.531 22:52:34 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:02.531 22:52:34 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 
00:07:02.531 22:52:34 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:02.531 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:02.531 22:52:34 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:02.531 22:52:34 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:02.531 22:52:34 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:02.531 22:52:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:02.531 22:52:34 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:02.531 22:52:34 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:07:02.531 22:52:34 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:07:02.531 22:52:34 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:07:02.531 22:52:34 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:02.531 22:52:34 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:02.531 22:52:34 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:02.531 22:52:34 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:02.531 22:52:34 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:02.531 Found net devices under 0000:af:00.0: cvl_0_0 00:07:02.531 22:52:34 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:02.531 22:52:34 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:02.531 22:52:34 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:02.531 22:52:34 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:02.531 22:52:34 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:02.531 22:52:34 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:02.531 Found net devices under 0000:af:00.1: cvl_0_1 00:07:02.531 22:52:34 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:02.531 22:52:34 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:07:02.531 22:52:34 -- nvmf/common.sh@402 -- # is_hw=yes 00:07:02.531 22:52:34 -- 
nvmf/common.sh@404 -- # [[ yes == yes ]] 00:07:02.531 22:52:34 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:07:02.531 22:52:34 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:07:02.531 22:52:34 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:02.531 22:52:34 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:02.531 22:52:34 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:02.531 22:52:34 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:07:02.531 22:52:34 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:02.531 22:52:34 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:02.531 22:52:34 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:07:02.531 22:52:34 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:02.531 22:52:34 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:02.531 22:52:34 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:07:02.531 22:52:34 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:07:02.531 22:52:34 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:07:02.531 22:52:34 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:02.531 22:52:34 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:02.531 22:52:34 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:02.531 22:52:34 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:07:02.531 22:52:34 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:02.531 22:52:34 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:02.531 22:52:34 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:02.531 22:52:34 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:07:02.531 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:02.531 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:07:02.531 00:07:02.531 --- 10.0.0.2 ping statistics --- 00:07:02.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:02.531 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:07:02.531 22:52:34 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:02.531 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:02.531 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:07:02.531 00:07:02.531 --- 10.0.0.1 ping statistics --- 00:07:02.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:02.531 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:07:02.531 22:52:34 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:02.531 22:52:34 -- nvmf/common.sh@410 -- # return 0 00:07:02.531 22:52:34 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:02.531 22:52:34 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:02.531 22:52:34 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:02.531 22:52:34 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:02.531 22:52:34 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:02.531 22:52:34 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:02.531 22:52:34 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:02.531 22:52:34 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:02.531 22:52:34 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:02.531 22:52:34 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:02.531 22:52:34 -- common/autotest_common.sh@10 -- # set +x 00:07:02.531 22:52:34 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:02.531 22:52:34 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:02.531 22:52:34 -- target/nvmf_example.sh@34 -- # nvmfpid=3049661 00:07:02.531 22:52:34 -- target/nvmf_example.sh@33 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:02.531 22:52:34 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:02.531 22:52:34 -- target/nvmf_example.sh@36 -- # waitforlisten 3049661 00:07:02.531 22:52:34 -- common/autotest_common.sh@819 -- # '[' -z 3049661 ']' 00:07:02.531 22:52:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:02.531 22:52:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:02.532 22:52:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:02.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:02.532 22:52:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:02.532 22:52:34 -- common/autotest_common.sh@10 -- # set +x 00:07:02.790 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.359 22:52:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:03.359 22:52:35 -- common/autotest_common.sh@852 -- # return 0 00:07:03.359 22:52:35 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:03.359 22:52:35 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:03.359 22:52:35 -- common/autotest_common.sh@10 -- # set +x 00:07:03.619 22:52:35 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:03.619 22:52:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:03.619 22:52:35 -- common/autotest_common.sh@10 -- # set +x 00:07:03.619 22:52:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:03.619 22:52:35 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:03.619 22:52:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:03.619 22:52:35 -- common/autotest_common.sh@10 -- # set +x 00:07:03.619 22:52:35 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:03.619 22:52:35 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:03.619 22:52:35 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:03.619 22:52:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:03.619 22:52:35 -- common/autotest_common.sh@10 -- # set +x 00:07:03.619 22:52:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:03.619 22:52:35 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:03.619 22:52:35 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:03.619 22:52:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:03.619 22:52:35 -- common/autotest_common.sh@10 -- # set +x 00:07:03.619 22:52:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:03.619 22:52:35 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:03.619 22:52:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:03.619 22:52:35 -- common/autotest_common.sh@10 -- # set +x 00:07:03.619 22:52:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:03.619 22:52:35 -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:07:03.619 22:52:35 -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:03.619 EAL: No free 2048 kB hugepages reported on node 1 00:07:15.828 Initializing NVMe Controllers 00:07:15.828 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:15.828 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:15.828 Initialization complete. 
Launching workers. 00:07:15.828 ======================================================== 00:07:15.828 Latency(us) 00:07:15.828 Device Information : IOPS MiB/s Average min max 00:07:15.828 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 17627.10 68.86 3632.56 670.67 15538.59 00:07:15.828 ======================================================== 00:07:15.828 Total : 17627.10 68.86 3632.56 670.67 15538.59 00:07:15.828 00:07:15.828 22:52:46 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:15.828 22:52:46 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:15.828 22:52:46 -- nvmf/common.sh@476 -- # nvmfcleanup 00:07:15.828 22:52:46 -- nvmf/common.sh@116 -- # sync 00:07:15.828 22:52:46 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:07:15.828 22:52:46 -- nvmf/common.sh@119 -- # set +e 00:07:15.828 22:52:46 -- nvmf/common.sh@120 -- # for i in {1..20} 00:07:15.828 22:52:46 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:07:15.828 rmmod nvme_tcp 00:07:15.828 rmmod nvme_fabrics 00:07:15.828 rmmod nvme_keyring 00:07:15.828 22:52:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:07:15.828 22:52:46 -- nvmf/common.sh@123 -- # set -e 00:07:15.828 22:52:46 -- nvmf/common.sh@124 -- # return 0 00:07:15.828 22:52:46 -- nvmf/common.sh@477 -- # '[' -n 3049661 ']' 00:07:15.828 22:52:46 -- nvmf/common.sh@478 -- # killprocess 3049661 00:07:15.828 22:52:46 -- common/autotest_common.sh@926 -- # '[' -z 3049661 ']' 00:07:15.828 22:52:46 -- common/autotest_common.sh@930 -- # kill -0 3049661 00:07:15.828 22:52:46 -- common/autotest_common.sh@931 -- # uname 00:07:15.828 22:52:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:15.828 22:52:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3049661 00:07:15.828 22:52:46 -- common/autotest_common.sh@932 -- # process_name=nvmf 00:07:15.828 22:52:46 -- common/autotest_common.sh@936 -- # '[' nvmf = sudo ']' 00:07:15.828 22:52:46 -- 
common/autotest_common.sh@944 -- # echo 'killing process with pid 3049661' 00:07:15.828 killing process with pid 3049661 00:07:15.828 22:52:46 -- common/autotest_common.sh@945 -- # kill 3049661 00:07:15.828 22:52:46 -- common/autotest_common.sh@950 -- # wait 3049661 00:07:15.828 nvmf threads initialize successfully 00:07:15.828 bdev subsystem init successfully 00:07:15.828 created a nvmf target service 00:07:15.828 create targets's poll groups done 00:07:15.828 all subsystems of target started 00:07:15.828 nvmf target is running 00:07:15.828 all subsystems of target stopped 00:07:15.828 destroy targets's poll groups done 00:07:15.828 destroyed the nvmf target service 00:07:15.828 bdev subsystem finish successfully 00:07:15.828 nvmf threads destroy successfully 00:07:15.828 22:52:46 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:07:15.828 22:52:46 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:07:15.828 22:52:46 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:07:15.828 22:52:46 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:15.828 22:52:46 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:07:15.828 22:52:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:15.828 22:52:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:15.828 22:52:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:16.086 22:52:48 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:07:16.086 22:52:48 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:16.086 22:52:48 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:16.086 22:52:48 -- common/autotest_common.sh@10 -- # set +x 00:07:16.086 00:07:16.086 real 0m20.503s 00:07:16.086 user 0m45.543s 00:07:16.086 sys 0m7.192s 00:07:16.086 22:52:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:16.086 22:52:48 -- common/autotest_common.sh@10 -- # set +x 00:07:16.086 ************************************ 00:07:16.086 END TEST 
nvmf_example 00:07:16.086 ************************************ 00:07:16.347 22:52:48 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:16.347 22:52:48 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:16.347 22:52:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:16.347 22:52:48 -- common/autotest_common.sh@10 -- # set +x 00:07:16.347 ************************************ 00:07:16.347 START TEST nvmf_filesystem 00:07:16.347 ************************************ 00:07:16.347 22:52:48 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:16.347 * Looking for test storage... 00:07:16.347 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:16.347 22:52:48 -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:07:16.347 22:52:48 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:16.347 22:52:48 -- common/autotest_common.sh@34 -- # set -e 00:07:16.347 22:52:48 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:16.347 22:52:48 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:16.347 22:52:48 -- common/autotest_common.sh@38 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:16.347 22:52:48 -- common/autotest_common.sh@39 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:07:16.347 22:52:48 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:16.347 22:52:48 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:16.347 22:52:48 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:16.347 22:52:48 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:16.347 22:52:48 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:16.347 
22:52:48 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:16.347 22:52:48 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:16.347 22:52:48 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:16.347 22:52:48 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:16.347 22:52:48 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:16.347 22:52:48 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:16.347 22:52:48 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:16.347 22:52:48 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:16.347 22:52:48 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:16.347 22:52:48 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:16.347 22:52:48 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:16.347 22:52:48 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:16.347 22:52:48 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:16.347 22:52:48 -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:16.347 22:52:48 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:16.347 22:52:48 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:16.347 22:52:48 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:16.347 22:52:48 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:16.347 22:52:48 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:16.347 22:52:48 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:16.347 22:52:48 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:16.347 22:52:48 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:16.347 22:52:48 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:16.347 22:52:48 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:16.347 22:52:48 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:16.347 22:52:48 -- common/build_config.sh@31 -- # 
CONFIG_OCF=n 00:07:16.347 22:52:48 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:16.347 22:52:48 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:16.347 22:52:48 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:16.347 22:52:48 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:16.347 22:52:48 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:16.347 22:52:48 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:16.347 22:52:48 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:16.347 22:52:48 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:16.347 22:52:48 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:16.347 22:52:48 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:07:16.347 22:52:48 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:16.347 22:52:48 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:16.347 22:52:48 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:16.347 22:52:48 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:16.347 22:52:48 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:07:16.347 22:52:48 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:07:16.347 22:52:48 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:16.347 22:52:48 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:07:16.347 22:52:48 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:07:16.347 22:52:48 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=y 00:07:16.347 22:52:48 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:07:16.347 22:52:48 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:07:16.347 22:52:48 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:07:16.347 22:52:48 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:07:16.347 22:52:48 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 
00:07:16.347 22:52:48 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:07:16.347 22:52:48 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:07:16.347 22:52:48 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:07:16.347 22:52:48 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:07:16.347 22:52:48 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:16.347 22:52:48 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:07:16.347 22:52:48 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:07:16.347 22:52:48 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:07:16.347 22:52:48 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:07:16.347 22:52:48 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:16.347 22:52:48 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:07:16.347 22:52:48 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:07:16.347 22:52:48 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:07:16.347 22:52:48 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:07:16.347 22:52:48 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:07:16.347 22:52:48 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:07:16.347 22:52:48 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:07:16.347 22:52:48 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:07:16.347 22:52:48 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:07:16.347 22:52:48 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:07:16.347 22:52:48 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:16.347 22:52:48 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:07:16.347 22:52:48 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:07:16.347 22:52:48 -- common/autotest_common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:16.347 22:52:48 -- common/applications.sh@8 -- # dirname 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:16.347 22:52:48 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:16.347 22:52:48 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:16.347 22:52:48 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:16.347 22:52:48 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:16.347 22:52:48 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:16.347 22:52:48 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:16.347 22:52:48 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:16.347 22:52:48 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:16.347 22:52:48 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:16.347 22:52:48 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:16.347 22:52:48 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:16.347 22:52:48 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:16.347 22:52:48 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:07:16.347 22:52:48 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:16.347 #define SPDK_CONFIG_H 00:07:16.347 #define SPDK_CONFIG_APPS 1 00:07:16.347 #define SPDK_CONFIG_ARCH native 00:07:16.347 #undef SPDK_CONFIG_ASAN 00:07:16.347 #undef SPDK_CONFIG_AVAHI 00:07:16.347 #undef SPDK_CONFIG_CET 00:07:16.347 #define SPDK_CONFIG_COVERAGE 1 00:07:16.347 #define SPDK_CONFIG_CROSS_PREFIX 00:07:16.347 #undef SPDK_CONFIG_CRYPTO 00:07:16.347 #undef 
SPDK_CONFIG_CRYPTO_MLX5 00:07:16.347 #undef SPDK_CONFIG_CUSTOMOCF 00:07:16.347 #undef SPDK_CONFIG_DAOS 00:07:16.347 #define SPDK_CONFIG_DAOS_DIR 00:07:16.347 #define SPDK_CONFIG_DEBUG 1 00:07:16.347 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:16.347 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:16.347 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:07:16.347 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:16.347 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:16.347 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:16.347 #define SPDK_CONFIG_EXAMPLES 1 00:07:16.347 #undef SPDK_CONFIG_FC 00:07:16.347 #define SPDK_CONFIG_FC_PATH 00:07:16.347 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:16.347 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:16.347 #undef SPDK_CONFIG_FUSE 00:07:16.347 #undef SPDK_CONFIG_FUZZER 00:07:16.347 #define SPDK_CONFIG_FUZZER_LIB 00:07:16.347 #undef SPDK_CONFIG_GOLANG 00:07:16.347 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:16.347 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:16.347 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:16.347 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:16.347 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:16.347 #define SPDK_CONFIG_IDXD 1 00:07:16.348 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:16.348 #undef SPDK_CONFIG_IPSEC_MB 00:07:16.348 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:16.348 #define SPDK_CONFIG_ISAL 1 00:07:16.348 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:16.348 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:16.348 #define SPDK_CONFIG_LIBDIR 00:07:16.348 #undef SPDK_CONFIG_LTO 00:07:16.348 #define SPDK_CONFIG_MAX_LCORES 00:07:16.348 #define SPDK_CONFIG_NVME_CUSE 1 00:07:16.348 #undef SPDK_CONFIG_OCF 00:07:16.348 #define SPDK_CONFIG_OCF_PATH 00:07:16.348 #define SPDK_CONFIG_OPENSSL_PATH 00:07:16.348 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:16.348 
#undef SPDK_CONFIG_PGO_USE 00:07:16.348 #define SPDK_CONFIG_PREFIX /usr/local 00:07:16.348 #undef SPDK_CONFIG_RAID5F 00:07:16.348 #undef SPDK_CONFIG_RBD 00:07:16.348 #define SPDK_CONFIG_RDMA 1 00:07:16.348 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:16.348 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:16.348 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:16.348 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:16.348 #define SPDK_CONFIG_SHARED 1 00:07:16.348 #undef SPDK_CONFIG_SMA 00:07:16.348 #define SPDK_CONFIG_TESTS 1 00:07:16.348 #undef SPDK_CONFIG_TSAN 00:07:16.348 #define SPDK_CONFIG_UBLK 1 00:07:16.348 #define SPDK_CONFIG_UBSAN 1 00:07:16.348 #undef SPDK_CONFIG_UNIT_TESTS 00:07:16.348 #undef SPDK_CONFIG_URING 00:07:16.348 #define SPDK_CONFIG_URING_PATH 00:07:16.348 #undef SPDK_CONFIG_URING_ZNS 00:07:16.348 #undef SPDK_CONFIG_USDT 00:07:16.348 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:16.348 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:16.348 #define SPDK_CONFIG_VFIO_USER 1 00:07:16.348 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:16.348 #define SPDK_CONFIG_VHOST 1 00:07:16.348 #define SPDK_CONFIG_VIRTIO 1 00:07:16.348 #undef SPDK_CONFIG_VTUNE 00:07:16.348 #define SPDK_CONFIG_VTUNE_DIR 00:07:16.348 #define SPDK_CONFIG_WERROR 1 00:07:16.348 #define SPDK_CONFIG_WPDK_DIR 00:07:16.348 #undef SPDK_CONFIG_XNVME 00:07:16.348 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:16.348 22:52:48 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:16.348 22:52:48 -- common/autotest_common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:16.348 22:52:48 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:16.348 22:52:48 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:16.348 22:52:48 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:16.348 22:52:48 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:16.348 22:52:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:16.348 22:52:48 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:16.348 22:52:48 -- paths/export.sh@5 -- # export PATH 00:07:16.348 22:52:48 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:16.348 22:52:48 -- common/autotest_common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:16.348 22:52:48 -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:16.348 22:52:48 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:16.348 22:52:48 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:16.348 22:52:48 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:16.348 22:52:48 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:16.348 22:52:48 -- pm/common@16 -- # TEST_TAG=N/A 00:07:16.348 22:52:48 -- pm/common@17 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:07:16.348 22:52:48 -- common/autotest_common.sh@52 -- # : 1 00:07:16.348 22:52:48 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:07:16.348 22:52:48 -- common/autotest_common.sh@56 -- # : 0 00:07:16.348 22:52:48 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:16.348 22:52:48 -- common/autotest_common.sh@58 -- # : 0 00:07:16.348 22:52:48 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:07:16.348 22:52:48 -- common/autotest_common.sh@60 -- # : 1 00:07:16.348 22:52:48 -- 
common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:16.348 22:52:48 -- common/autotest_common.sh@62 -- # : 0 00:07:16.348 22:52:48 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:07:16.348 22:52:48 -- common/autotest_common.sh@64 -- # : 00:07:16.348 22:52:48 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:07:16.348 22:52:48 -- common/autotest_common.sh@66 -- # : 0 00:07:16.348 22:52:48 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:07:16.348 22:52:48 -- common/autotest_common.sh@68 -- # : 0 00:07:16.348 22:52:48 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:07:16.348 22:52:48 -- common/autotest_common.sh@70 -- # : 0 00:07:16.348 22:52:48 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:07:16.348 22:52:48 -- common/autotest_common.sh@72 -- # : 0 00:07:16.348 22:52:48 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:16.348 22:52:48 -- common/autotest_common.sh@74 -- # : 0 00:07:16.348 22:52:48 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:07:16.348 22:52:48 -- common/autotest_common.sh@76 -- # : 0 00:07:16.348 22:52:48 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:07:16.348 22:52:48 -- common/autotest_common.sh@78 -- # : 0 00:07:16.348 22:52:48 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:07:16.348 22:52:48 -- common/autotest_common.sh@80 -- # : 1 00:07:16.348 22:52:48 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:07:16.348 22:52:48 -- common/autotest_common.sh@82 -- # : 0 00:07:16.348 22:52:48 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:07:16.348 22:52:48 -- common/autotest_common.sh@84 -- # : 0 00:07:16.348 22:52:48 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:07:16.348 22:52:48 -- common/autotest_common.sh@86 -- # : 1 00:07:16.348 22:52:48 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:07:16.348 
22:52:48 -- common/autotest_common.sh@88 -- # : 1 00:07:16.348 22:52:48 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:07:16.348 22:52:48 -- common/autotest_common.sh@90 -- # : 0 00:07:16.348 22:52:48 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:16.348 22:52:48 -- common/autotest_common.sh@92 -- # : 0 00:07:16.348 22:52:48 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:07:16.348 22:52:48 -- common/autotest_common.sh@94 -- # : 0 00:07:16.348 22:52:48 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:07:16.348 22:52:48 -- common/autotest_common.sh@96 -- # : tcp 00:07:16.348 22:52:48 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:16.348 22:52:48 -- common/autotest_common.sh@98 -- # : 0 00:07:16.348 22:52:48 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:07:16.348 22:52:48 -- common/autotest_common.sh@100 -- # : 0 00:07:16.348 22:52:48 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:07:16.348 22:52:48 -- common/autotest_common.sh@102 -- # : 0 00:07:16.348 22:52:48 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:07:16.348 22:52:48 -- common/autotest_common.sh@104 -- # : 0 00:07:16.348 22:52:48 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:07:16.348 22:52:48 -- common/autotest_common.sh@106 -- # : 0 00:07:16.348 22:52:48 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:07:16.348 22:52:48 -- common/autotest_common.sh@108 -- # : 0 00:07:16.348 22:52:48 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:07:16.348 22:52:48 -- common/autotest_common.sh@110 -- # : 0 00:07:16.348 22:52:48 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:07:16.348 22:52:48 -- common/autotest_common.sh@112 -- # : 0 00:07:16.348 22:52:48 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:16.348 22:52:48 -- common/autotest_common.sh@114 -- # : 0 
00:07:16.348 22:52:48 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:07:16.348 22:52:48 -- common/autotest_common.sh@116 -- # : 1 00:07:16.348 22:52:48 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:07:16.348 22:52:48 -- common/autotest_common.sh@118 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:16.348 22:52:48 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:16.348 22:52:48 -- common/autotest_common.sh@120 -- # : 0 00:07:16.348 22:52:48 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:07:16.348 22:52:48 -- common/autotest_common.sh@122 -- # : 0 00:07:16.348 22:52:48 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:07:16.348 22:52:48 -- common/autotest_common.sh@124 -- # : 0 00:07:16.348 22:52:48 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:07:16.348 22:52:48 -- common/autotest_common.sh@126 -- # : 0 00:07:16.348 22:52:48 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:07:16.348 22:52:48 -- common/autotest_common.sh@128 -- # : 0 00:07:16.348 22:52:48 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:07:16.348 22:52:48 -- common/autotest_common.sh@130 -- # : 0 00:07:16.348 22:52:48 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:07:16.349 22:52:48 -- common/autotest_common.sh@132 -- # : v23.11 00:07:16.349 22:52:48 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:07:16.349 22:52:48 -- common/autotest_common.sh@134 -- # : true 00:07:16.349 22:52:48 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:07:16.349 22:52:48 -- common/autotest_common.sh@136 -- # : 0 00:07:16.349 22:52:48 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:07:16.349 22:52:48 -- common/autotest_common.sh@138 -- # : 0 00:07:16.349 22:52:48 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:07:16.349 22:52:48 -- common/autotest_common.sh@140 -- # : 0 00:07:16.349 
22:52:48 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:07:16.349 22:52:48 -- common/autotest_common.sh@142 -- # : 0 00:07:16.349 22:52:48 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:07:16.349 22:52:48 -- common/autotest_common.sh@144 -- # : 0 00:07:16.349 22:52:48 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:07:16.349 22:52:48 -- common/autotest_common.sh@146 -- # : 0 00:07:16.349 22:52:48 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:07:16.349 22:52:48 -- common/autotest_common.sh@148 -- # : e810 00:07:16.349 22:52:48 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:07:16.349 22:52:48 -- common/autotest_common.sh@150 -- # : 0 00:07:16.349 22:52:48 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:07:16.349 22:52:48 -- common/autotest_common.sh@152 -- # : 0 00:07:16.349 22:52:48 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:07:16.349 22:52:48 -- common/autotest_common.sh@154 -- # : 0 00:07:16.349 22:52:48 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:07:16.349 22:52:48 -- common/autotest_common.sh@156 -- # : 0 00:07:16.349 22:52:48 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:07:16.349 22:52:48 -- common/autotest_common.sh@158 -- # : 0 00:07:16.349 22:52:48 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:07:16.349 22:52:48 -- common/autotest_common.sh@160 -- # : 0 00:07:16.349 22:52:48 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:07:16.349 22:52:48 -- common/autotest_common.sh@163 -- # : 00:07:16.349 22:52:48 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:07:16.349 22:52:48 -- common/autotest_common.sh@165 -- # : 0 00:07:16.349 22:52:48 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:07:16.349 22:52:48 -- common/autotest_common.sh@167 -- # : 0 00:07:16.349 22:52:48 -- common/autotest_common.sh@168 -- # 
export SPDK_JSONRPC_GO_CLIENT 00:07:16.349 22:52:48 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:16.349 22:52:48 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:16.349 22:52:48 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:16.349 22:52:48 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:16.349 22:52:48 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:16.349 22:52:48 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:16.349 22:52:48 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 
00:07:16.349 22:52:48 -- common/autotest_common.sh@174 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:16.349 22:52:48 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:16.349 22:52:48 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:16.349 22:52:48 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:16.349 22:52:48 -- common/autotest_common.sh@181 -- # 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:16.349 22:52:48 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:16.349 22:52:48 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:07:16.349 22:52:48 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:16.349 22:52:48 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:16.349 22:52:48 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:16.349 22:52:48 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:16.349 22:52:48 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:16.349 22:52:48 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:07:16.349 22:52:48 -- common/autotest_common.sh@196 -- # cat 00:07:16.349 22:52:48 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:07:16.349 22:52:48 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:16.349 22:52:48 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:16.349 22:52:48 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:16.349 22:52:48 -- 
common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:16.349 22:52:48 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:07:16.349 22:52:48 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:07:16.349 22:52:48 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:16.349 22:52:48 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:16.349 22:52:48 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:16.349 22:52:48 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:16.349 22:52:48 -- common/autotest_common.sh@239 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:16.349 22:52:48 -- common/autotest_common.sh@239 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:16.349 22:52:48 -- common/autotest_common.sh@240 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:16.349 22:52:48 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:16.349 22:52:48 -- common/autotest_common.sh@242 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:16.349 22:52:48 -- common/autotest_common.sh@242 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:16.349 22:52:48 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:16.349 22:52:48 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:16.349 22:52:48 -- common/autotest_common.sh@248 -- # '[' 0 -eq 0 ']' 00:07:16.349 22:52:48 -- common/autotest_common.sh@249 -- # export valgrind= 00:07:16.349 22:52:48 -- 
common/autotest_common.sh@249 -- # valgrind= 00:07:16.349 22:52:48 -- common/autotest_common.sh@255 -- # uname -s 00:07:16.349 22:52:48 -- common/autotest_common.sh@255 -- # '[' Linux = Linux ']' 00:07:16.349 22:52:48 -- common/autotest_common.sh@256 -- # HUGEMEM=4096 00:07:16.349 22:52:48 -- common/autotest_common.sh@257 -- # export CLEAR_HUGE=yes 00:07:16.349 22:52:48 -- common/autotest_common.sh@257 -- # CLEAR_HUGE=yes 00:07:16.349 22:52:48 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:07:16.349 22:52:48 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:07:16.349 22:52:48 -- common/autotest_common.sh@265 -- # MAKE=make 00:07:16.349 22:52:48 -- common/autotest_common.sh@266 -- # MAKEFLAGS=-j112 00:07:16.349 22:52:48 -- common/autotest_common.sh@282 -- # export HUGEMEM=4096 00:07:16.349 22:52:48 -- common/autotest_common.sh@282 -- # HUGEMEM=4096 00:07:16.349 22:52:48 -- common/autotest_common.sh@284 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:07:16.349 22:52:48 -- common/autotest_common.sh@289 -- # NO_HUGE=() 00:07:16.349 22:52:48 -- common/autotest_common.sh@290 -- # TEST_MODE= 00:07:16.349 22:52:48 -- common/autotest_common.sh@291 -- # for i in "$@" 00:07:16.349 22:52:48 -- common/autotest_common.sh@292 -- # case "$i" in 00:07:16.349 22:52:48 -- common/autotest_common.sh@297 -- # TEST_TRANSPORT=tcp 00:07:16.349 22:52:48 -- common/autotest_common.sh@309 -- # [[ -z 3052106 ]] 00:07:16.349 22:52:48 -- common/autotest_common.sh@309 -- # kill -0 3052106 00:07:16.349 22:52:48 -- common/autotest_common.sh@1665 -- # set_test_storage 2147483648 00:07:16.349 22:52:48 -- common/autotest_common.sh@319 -- # [[ -v testdir ]] 00:07:16.349 22:52:48 -- common/autotest_common.sh@321 -- # local requested_size=2147483648 00:07:16.349 22:52:48 -- common/autotest_common.sh@322 -- # local mount target_dir 00:07:16.349 22:52:48 -- common/autotest_common.sh@324 -- # local -A mounts fss sizes avails uses 00:07:16.349 22:52:48 -- 
common/autotest_common.sh@325 -- # local source fs size avail mount use 00:07:16.349 22:52:48 -- common/autotest_common.sh@327 -- # local storage_fallback storage_candidates 00:07:16.349 22:52:48 -- common/autotest_common.sh@329 -- # mktemp -udt spdk.XXXXXX 00:07:16.350 22:52:48 -- common/autotest_common.sh@329 -- # storage_fallback=/tmp/spdk.AJc2ef 00:07:16.350 22:52:48 -- common/autotest_common.sh@334 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:16.350 22:52:48 -- common/autotest_common.sh@336 -- # [[ -n '' ]] 00:07:16.350 22:52:48 -- common/autotest_common.sh@341 -- # [[ -n '' ]] 00:07:16.350 22:52:48 -- common/autotest_common.sh@346 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.AJc2ef/tests/target /tmp/spdk.AJc2ef 00:07:16.350 22:52:48 -- common/autotest_common.sh@349 -- # requested_size=2214592512 00:07:16.350 22:52:48 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:16.350 22:52:48 -- common/autotest_common.sh@318 -- # df -T 00:07:16.350 22:52:48 -- common/autotest_common.sh@318 -- # grep -v Filesystem 00:07:16.350 22:52:48 -- common/autotest_common.sh@352 -- # mounts["$mount"]=spdk_devtmpfs 00:07:16.350 22:52:48 -- common/autotest_common.sh@352 -- # fss["$mount"]=devtmpfs 00:07:16.350 22:52:48 -- common/autotest_common.sh@353 -- # avails["$mount"]=67108864 00:07:16.350 22:52:48 -- common/autotest_common.sh@353 -- # sizes["$mount"]=67108864 00:07:16.350 22:52:48 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:07:16.350 22:52:48 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:16.350 22:52:48 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/pmem0 00:07:16.350 22:52:48 -- common/autotest_common.sh@352 -- # fss["$mount"]=ext2 00:07:16.350 22:52:48 -- common/autotest_common.sh@353 -- # avails["$mount"]=955215872 00:07:16.350 22:52:48 -- common/autotest_common.sh@353 -- 
# sizes["$mount"]=5284429824 00:07:16.350 22:52:48 -- common/autotest_common.sh@354 -- # uses["$mount"]=4329213952 00:07:16.350 22:52:48 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:16.350 22:52:48 -- common/autotest_common.sh@352 -- # mounts["$mount"]=spdk_root 00:07:16.350 22:52:48 -- common/autotest_common.sh@352 -- # fss["$mount"]=overlay 00:07:16.350 22:52:48 -- common/autotest_common.sh@353 -- # avails["$mount"]=53803048960 00:07:16.350 22:52:48 -- common/autotest_common.sh@353 -- # sizes["$mount"]=61742276608 00:07:16.350 22:52:48 -- common/autotest_common.sh@354 -- # uses["$mount"]=7939227648 00:07:16.350 22:52:48 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:16.350 22:52:48 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:07:16.350 22:52:48 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:07:16.350 22:52:48 -- common/autotest_common.sh@353 -- # avails["$mount"]=30869880832 00:07:16.350 22:52:48 -- common/autotest_common.sh@353 -- # sizes["$mount"]=30871138304 00:07:16.350 22:52:48 -- common/autotest_common.sh@354 -- # uses["$mount"]=1257472 00:07:16.350 22:52:48 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:16.350 22:52:48 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:07:16.350 22:52:48 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:07:16.350 22:52:48 -- common/autotest_common.sh@353 -- # avails["$mount"]=12339073024 00:07:16.350 22:52:48 -- common/autotest_common.sh@353 -- # sizes["$mount"]=12348456960 00:07:16.350 22:52:48 -- common/autotest_common.sh@354 -- # uses["$mount"]=9383936 00:07:16.350 22:52:48 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:16.350 22:52:48 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:07:16.350 22:52:48 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:07:16.350 22:52:48 -- 
common/autotest_common.sh@353 -- # avails["$mount"]=30870216704 00:07:16.350 22:52:48 -- common/autotest_common.sh@353 -- # sizes["$mount"]=30871138304 00:07:16.350 22:52:48 -- common/autotest_common.sh@354 -- # uses["$mount"]=921600 00:07:16.350 22:52:48 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:16.350 22:52:48 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:07:16.350 22:52:48 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:07:16.350 22:52:48 -- common/autotest_common.sh@353 -- # avails["$mount"]=6174220288 00:07:16.350 22:52:48 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6174224384 00:07:16.350 22:52:48 -- common/autotest_common.sh@354 -- # uses["$mount"]=4096 00:07:16.350 22:52:48 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:16.350 22:52:48 -- common/autotest_common.sh@357 -- # printf '* Looking for test storage...\n' 00:07:16.350 * Looking for test storage... 00:07:16.350 22:52:48 -- common/autotest_common.sh@359 -- # local target_space new_size 00:07:16.350 22:52:48 -- common/autotest_common.sh@360 -- # for target_dir in "${storage_candidates[@]}" 00:07:16.350 22:52:48 -- common/autotest_common.sh@363 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:16.350 22:52:48 -- common/autotest_common.sh@363 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:16.350 22:52:48 -- common/autotest_common.sh@363 -- # mount=/ 00:07:16.350 22:52:48 -- common/autotest_common.sh@365 -- # target_space=53803048960 00:07:16.350 22:52:48 -- common/autotest_common.sh@366 -- # (( target_space == 0 || target_space < requested_size )) 00:07:16.350 22:52:48 -- common/autotest_common.sh@369 -- # (( target_space >= requested_size )) 00:07:16.350 22:52:48 -- common/autotest_common.sh@371 -- # [[ overlay == tmpfs ]] 00:07:16.350 22:52:48 -- common/autotest_common.sh@371 -- # [[ overlay == ramfs ]] 00:07:16.350 22:52:48 -- 
common/autotest_common.sh@371 -- # [[ / == / ]] 00:07:16.350 22:52:48 -- common/autotest_common.sh@372 -- # new_size=10153820160 00:07:16.350 22:52:48 -- common/autotest_common.sh@373 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:16.350 22:52:48 -- common/autotest_common.sh@378 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:16.350 22:52:48 -- common/autotest_common.sh@378 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:16.350 22:52:48 -- common/autotest_common.sh@379 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:16.350 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:16.350 22:52:48 -- common/autotest_common.sh@380 -- # return 0 00:07:16.350 22:52:48 -- common/autotest_common.sh@1667 -- # set -o errtrace 00:07:16.350 22:52:48 -- common/autotest_common.sh@1668 -- # shopt -s extdebug 00:07:16.350 22:52:48 -- common/autotest_common.sh@1669 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:16.350 22:52:48 -- common/autotest_common.sh@1671 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:16.350 22:52:48 -- common/autotest_common.sh@1672 -- # true 00:07:16.350 22:52:48 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:07:16.350 22:52:48 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:16.350 22:52:48 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:16.350 22:52:48 -- common/autotest_common.sh@27 -- # exec 00:07:16.350 22:52:48 -- common/autotest_common.sh@29 -- # exec 00:07:16.350 22:52:48 -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:16.350 22:52:48 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:07:16.350 22:52:48 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:16.350 22:52:48 -- common/autotest_common.sh@18 -- # set -x 00:07:16.350 22:52:48 -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:16.350 22:52:48 -- nvmf/common.sh@7 -- # uname -s 00:07:16.350 22:52:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:16.350 22:52:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:16.350 22:52:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:16.350 22:52:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:16.350 22:52:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:16.350 22:52:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:16.350 22:52:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:16.350 22:52:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:16.350 22:52:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:16.350 22:52:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:16.609 22:52:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:07:16.609 22:52:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:07:16.609 22:52:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:16.609 22:52:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:16.609 22:52:48 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:16.609 22:52:48 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:16.609 22:52:48 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:16.609 22:52:48 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:16.609 22:52:48 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:16.609 22:52:48 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:16.609 22:52:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:16.609 22:52:48 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:16.609 22:52:48 -- paths/export.sh@5 -- # export PATH 00:07:16.609 22:52:48 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:16.609 22:52:48 -- nvmf/common.sh@46 -- # : 0 00:07:16.609 22:52:48 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:16.609 22:52:48 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:16.609 22:52:48 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:16.609 22:52:48 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:16.609 22:52:48 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:16.609 22:52:48 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:16.609 22:52:48 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:16.609 22:52:48 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:16.609 22:52:48 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:16.609 22:52:48 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:16.609 22:52:48 -- target/filesystem.sh@15 -- # nvmftestinit 00:07:16.609 22:52:48 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:16.609 22:52:48 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:16.609 22:52:48 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:16.609 22:52:48 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:16.609 22:52:48 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:16.609 22:52:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:16.609 22:52:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:16.609 22:52:48 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:16.609 22:52:48 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:07:16.609 22:52:48 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:07:16.609 22:52:48 -- nvmf/common.sh@284 -- # xtrace_disable 00:07:16.609 22:52:48 -- common/autotest_common.sh@10 -- # set +x 00:07:23.192 22:52:54 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:23.192 22:52:54 -- nvmf/common.sh@290 -- # pci_devs=() 00:07:23.192 22:52:54 -- nvmf/common.sh@290 -- # local -a pci_devs 00:07:23.192 22:52:54 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:07:23.192 22:52:54 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:07:23.192 22:52:54 -- nvmf/common.sh@292 -- # pci_drivers=() 00:07:23.192 22:52:54 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:07:23.192 22:52:54 -- nvmf/common.sh@294 -- # net_devs=() 00:07:23.192 22:52:54 -- nvmf/common.sh@294 -- # local -ga net_devs 00:07:23.192 22:52:54 -- nvmf/common.sh@295 -- # e810=() 00:07:23.192 22:52:54 -- nvmf/common.sh@295 -- # local -ga e810 00:07:23.192 22:52:54 -- nvmf/common.sh@296 -- # x722=() 00:07:23.192 22:52:54 -- nvmf/common.sh@296 -- # local -ga x722 00:07:23.192 22:52:54 -- nvmf/common.sh@297 -- # mlx=() 00:07:23.192 22:52:54 -- nvmf/common.sh@297 -- # local -ga mlx 00:07:23.192 22:52:54 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:23.192 22:52:54 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:23.192 22:52:54 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:23.192 22:52:54 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:23.192 22:52:54 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:23.192 22:52:54 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:23.192 22:52:54 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:23.192 22:52:54 -- nvmf/common.sh@313 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:23.192 22:52:54 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:23.192 22:52:54 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:23.192 22:52:54 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:23.192 22:52:54 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:07:23.192 22:52:54 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:07:23.192 22:52:54 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:07:23.192 22:52:54 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:07:23.192 22:52:54 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:07:23.192 22:52:54 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:07:23.192 22:52:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:23.193 22:52:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:23.193 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:23.193 22:52:54 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:23.193 22:52:54 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:23.193 22:52:54 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:23.193 22:52:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:23.193 22:52:54 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:23.193 22:52:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:23.193 22:52:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:23.193 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:23.193 22:52:54 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:23.193 22:52:54 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:23.193 22:52:54 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:23.193 22:52:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:23.193 22:52:54 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:23.193 22:52:54 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:07:23.193 22:52:54 -- 
nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:07:23.193 22:52:54 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:07:23.193 22:52:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:23.193 22:52:54 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:23.193 22:52:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:23.193 22:52:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:23.193 22:52:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:23.193 Found net devices under 0000:af:00.0: cvl_0_0 00:07:23.193 22:52:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:23.193 22:52:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:23.193 22:52:54 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:23.193 22:52:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:23.193 22:52:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:23.193 22:52:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:23.193 Found net devices under 0000:af:00.1: cvl_0_1 00:07:23.193 22:52:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:23.193 22:52:54 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:07:23.193 22:52:54 -- nvmf/common.sh@402 -- # is_hw=yes 00:07:23.193 22:52:54 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:07:23.193 22:52:54 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:07:23.193 22:52:54 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:07:23.193 22:52:54 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:23.193 22:52:54 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:23.193 22:52:54 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:23.193 22:52:54 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:07:23.193 22:52:54 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:23.193 22:52:54 -- nvmf/common.sh@236 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:23.193 22:52:54 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:07:23.193 22:52:54 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:23.193 22:52:54 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:23.193 22:52:54 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:07:23.193 22:52:54 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:07:23.193 22:52:54 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:07:23.193 22:52:54 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:23.193 22:52:54 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:23.193 22:52:54 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:23.193 22:52:54 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:07:23.193 22:52:54 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:23.193 22:52:54 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:23.193 22:52:54 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:23.193 22:52:54 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:07:23.193 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:23.193 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:07:23.193 00:07:23.193 --- 10.0.0.2 ping statistics --- 00:07:23.193 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:23.193 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:07:23.193 22:52:54 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:23.193 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:23.193 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.156 ms 00:07:23.193 00:07:23.193 --- 10.0.0.1 ping statistics --- 00:07:23.193 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:23.193 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:07:23.193 22:52:54 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:23.193 22:52:54 -- nvmf/common.sh@410 -- # return 0 00:07:23.193 22:52:54 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:23.193 22:52:54 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:23.193 22:52:54 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:23.193 22:52:54 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:23.193 22:52:54 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:23.193 22:52:54 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:23.193 22:52:54 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:23.193 22:52:54 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:23.193 22:52:54 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:23.193 22:52:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:23.193 22:52:54 -- common/autotest_common.sh@10 -- # set +x 00:07:23.193 ************************************ 00:07:23.193 START TEST nvmf_filesystem_no_in_capsule 00:07:23.193 ************************************ 00:07:23.193 22:52:54 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_part 0 00:07:23.193 22:52:54 -- target/filesystem.sh@47 -- # in_capsule=0 00:07:23.193 22:52:54 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:23.193 22:52:54 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:23.193 22:52:54 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:23.193 22:52:54 -- common/autotest_common.sh@10 -- # set +x 00:07:23.193 22:52:54 -- nvmf/common.sh@469 -- # nvmfpid=3055255 00:07:23.193 22:52:54 -- nvmf/common.sh@470 -- # waitforlisten 3055255 
00:07:23.193 22:52:54 -- common/autotest_common.sh@819 -- # '[' -z 3055255 ']' 00:07:23.193 22:52:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:23.193 22:52:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:23.193 22:52:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:23.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:23.193 22:52:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:23.193 22:52:54 -- common/autotest_common.sh@10 -- # set +x 00:07:23.193 22:52:54 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:23.193 [2024-07-24 22:52:55.034207] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:23.193 [2024-07-24 22:52:55.034256] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:23.193 EAL: No free 2048 kB hugepages reported on node 1 00:07:23.193 [2024-07-24 22:52:55.109580] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:23.193 [2024-07-24 22:52:55.150215] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:23.193 [2024-07-24 22:52:55.150328] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:23.193 [2024-07-24 22:52:55.150338] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:23.193 [2024-07-24 22:52:55.150348] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
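The nvmf_tgt startup above is gated by waitforlisten, which retries until the target is up and listening on the /var/tmp/spdk.sock RPC socket. A minimal sketch of that retry pattern, with a hypothetical function name and retry count rather than SPDK's actual helper:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of a waitforlisten-style gate: retry a probe command
# until it succeeds or a retry budget is exhausted. In the real harness the
# probe would be an RPC against /var/tmp/spdk.sock; here it is any command.
wait_for() {
  local max_retries=5 i=0
  while (( i < max_retries )); do
    if "$@"; then
      return 0          # probe succeeded: target is considered ready
    fi
    (( i++ ))
    sleep 0.1           # back off briefly before the next probe
  done
  return 1              # budget exhausted: target never came up
}

wait_for true && echo "ready"   # prints: ready
```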
00:07:23.193 [2024-07-24 22:52:55.150396] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:23.193 [2024-07-24 22:52:55.150415] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:23.193 [2024-07-24 22:52:55.150505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:23.193 [2024-07-24 22:52:55.150506] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.491 22:52:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:23.491 22:52:55 -- common/autotest_common.sh@852 -- # return 0 00:07:23.491 22:52:55 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:23.491 22:52:55 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:23.491 22:52:55 -- common/autotest_common.sh@10 -- # set +x 00:07:23.491 22:52:55 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:23.491 22:52:55 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:23.491 22:52:55 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:23.491 22:52:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:23.491 22:52:55 -- common/autotest_common.sh@10 -- # set +x 00:07:23.491 [2024-07-24 22:52:55.879026] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:23.491 22:52:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:23.491 22:52:55 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:23.491 22:52:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:23.491 22:52:55 -- common/autotest_common.sh@10 -- # set +x 00:07:23.750 Malloc1 00:07:23.750 22:52:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:23.750 22:52:56 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:23.750 22:52:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:23.750 22:52:56 -- 
common/autotest_common.sh@10 -- # set +x 00:07:23.750 22:52:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:23.750 22:52:56 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:23.750 22:52:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:23.750 22:52:56 -- common/autotest_common.sh@10 -- # set +x 00:07:23.750 22:52:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:23.750 22:52:56 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:23.750 22:52:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:23.750 22:52:56 -- common/autotest_common.sh@10 -- # set +x 00:07:23.750 [2024-07-24 22:52:56.031908] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:23.750 22:52:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:23.750 22:52:56 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:23.750 22:52:56 -- common/autotest_common.sh@1357 -- # local bdev_name=Malloc1 00:07:23.750 22:52:56 -- common/autotest_common.sh@1358 -- # local bdev_info 00:07:23.750 22:52:56 -- common/autotest_common.sh@1359 -- # local bs 00:07:23.750 22:52:56 -- common/autotest_common.sh@1360 -- # local nb 00:07:23.750 22:52:56 -- common/autotest_common.sh@1361 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:23.750 22:52:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:23.750 22:52:56 -- common/autotest_common.sh@10 -- # set +x 00:07:23.750 22:52:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:23.750 22:52:56 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:07:23.750 { 00:07:23.750 "name": "Malloc1", 00:07:23.750 "aliases": [ 00:07:23.750 "b43167c5-e2c8-4360-aa51-dc9697888307" 00:07:23.750 ], 00:07:23.750 "product_name": "Malloc disk", 00:07:23.750 "block_size": 512, 00:07:23.750 "num_blocks": 1048576, 00:07:23.750 "uuid": 
"b43167c5-e2c8-4360-aa51-dc9697888307", 00:07:23.750 "assigned_rate_limits": { 00:07:23.750 "rw_ios_per_sec": 0, 00:07:23.750 "rw_mbytes_per_sec": 0, 00:07:23.750 "r_mbytes_per_sec": 0, 00:07:23.750 "w_mbytes_per_sec": 0 00:07:23.750 }, 00:07:23.750 "claimed": true, 00:07:23.750 "claim_type": "exclusive_write", 00:07:23.750 "zoned": false, 00:07:23.750 "supported_io_types": { 00:07:23.750 "read": true, 00:07:23.750 "write": true, 00:07:23.750 "unmap": true, 00:07:23.750 "write_zeroes": true, 00:07:23.750 "flush": true, 00:07:23.750 "reset": true, 00:07:23.750 "compare": false, 00:07:23.750 "compare_and_write": false, 00:07:23.750 "abort": true, 00:07:23.750 "nvme_admin": false, 00:07:23.750 "nvme_io": false 00:07:23.750 }, 00:07:23.750 "memory_domains": [ 00:07:23.750 { 00:07:23.750 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:23.750 "dma_device_type": 2 00:07:23.750 } 00:07:23.750 ], 00:07:23.750 "driver_specific": {} 00:07:23.750 } 00:07:23.750 ]' 00:07:23.750 22:52:56 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:07:23.750 22:52:56 -- common/autotest_common.sh@1362 -- # bs=512 00:07:23.750 22:52:56 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:07:23.750 22:52:56 -- common/autotest_common.sh@1363 -- # nb=1048576 00:07:23.750 22:52:56 -- common/autotest_common.sh@1366 -- # bdev_size=512 00:07:23.750 22:52:56 -- common/autotest_common.sh@1367 -- # echo 512 00:07:23.750 22:52:56 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:23.750 22:52:56 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:25.130 22:52:57 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:25.130 22:52:57 -- common/autotest_common.sh@1177 -- # local i=0 00:07:25.130 22:52:57 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 
nvme_devices=0 00:07:25.130 22:52:57 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:07:25.130 22:52:57 -- common/autotest_common.sh@1184 -- # sleep 2 00:07:27.037 22:52:59 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:07:27.037 22:52:59 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:07:27.037 22:52:59 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:07:27.037 22:52:59 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:07:27.296 22:52:59 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:07:27.296 22:52:59 -- common/autotest_common.sh@1187 -- # return 0 00:07:27.296 22:52:59 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:27.296 22:52:59 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:27.296 22:52:59 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:27.296 22:52:59 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:27.296 22:52:59 -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:27.296 22:52:59 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:27.296 22:52:59 -- setup/common.sh@80 -- # echo 536870912 00:07:27.296 22:52:59 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:27.296 22:52:59 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:27.296 22:52:59 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:27.296 22:52:59 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:27.555 22:52:59 -- target/filesystem.sh@69 -- # partprobe 00:07:28.124 22:53:00 -- target/filesystem.sh@70 -- # sleep 1 00:07:29.503 22:53:01 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:29.503 22:53:01 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:29.503 22:53:01 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:29.503 22:53:01 -- common/autotest_common.sh@1083 -- # 
xtrace_disable 00:07:29.503 22:53:01 -- common/autotest_common.sh@10 -- # set +x 00:07:29.503 ************************************ 00:07:29.503 START TEST filesystem_ext4 00:07:29.503 ************************************ 00:07:29.503 22:53:01 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:29.503 22:53:01 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:29.503 22:53:01 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:29.503 22:53:01 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:29.503 22:53:01 -- common/autotest_common.sh@902 -- # local fstype=ext4 00:07:29.503 22:53:01 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:07:29.503 22:53:01 -- common/autotest_common.sh@904 -- # local i=0 00:07:29.503 22:53:01 -- common/autotest_common.sh@905 -- # local force 00:07:29.503 22:53:01 -- common/autotest_common.sh@907 -- # '[' ext4 = ext4 ']' 00:07:29.503 22:53:01 -- common/autotest_common.sh@908 -- # force=-F 00:07:29.503 22:53:01 -- common/autotest_common.sh@913 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:29.503 mke2fs 1.46.5 (30-Dec-2021) 00:07:29.503 Discarding device blocks: 0/522240 done 00:07:29.503 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:29.503 Filesystem UUID: 37b39acb-cba2-4b1a-a073-1e6ffaa0a9b9 00:07:29.503 Superblock backups stored on blocks: 00:07:29.503 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:29.503 00:07:29.503 Allocating group tables: 0/64 done 00:07:29.503 Writing inode tables: 0/64 done 00:07:29.503 Creating journal (8192 blocks): done 00:07:30.329 Writing superblocks and filesystem accounting information: 0/64 2/64 done 00:07:30.330 00:07:30.330 22:53:02 -- common/autotest_common.sh@921 -- # return 0 00:07:30.330 22:53:02 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:30.897 22:53:03 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:30.897 22:53:03 -- target/filesystem.sh@25 -- # sync 
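The mount/touch/sync/rm sequence that follows filesystem creation in this test is a writability smoke test against the fresh filesystem. A self-contained sketch of the same check, run against a temporary directory instead of the real /dev/nvme0n1p1 mount at /mnt/device (no device or root needed):

```shell
#!/usr/bin/env bash
# Sketch of the writability smoke test: create a file, flush, remove it,
# and confirm it is gone. Stands in for the test's touch/sync/rm cycle
# on /mnt/device; the directory here is a throwaway tmpdir, not the mount.
dir=$(mktemp -d)
touch "$dir/aaa"        # create a file on the "filesystem"
sync                    # flush dirty data, as the test does before rm
rm "$dir/aaa"           # remove it again
[ ! -e "$dir/aaa" ] && echo "writable"   # prints: writable
rmdir "$dir"
```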
00:07:30.897 22:53:03 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:30.897 22:53:03 -- target/filesystem.sh@27 -- # sync 00:07:30.897 22:53:03 -- target/filesystem.sh@29 -- # i=0 00:07:30.897 22:53:03 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:30.897 22:53:03 -- target/filesystem.sh@37 -- # kill -0 3055255 00:07:30.897 22:53:03 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:30.897 22:53:03 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:30.897 22:53:03 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:30.897 22:53:03 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:30.897 00:07:30.897 real 0m1.717s 00:07:30.897 user 0m0.025s 00:07:30.897 sys 0m0.080s 00:07:30.897 22:53:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:30.897 22:53:03 -- common/autotest_common.sh@10 -- # set +x 00:07:30.897 ************************************ 00:07:30.897 END TEST filesystem_ext4 00:07:30.897 ************************************ 00:07:30.897 22:53:03 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:30.897 22:53:03 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:30.897 22:53:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:30.897 22:53:03 -- common/autotest_common.sh@10 -- # set +x 00:07:30.897 ************************************ 00:07:30.897 START TEST filesystem_btrfs 00:07:30.897 ************************************ 00:07:30.897 22:53:03 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:30.897 22:53:03 -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:30.897 22:53:03 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:30.897 22:53:03 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:30.897 22:53:03 -- common/autotest_common.sh@902 -- # local fstype=btrfs 00:07:30.897 22:53:03 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:07:30.897 22:53:03 -- 
common/autotest_common.sh@904 -- # local i=0 00:07:30.897 22:53:03 -- common/autotest_common.sh@905 -- # local force 00:07:30.897 22:53:03 -- common/autotest_common.sh@907 -- # '[' btrfs = ext4 ']' 00:07:30.897 22:53:03 -- common/autotest_common.sh@910 -- # force=-f 00:07:30.897 22:53:03 -- common/autotest_common.sh@913 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:31.155 btrfs-progs v6.6.2 00:07:31.155 See https://btrfs.readthedocs.io for more information. 00:07:31.155 00:07:31.155 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:07:31.155 NOTE: several default settings have changed in version 5.15, please make sure 00:07:31.155 this does not affect your deployments: 00:07:31.155 - DUP for metadata (-m dup) 00:07:31.155 - enabled no-holes (-O no-holes) 00:07:31.155 - enabled free-space-tree (-R free-space-tree) 00:07:31.155 00:07:31.155 Label: (null) 00:07:31.155 UUID: c5c310db-7aff-4344-830f-dba8b38d60a2 00:07:31.155 Node size: 16384 00:07:31.155 Sector size: 4096 00:07:31.155 Filesystem size: 510.00MiB 00:07:31.155 Block group profiles: 00:07:31.155 Data: single 8.00MiB 00:07:31.155 Metadata: DUP 32.00MiB 00:07:31.155 System: DUP 8.00MiB 00:07:31.155 SSD detected: yes 00:07:31.155 Zoned device: no 00:07:31.155 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:31.155 Runtime features: free-space-tree 00:07:31.155 Checksum: crc32c 00:07:31.155 Number of devices: 1 00:07:31.155 Devices: 00:07:31.155 ID SIZE PATH 00:07:31.155 1 510.00MiB /dev/nvme0n1p1 00:07:31.155 00:07:31.155 22:53:03 -- common/autotest_common.sh@921 -- # return 0 00:07:31.155 22:53:03 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:32.105 22:53:04 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:32.105 22:53:04 -- target/filesystem.sh@25 -- # sync 00:07:32.105 22:53:04 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:32.105 22:53:04 -- target/filesystem.sh@27 -- # sync 00:07:32.105 22:53:04 -- target/filesystem.sh@29 -- # 
i=0 00:07:32.105 22:53:04 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:32.105 22:53:04 -- target/filesystem.sh@37 -- # kill -0 3055255 00:07:32.105 22:53:04 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:32.105 22:53:04 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:32.105 22:53:04 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:32.105 22:53:04 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:32.105 00:07:32.105 real 0m1.245s 00:07:32.105 user 0m0.028s 00:07:32.105 sys 0m0.144s 00:07:32.105 22:53:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:32.105 22:53:04 -- common/autotest_common.sh@10 -- # set +x 00:07:32.105 ************************************ 00:07:32.105 END TEST filesystem_btrfs 00:07:32.105 ************************************ 00:07:32.364 22:53:04 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:32.364 22:53:04 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:32.364 22:53:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:32.364 22:53:04 -- common/autotest_common.sh@10 -- # set +x 00:07:32.364 ************************************ 00:07:32.364 START TEST filesystem_xfs 00:07:32.364 ************************************ 00:07:32.364 22:53:04 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create xfs nvme0n1 00:07:32.364 22:53:04 -- target/filesystem.sh@18 -- # fstype=xfs 00:07:32.364 22:53:04 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:32.364 22:53:04 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:32.364 22:53:04 -- common/autotest_common.sh@902 -- # local fstype=xfs 00:07:32.364 22:53:04 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:07:32.364 22:53:04 -- common/autotest_common.sh@904 -- # local i=0 00:07:32.364 22:53:04 -- common/autotest_common.sh@905 -- # local force 00:07:32.364 22:53:04 -- common/autotest_common.sh@907 -- # '[' xfs = ext4 ']' 
00:07:32.364 22:53:04 -- common/autotest_common.sh@910 -- # force=-f 00:07:32.364 22:53:04 -- common/autotest_common.sh@913 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:32.364 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:32.364 = sectsz=512 attr=2, projid32bit=1 00:07:32.364 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:32.365 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:32.365 data = bsize=4096 blocks=130560, imaxpct=25 00:07:32.365 = sunit=0 swidth=0 blks 00:07:32.365 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:32.365 log =internal log bsize=4096 blocks=16384, version=2 00:07:32.365 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:32.365 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:33.302 Discarding blocks...Done. 00:07:33.302 22:53:05 -- common/autotest_common.sh@921 -- # return 0 00:07:33.302 22:53:05 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:35.209 22:53:07 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:35.209 22:53:07 -- target/filesystem.sh@25 -- # sync 00:07:35.209 22:53:07 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:35.209 22:53:07 -- target/filesystem.sh@27 -- # sync 00:07:35.209 22:53:07 -- target/filesystem.sh@29 -- # i=0 00:07:35.209 22:53:07 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:35.209 22:53:07 -- target/filesystem.sh@37 -- # kill -0 3055255 00:07:35.209 22:53:07 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:35.209 22:53:07 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:35.209 22:53:07 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:35.209 22:53:07 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:35.209 00:07:35.209 real 0m2.689s 00:07:35.209 user 0m0.027s 00:07:35.209 sys 0m0.085s 00:07:35.209 22:53:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:35.209 22:53:07 -- common/autotest_common.sh@10 -- # set +x 00:07:35.209 ************************************ 00:07:35.209 END TEST filesystem_xfs 
00:07:35.209 ************************************ 00:07:35.209 22:53:07 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:35.209 22:53:07 -- target/filesystem.sh@93 -- # sync 00:07:35.209 22:53:07 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:35.209 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:35.209 22:53:07 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:35.209 22:53:07 -- common/autotest_common.sh@1198 -- # local i=0 00:07:35.209 22:53:07 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:07:35.209 22:53:07 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:35.209 22:53:07 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:07:35.209 22:53:07 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:35.209 22:53:07 -- common/autotest_common.sh@1210 -- # return 0 00:07:35.209 22:53:07 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:35.209 22:53:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:35.209 22:53:07 -- common/autotest_common.sh@10 -- # set +x 00:07:35.209 22:53:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:35.209 22:53:07 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:35.209 22:53:07 -- target/filesystem.sh@101 -- # killprocess 3055255 00:07:35.209 22:53:07 -- common/autotest_common.sh@926 -- # '[' -z 3055255 ']' 00:07:35.209 22:53:07 -- common/autotest_common.sh@930 -- # kill -0 3055255 00:07:35.209 22:53:07 -- common/autotest_common.sh@931 -- # uname 00:07:35.209 22:53:07 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:35.209 22:53:07 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3055255 00:07:35.209 22:53:07 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:35.209 22:53:07 -- common/autotest_common.sh@936 -- # 
'[' reactor_0 = sudo ']' 00:07:35.209 22:53:07 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3055255' 00:07:35.209 killing process with pid 3055255 00:07:35.209 22:53:07 -- common/autotest_common.sh@945 -- # kill 3055255 00:07:35.209 22:53:07 -- common/autotest_common.sh@950 -- # wait 3055255 00:07:35.778 22:53:07 -- target/filesystem.sh@102 -- # nvmfpid= 00:07:35.778 00:07:35.778 real 0m12.924s 00:07:35.778 user 0m50.411s 00:07:35.778 sys 0m1.826s 00:07:35.778 22:53:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:35.778 22:53:07 -- common/autotest_common.sh@10 -- # set +x 00:07:35.778 ************************************ 00:07:35.778 END TEST nvmf_filesystem_no_in_capsule 00:07:35.778 ************************************ 00:07:35.778 22:53:07 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:35.778 22:53:07 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:35.778 22:53:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:35.778 22:53:07 -- common/autotest_common.sh@10 -- # set +x 00:07:35.778 ************************************ 00:07:35.778 START TEST nvmf_filesystem_in_capsule 00:07:35.778 ************************************ 00:07:35.778 22:53:07 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_part 4096 00:07:35.778 22:53:07 -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:35.778 22:53:07 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:35.778 22:53:07 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:35.778 22:53:07 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:35.778 22:53:07 -- common/autotest_common.sh@10 -- # set +x 00:07:35.778 22:53:07 -- nvmf/common.sh@469 -- # nvmfpid=3057830 00:07:35.778 22:53:07 -- nvmf/common.sh@470 -- # waitforlisten 3057830 00:07:35.778 22:53:07 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 
0xFFFF -m 0xF 00:07:35.778 22:53:07 -- common/autotest_common.sh@819 -- # '[' -z 3057830 ']' 00:07:35.778 22:53:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.778 22:53:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:35.778 22:53:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:35.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:35.778 22:53:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:35.778 22:53:07 -- common/autotest_common.sh@10 -- # set +x 00:07:35.778 [2024-07-24 22:53:08.016489] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:35.778 [2024-07-24 22:53:08.016543] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:35.778 EAL: No free 2048 kB hugepages reported on node 1 00:07:35.778 [2024-07-24 22:53:08.088878] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:35.778 [2024-07-24 22:53:08.124709] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:35.778 [2024-07-24 22:53:08.124862] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:35.778 [2024-07-24 22:53:08.124872] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:35.778 [2024-07-24 22:53:08.124881] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:35.778 [2024-07-24 22:53:08.124928] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:35.778 [2024-07-24 22:53:08.125037] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:35.778 [2024-07-24 22:53:08.125102] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:35.778 [2024-07-24 22:53:08.125103] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.716 22:53:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:36.716 22:53:08 -- common/autotest_common.sh@852 -- # return 0 00:07:36.716 22:53:08 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:36.716 22:53:08 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:36.716 22:53:08 -- common/autotest_common.sh@10 -- # set +x 00:07:36.716 22:53:08 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:36.716 22:53:08 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:36.716 22:53:08 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:36.716 22:53:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:36.716 22:53:08 -- common/autotest_common.sh@10 -- # set +x 00:07:36.716 [2024-07-24 22:53:08.849088] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:36.716 22:53:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:36.716 22:53:08 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:36.716 22:53:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:36.716 22:53:08 -- common/autotest_common.sh@10 -- # set +x 00:07:36.716 Malloc1 00:07:36.716 22:53:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:36.716 22:53:08 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:36.716 22:53:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:36.716 22:53:08 -- 
common/autotest_common.sh@10 -- # set +x 00:07:36.716 22:53:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:36.716 22:53:08 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:36.716 22:53:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:36.716 22:53:08 -- common/autotest_common.sh@10 -- # set +x 00:07:36.716 22:53:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:36.716 22:53:08 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:36.716 22:53:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:36.716 22:53:08 -- common/autotest_common.sh@10 -- # set +x 00:07:36.716 [2024-07-24 22:53:08.997531] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:36.716 22:53:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:36.716 22:53:09 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:36.716 22:53:09 -- common/autotest_common.sh@1357 -- # local bdev_name=Malloc1 00:07:36.716 22:53:09 -- common/autotest_common.sh@1358 -- # local bdev_info 00:07:36.716 22:53:09 -- common/autotest_common.sh@1359 -- # local bs 00:07:36.716 22:53:09 -- common/autotest_common.sh@1360 -- # local nb 00:07:36.716 22:53:09 -- common/autotest_common.sh@1361 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:36.716 22:53:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:36.716 22:53:09 -- common/autotest_common.sh@10 -- # set +x 00:07:36.716 22:53:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:36.716 22:53:09 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:07:36.716 { 00:07:36.716 "name": "Malloc1", 00:07:36.716 "aliases": [ 00:07:36.716 "07f735be-6677-485b-bfb3-8013c111377d" 00:07:36.716 ], 00:07:36.716 "product_name": "Malloc disk", 00:07:36.716 "block_size": 512, 00:07:36.716 "num_blocks": 1048576, 00:07:36.716 "uuid": 
"07f735be-6677-485b-bfb3-8013c111377d", 00:07:36.716 "assigned_rate_limits": { 00:07:36.716 "rw_ios_per_sec": 0, 00:07:36.716 "rw_mbytes_per_sec": 0, 00:07:36.716 "r_mbytes_per_sec": 0, 00:07:36.716 "w_mbytes_per_sec": 0 00:07:36.716 }, 00:07:36.716 "claimed": true, 00:07:36.716 "claim_type": "exclusive_write", 00:07:36.716 "zoned": false, 00:07:36.716 "supported_io_types": { 00:07:36.716 "read": true, 00:07:36.716 "write": true, 00:07:36.716 "unmap": true, 00:07:36.716 "write_zeroes": true, 00:07:36.716 "flush": true, 00:07:36.716 "reset": true, 00:07:36.716 "compare": false, 00:07:36.716 "compare_and_write": false, 00:07:36.716 "abort": true, 00:07:36.716 "nvme_admin": false, 00:07:36.716 "nvme_io": false 00:07:36.716 }, 00:07:36.716 "memory_domains": [ 00:07:36.716 { 00:07:36.716 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:36.716 "dma_device_type": 2 00:07:36.716 } 00:07:36.716 ], 00:07:36.716 "driver_specific": {} 00:07:36.716 } 00:07:36.716 ]' 00:07:36.716 22:53:09 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:07:36.716 22:53:09 -- common/autotest_common.sh@1362 -- # bs=512 00:07:36.716 22:53:09 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:07:36.716 22:53:09 -- common/autotest_common.sh@1363 -- # nb=1048576 00:07:36.716 22:53:09 -- common/autotest_common.sh@1366 -- # bdev_size=512 00:07:36.716 22:53:09 -- common/autotest_common.sh@1367 -- # echo 512 00:07:36.716 22:53:09 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:36.716 22:53:09 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:38.096 22:53:10 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:38.096 22:53:10 -- common/autotest_common.sh@1177 -- # local i=0 00:07:38.096 22:53:10 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 
nvme_devices=0 00:07:38.096 22:53:10 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:07:38.096 22:53:10 -- common/autotest_common.sh@1184 -- # sleep 2 00:07:40.634 22:53:12 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:07:40.634 22:53:12 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:07:40.634 22:53:12 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:07:40.634 22:53:12 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:07:40.634 22:53:12 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:07:40.634 22:53:12 -- common/autotest_common.sh@1187 -- # return 0 00:07:40.634 22:53:12 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:40.634 22:53:12 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:40.634 22:53:12 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:40.634 22:53:12 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:40.634 22:53:12 -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:40.634 22:53:12 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:40.634 22:53:12 -- setup/common.sh@80 -- # echo 536870912 00:07:40.634 22:53:12 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:40.634 22:53:12 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:40.634 22:53:12 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:40.634 22:53:12 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:40.634 22:53:12 -- target/filesystem.sh@69 -- # partprobe 00:07:40.634 22:53:12 -- target/filesystem.sh@70 -- # sleep 1 00:07:41.617 22:53:13 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:41.617 22:53:13 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:41.617 22:53:13 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:41.617 22:53:13 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:07:41.617 22:53:13 -- common/autotest_common.sh@10 -- # set +x 00:07:41.617 ************************************ 00:07:41.617 START TEST filesystem_in_capsule_ext4 00:07:41.617 ************************************ 00:07:41.617 22:53:13 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:41.617 22:53:13 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:41.617 22:53:13 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:41.617 22:53:13 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:41.617 22:53:13 -- common/autotest_common.sh@902 -- # local fstype=ext4 00:07:41.617 22:53:13 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:07:41.617 22:53:13 -- common/autotest_common.sh@904 -- # local i=0 00:07:41.617 22:53:13 -- common/autotest_common.sh@905 -- # local force 00:07:41.617 22:53:13 -- common/autotest_common.sh@907 -- # '[' ext4 = ext4 ']' 00:07:41.617 22:53:13 -- common/autotest_common.sh@908 -- # force=-F 00:07:41.617 22:53:13 -- common/autotest_common.sh@913 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:41.617 mke2fs 1.46.5 (30-Dec-2021) 00:07:41.617 Discarding device blocks: 0/522240 done 00:07:41.617 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:41.617 Filesystem UUID: 79240474-f827-457d-921b-9ec384331b70 00:07:41.617 Superblock backups stored on blocks: 00:07:41.617 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:41.617 00:07:41.617 Allocating group tables: 0/64 done 00:07:41.617 Writing inode tables: 0/64 done 00:07:42.555 Creating journal (8192 blocks): done 00:07:42.555 Writing superblocks and filesystem accounting information: 0/64 done 00:07:42.555 00:07:42.555 22:53:14 -- common/autotest_common.sh@921 -- # return 0 00:07:42.555 22:53:14 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:42.555 22:53:14 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:42.555 22:53:14 
-- target/filesystem.sh@25 -- # sync 00:07:42.555 22:53:14 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:42.555 22:53:14 -- target/filesystem.sh@27 -- # sync 00:07:42.555 22:53:14 -- target/filesystem.sh@29 -- # i=0 00:07:42.555 22:53:14 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:42.555 22:53:14 -- target/filesystem.sh@37 -- # kill -0 3057830 00:07:42.555 22:53:14 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:42.555 22:53:14 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:42.555 22:53:14 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:42.556 22:53:14 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:42.815 00:07:42.815 real 0m1.109s 00:07:42.815 user 0m0.037s 00:07:42.815 sys 0m0.070s 00:07:42.815 22:53:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:42.815 22:53:14 -- common/autotest_common.sh@10 -- # set +x 00:07:42.815 ************************************ 00:07:42.815 END TEST filesystem_in_capsule_ext4 00:07:42.815 ************************************ 00:07:42.815 22:53:15 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:42.815 22:53:15 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:42.815 22:53:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:42.815 22:53:15 -- common/autotest_common.sh@10 -- # set +x 00:07:42.815 ************************************ 00:07:42.815 START TEST filesystem_in_capsule_btrfs 00:07:42.815 ************************************ 00:07:42.815 22:53:15 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:42.815 22:53:15 -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:42.815 22:53:15 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:42.815 22:53:15 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:42.815 22:53:15 -- common/autotest_common.sh@902 -- # local fstype=btrfs 00:07:42.815 22:53:15 -- 
common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:07:42.815 22:53:15 -- common/autotest_common.sh@904 -- # local i=0 00:07:42.815 22:53:15 -- common/autotest_common.sh@905 -- # local force 00:07:42.815 22:53:15 -- common/autotest_common.sh@907 -- # '[' btrfs = ext4 ']' 00:07:42.815 22:53:15 -- common/autotest_common.sh@910 -- # force=-f 00:07:42.815 22:53:15 -- common/autotest_common.sh@913 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:43.074 btrfs-progs v6.6.2 00:07:43.074 See https://btrfs.readthedocs.io for more information. 00:07:43.074 00:07:43.074 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:07:43.074 NOTE: several default settings have changed in version 5.15, please make sure 00:07:43.074 this does not affect your deployments: 00:07:43.074 - DUP for metadata (-m dup) 00:07:43.074 - enabled no-holes (-O no-holes) 00:07:43.074 - enabled free-space-tree (-R free-space-tree) 00:07:43.074 00:07:43.074 Label: (null) 00:07:43.074 UUID: d9e7bb33-dfc3-4ad4-be9a-03fe14fb73cb 00:07:43.074 Node size: 16384 00:07:43.074 Sector size: 4096 00:07:43.074 Filesystem size: 510.00MiB 00:07:43.074 Block group profiles: 00:07:43.074 Data: single 8.00MiB 00:07:43.074 Metadata: DUP 32.00MiB 00:07:43.074 System: DUP 8.00MiB 00:07:43.074 SSD detected: yes 00:07:43.074 Zoned device: no 00:07:43.074 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:43.074 Runtime features: free-space-tree 00:07:43.074 Checksum: crc32c 00:07:43.074 Number of devices: 1 00:07:43.074 Devices: 00:07:43.074 ID SIZE PATH 00:07:43.074 1 510.00MiB /dev/nvme0n1p1 00:07:43.074 00:07:43.074 22:53:15 -- common/autotest_common.sh@921 -- # return 0 00:07:43.074 22:53:15 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:43.333 22:53:15 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:43.593 22:53:15 -- target/filesystem.sh@25 -- # sync 00:07:43.593 22:53:15 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:43.593 22:53:15 
-- target/filesystem.sh@27 -- # sync 00:07:43.593 22:53:15 -- target/filesystem.sh@29 -- # i=0 00:07:43.593 22:53:15 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:43.593 22:53:15 -- target/filesystem.sh@37 -- # kill -0 3057830 00:07:43.593 22:53:15 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:43.593 22:53:15 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:43.593 22:53:15 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:43.593 22:53:15 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:43.593 00:07:43.593 real 0m0.794s 00:07:43.593 user 0m0.032s 00:07:43.593 sys 0m0.140s 00:07:43.593 22:53:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:43.593 22:53:15 -- common/autotest_common.sh@10 -- # set +x 00:07:43.593 ************************************ 00:07:43.593 END TEST filesystem_in_capsule_btrfs 00:07:43.593 ************************************ 00:07:43.593 22:53:15 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:07:43.593 22:53:15 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:43.593 22:53:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:43.593 22:53:15 -- common/autotest_common.sh@10 -- # set +x 00:07:43.593 ************************************ 00:07:43.593 START TEST filesystem_in_capsule_xfs 00:07:43.593 ************************************ 00:07:43.593 22:53:15 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create xfs nvme0n1 00:07:43.593 22:53:15 -- target/filesystem.sh@18 -- # fstype=xfs 00:07:43.593 22:53:15 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:43.593 22:53:15 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:43.593 22:53:15 -- common/autotest_common.sh@902 -- # local fstype=xfs 00:07:43.593 22:53:15 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:07:43.593 22:53:15 -- common/autotest_common.sh@904 -- # local i=0 00:07:43.593 22:53:15 -- 
common/autotest_common.sh@905 -- # local force 00:07:43.593 22:53:15 -- common/autotest_common.sh@907 -- # '[' xfs = ext4 ']' 00:07:43.593 22:53:15 -- common/autotest_common.sh@910 -- # force=-f 00:07:43.593 22:53:15 -- common/autotest_common.sh@913 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:43.593 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:43.593 = sectsz=512 attr=2, projid32bit=1 00:07:43.593 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:43.593 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:43.593 data = bsize=4096 blocks=130560, imaxpct=25 00:07:43.593 = sunit=0 swidth=0 blks 00:07:43.593 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:43.593 log =internal log bsize=4096 blocks=16384, version=2 00:07:43.593 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:43.593 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:44.531 Discarding blocks...Done. 00:07:44.531 22:53:16 -- common/autotest_common.sh@921 -- # return 0 00:07:44.531 22:53:16 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:46.438 22:53:18 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:46.438 22:53:18 -- target/filesystem.sh@25 -- # sync 00:07:46.438 22:53:18 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:46.438 22:53:18 -- target/filesystem.sh@27 -- # sync 00:07:46.438 22:53:18 -- target/filesystem.sh@29 -- # i=0 00:07:46.438 22:53:18 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:46.438 22:53:18 -- target/filesystem.sh@37 -- # kill -0 3057830 00:07:46.438 22:53:18 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:46.438 22:53:18 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:46.438 22:53:18 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:46.438 22:53:18 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:46.438 00:07:46.438 real 0m2.953s 00:07:46.438 user 0m0.027s 00:07:46.438 sys 0m0.084s 00:07:46.438 22:53:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:46.438 22:53:18 -- 
common/autotest_common.sh@10 -- # set +x 00:07:46.438 ************************************ 00:07:46.438 END TEST filesystem_in_capsule_xfs 00:07:46.438 ************************************ 00:07:46.697 22:53:18 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:46.957 22:53:19 -- target/filesystem.sh@93 -- # sync 00:07:46.957 22:53:19 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:46.957 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:46.957 22:53:19 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:46.957 22:53:19 -- common/autotest_common.sh@1198 -- # local i=0 00:07:46.957 22:53:19 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:07:46.957 22:53:19 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:46.957 22:53:19 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:07:46.957 22:53:19 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:47.216 22:53:19 -- common/autotest_common.sh@1210 -- # return 0 00:07:47.216 22:53:19 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:47.216 22:53:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:47.216 22:53:19 -- common/autotest_common.sh@10 -- # set +x 00:07:47.216 22:53:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:47.216 22:53:19 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:47.216 22:53:19 -- target/filesystem.sh@101 -- # killprocess 3057830 00:07:47.216 22:53:19 -- common/autotest_common.sh@926 -- # '[' -z 3057830 ']' 00:07:47.216 22:53:19 -- common/autotest_common.sh@930 -- # kill -0 3057830 00:07:47.216 22:53:19 -- common/autotest_common.sh@931 -- # uname 00:07:47.216 22:53:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:47.216 22:53:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3057830 
00:07:47.216 22:53:19 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:47.216 22:53:19 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:47.216 22:53:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3057830' 00:07:47.216 killing process with pid 3057830 00:07:47.216 22:53:19 -- common/autotest_common.sh@945 -- # kill 3057830 00:07:47.216 22:53:19 -- common/autotest_common.sh@950 -- # wait 3057830 00:07:47.475 22:53:19 -- target/filesystem.sh@102 -- # nvmfpid= 00:07:47.475 00:07:47.475 real 0m11.836s 00:07:47.475 user 0m46.233s 00:07:47.475 sys 0m1.651s 00:07:47.475 22:53:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:47.475 22:53:19 -- common/autotest_common.sh@10 -- # set +x 00:07:47.475 ************************************ 00:07:47.475 END TEST nvmf_filesystem_in_capsule 00:07:47.475 ************************************ 00:07:47.475 22:53:19 -- target/filesystem.sh@108 -- # nvmftestfini 00:07:47.475 22:53:19 -- nvmf/common.sh@476 -- # nvmfcleanup 00:07:47.475 22:53:19 -- nvmf/common.sh@116 -- # sync 00:07:47.475 22:53:19 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:07:47.475 22:53:19 -- nvmf/common.sh@119 -- # set +e 00:07:47.475 22:53:19 -- nvmf/common.sh@120 -- # for i in {1..20} 00:07:47.475 22:53:19 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:07:47.475 rmmod nvme_tcp 00:07:47.475 rmmod nvme_fabrics 00:07:47.475 rmmod nvme_keyring 00:07:47.475 22:53:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:07:47.475 22:53:19 -- nvmf/common.sh@123 -- # set -e 00:07:47.475 22:53:19 -- nvmf/common.sh@124 -- # return 0 00:07:47.475 22:53:19 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:07:47.475 22:53:19 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:07:47.475 22:53:19 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:07:47.475 22:53:19 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:07:47.475 22:53:19 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
00:07:47.475 22:53:19 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:07:47.475 22:53:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:47.475 22:53:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:47.475 22:53:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:50.010 22:53:21 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:07:50.010 00:07:50.010 real 0m33.424s 00:07:50.010 user 1m38.311s 00:07:50.010 sys 0m8.309s 00:07:50.010 22:53:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:50.010 22:53:21 -- common/autotest_common.sh@10 -- # set +x 00:07:50.010 ************************************ 00:07:50.010 END TEST nvmf_filesystem 00:07:50.010 ************************************ 00:07:50.010 22:53:22 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:50.010 22:53:22 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:50.010 22:53:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:50.010 22:53:22 -- common/autotest_common.sh@10 -- # set +x 00:07:50.010 ************************************ 00:07:50.010 START TEST nvmf_discovery 00:07:50.010 ************************************ 00:07:50.010 22:53:22 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:50.010 * Looking for test storage... 
00:07:50.010 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:50.010 22:53:22 -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:50.010 22:53:22 -- nvmf/common.sh@7 -- # uname -s 00:07:50.010 22:53:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:50.010 22:53:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:50.010 22:53:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:50.010 22:53:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:50.010 22:53:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:50.010 22:53:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:50.010 22:53:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:50.010 22:53:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:50.010 22:53:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:50.010 22:53:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:50.010 22:53:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:07:50.010 22:53:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:07:50.010 22:53:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:50.010 22:53:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:50.010 22:53:22 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:50.010 22:53:22 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:50.010 22:53:22 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:50.010 22:53:22 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:50.010 22:53:22 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:50.010 22:53:22 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.010 22:53:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.010 22:53:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.010 22:53:22 -- paths/export.sh@5 -- # export PATH 00:07:50.010 22:53:22 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.010 22:53:22 -- nvmf/common.sh@46 -- # : 0 00:07:50.010 22:53:22 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:50.010 22:53:22 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:50.010 22:53:22 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:50.010 22:53:22 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:50.010 22:53:22 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:50.010 22:53:22 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:50.010 22:53:22 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:50.010 22:53:22 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:50.010 22:53:22 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:07:50.010 22:53:22 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:07:50.010 22:53:22 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:07:50.010 22:53:22 -- target/discovery.sh@15 -- # hash nvme 00:07:50.010 22:53:22 -- target/discovery.sh@20 -- # nvmftestinit 00:07:50.010 22:53:22 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:50.010 22:53:22 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:50.010 22:53:22 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:50.010 22:53:22 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:50.010 22:53:22 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:50.010 22:53:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:50.010 22:53:22 -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 14> /dev/null' 00:07:50.010 22:53:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:50.010 22:53:22 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:07:50.010 22:53:22 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:07:50.010 22:53:22 -- nvmf/common.sh@284 -- # xtrace_disable 00:07:50.010 22:53:22 -- common/autotest_common.sh@10 -- # set +x 00:07:56.582 22:53:28 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:56.582 22:53:28 -- nvmf/common.sh@290 -- # pci_devs=() 00:07:56.582 22:53:28 -- nvmf/common.sh@290 -- # local -a pci_devs 00:07:56.582 22:53:28 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:07:56.582 22:53:28 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:07:56.582 22:53:28 -- nvmf/common.sh@292 -- # pci_drivers=() 00:07:56.582 22:53:28 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:07:56.582 22:53:28 -- nvmf/common.sh@294 -- # net_devs=() 00:07:56.582 22:53:28 -- nvmf/common.sh@294 -- # local -ga net_devs 00:07:56.582 22:53:28 -- nvmf/common.sh@295 -- # e810=() 00:07:56.582 22:53:28 -- nvmf/common.sh@295 -- # local -ga e810 00:07:56.582 22:53:28 -- nvmf/common.sh@296 -- # x722=() 00:07:56.582 22:53:28 -- nvmf/common.sh@296 -- # local -ga x722 00:07:56.582 22:53:28 -- nvmf/common.sh@297 -- # mlx=() 00:07:56.582 22:53:28 -- nvmf/common.sh@297 -- # local -ga mlx 00:07:56.582 22:53:28 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:56.582 22:53:28 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:56.582 22:53:28 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:56.582 22:53:28 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:56.582 22:53:28 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:56.582 22:53:28 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:56.582 22:53:28 -- nvmf/common.sh@311 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:56.582 22:53:28 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:56.582 22:53:28 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:56.582 22:53:28 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:56.582 22:53:28 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:56.582 22:53:28 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:07:56.582 22:53:28 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:07:56.582 22:53:28 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:07:56.582 22:53:28 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:07:56.582 22:53:28 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:07:56.582 22:53:28 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:07:56.582 22:53:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:56.582 22:53:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:56.582 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:56.582 22:53:28 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:56.582 22:53:28 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:56.582 22:53:28 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:56.582 22:53:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:56.582 22:53:28 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:56.582 22:53:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:56.582 22:53:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:56.582 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:56.582 22:53:28 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:56.582 22:53:28 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:56.582 22:53:28 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:56.582 22:53:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:56.582 22:53:28 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 
00:07:56.582 22:53:28 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:07:56.582 22:53:28 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:07:56.582 22:53:28 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:07:56.582 22:53:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:56.582 22:53:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:56.582 22:53:28 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:56.582 22:53:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:56.582 22:53:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:56.582 Found net devices under 0000:af:00.0: cvl_0_0 00:07:56.582 22:53:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:56.582 22:53:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:56.582 22:53:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:56.582 22:53:28 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:56.582 22:53:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:56.582 22:53:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:56.582 Found net devices under 0000:af:00.1: cvl_0_1 00:07:56.582 22:53:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:56.582 22:53:28 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:07:56.582 22:53:28 -- nvmf/common.sh@402 -- # is_hw=yes 00:07:56.582 22:53:28 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:07:56.582 22:53:28 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:07:56.582 22:53:28 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:07:56.582 22:53:28 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:56.582 22:53:28 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:56.582 22:53:28 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:56.582 22:53:28 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:07:56.582 22:53:28 -- nvmf/common.sh@235 -- 
# NVMF_TARGET_INTERFACE=cvl_0_0 00:07:56.582 22:53:28 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:56.582 22:53:28 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:07:56.582 22:53:28 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:56.582 22:53:28 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:56.582 22:53:28 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:07:56.582 22:53:28 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:07:56.582 22:53:28 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:07:56.582 22:53:28 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:56.582 22:53:28 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:56.582 22:53:28 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:56.582 22:53:28 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:07:56.582 22:53:28 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:56.582 22:53:28 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:56.582 22:53:28 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:56.582 22:53:28 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:07:56.582 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:56.582 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.303 ms 00:07:56.582 00:07:56.582 --- 10.0.0.2 ping statistics --- 00:07:56.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:56.582 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:07:56.582 22:53:28 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:56.582 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:56.582 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:07:56.582 00:07:56.582 --- 10.0.0.1 ping statistics --- 00:07:56.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:56.582 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:07:56.582 22:53:28 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:56.582 22:53:28 -- nvmf/common.sh@410 -- # return 0 00:07:56.582 22:53:28 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:56.582 22:53:28 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:56.582 22:53:28 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:56.582 22:53:28 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:56.582 22:53:28 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:56.582 22:53:28 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:56.582 22:53:28 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:56.582 22:53:28 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:07:56.582 22:53:28 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:56.582 22:53:28 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:56.582 22:53:28 -- common/autotest_common.sh@10 -- # set +x 00:07:56.582 22:53:28 -- nvmf/common.sh@469 -- # nvmfpid=3063639 00:07:56.582 22:53:28 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:56.582 22:53:28 -- nvmf/common.sh@470 -- # waitforlisten 3063639 00:07:56.582 22:53:28 -- common/autotest_common.sh@819 -- # '[' -z 3063639 ']' 00:07:56.582 22:53:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:56.582 22:53:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:56.582 22:53:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:56.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:56.582 22:53:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:56.582 22:53:28 -- common/autotest_common.sh@10 -- # set +x 00:07:56.582 [2024-07-24 22:53:29.008105] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:56.582 [2024-07-24 22:53:29.008158] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:56.842 EAL: No free 2048 kB hugepages reported on node 1 00:07:56.842 [2024-07-24 22:53:29.083316] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:56.842 [2024-07-24 22:53:29.122003] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:56.842 [2024-07-24 22:53:29.122113] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:56.842 [2024-07-24 22:53:29.122123] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:56.842 [2024-07-24 22:53:29.122132] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:56.842 [2024-07-24 22:53:29.122176] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:56.842 [2024-07-24 22:53:29.122291] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:56.842 [2024-07-24 22:53:29.122364] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:56.842 [2024-07-24 22:53:29.122365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.409 22:53:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:57.409 22:53:29 -- common/autotest_common.sh@852 -- # return 0 00:07:57.409 22:53:29 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:57.409 22:53:29 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:57.409 22:53:29 -- common/autotest_common.sh@10 -- # set +x 00:07:57.669 22:53:29 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:57.669 22:53:29 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:57.669 22:53:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:57.669 22:53:29 -- common/autotest_common.sh@10 -- # set +x 00:07:57.669 [2024-07-24 22:53:29.864247] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:57.669 22:53:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:57.669 22:53:29 -- target/discovery.sh@26 -- # seq 1 4 00:07:57.669 22:53:29 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:57.669 22:53:29 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:07:57.669 22:53:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:57.669 22:53:29 -- common/autotest_common.sh@10 -- # set +x 00:07:57.669 Null1 00:07:57.669 22:53:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:57.669 22:53:29 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:57.669 22:53:29 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:07:57.669 22:53:29 -- common/autotest_common.sh@10 -- # set +x 00:07:57.669 22:53:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:57.669 22:53:29 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:07:57.669 22:53:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:57.669 22:53:29 -- common/autotest_common.sh@10 -- # set +x 00:07:57.669 22:53:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:57.669 22:53:29 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:57.669 22:53:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:57.669 22:53:29 -- common/autotest_common.sh@10 -- # set +x 00:07:57.669 [2024-07-24 22:53:29.916580] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:57.669 22:53:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:57.669 22:53:29 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:57.669 22:53:29 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:07:57.669 22:53:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:57.669 22:53:29 -- common/autotest_common.sh@10 -- # set +x 00:07:57.669 Null2 00:07:57.669 22:53:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:57.669 22:53:29 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:07:57.669 22:53:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:57.669 22:53:29 -- common/autotest_common.sh@10 -- # set +x 00:07:57.669 22:53:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:57.669 22:53:29 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:07:57.669 22:53:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:57.669 22:53:29 -- common/autotest_common.sh@10 -- # set +x 00:07:57.669 
22:53:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:57.669 22:53:29 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:07:57.669 22:53:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:57.669 22:53:29 -- common/autotest_common.sh@10 -- # set +x 00:07:57.669 22:53:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:57.669 22:53:29 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:57.669 22:53:29 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:07:57.669 22:53:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:57.669 22:53:29 -- common/autotest_common.sh@10 -- # set +x 00:07:57.669 Null3 00:07:57.669 22:53:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:57.669 22:53:29 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:07:57.669 22:53:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:57.669 22:53:29 -- common/autotest_common.sh@10 -- # set +x 00:07:57.669 22:53:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:57.669 22:53:29 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:07:57.669 22:53:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:57.669 22:53:29 -- common/autotest_common.sh@10 -- # set +x 00:07:57.669 22:53:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:57.669 22:53:29 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:07:57.669 22:53:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:57.669 22:53:29 -- common/autotest_common.sh@10 -- # set +x 00:07:57.669 22:53:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:57.669 22:53:29 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:57.669 22:53:29 -- target/discovery.sh@27 -- # rpc_cmd 
bdev_null_create Null4 102400 512 00:07:57.669 22:53:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:57.669 22:53:29 -- common/autotest_common.sh@10 -- # set +x 00:07:57.669 Null4 00:07:57.669 22:53:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:57.669 22:53:29 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:07:57.669 22:53:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:57.669 22:53:29 -- common/autotest_common.sh@10 -- # set +x 00:07:57.669 22:53:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:57.669 22:53:30 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:07:57.669 22:53:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:57.669 22:53:30 -- common/autotest_common.sh@10 -- # set +x 00:07:57.669 22:53:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:57.669 22:53:30 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:07:57.669 22:53:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:57.669 22:53:30 -- common/autotest_common.sh@10 -- # set +x 00:07:57.669 22:53:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:57.669 22:53:30 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:57.669 22:53:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:57.669 22:53:30 -- common/autotest_common.sh@10 -- # set +x 00:07:57.669 22:53:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:57.669 22:53:30 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:07:57.669 22:53:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:57.669 22:53:30 -- common/autotest_common.sh@10 -- # set +x 00:07:57.669 22:53:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:57.669 22:53:30 -- 
target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 4420 00:07:57.929 00:07:57.929 Discovery Log Number of Records 6, Generation counter 6 00:07:57.929 =====Discovery Log Entry 0====== 00:07:57.929 trtype: tcp 00:07:57.929 adrfam: ipv4 00:07:57.929 subtype: current discovery subsystem 00:07:57.929 treq: not required 00:07:57.929 portid: 0 00:07:57.929 trsvcid: 4420 00:07:57.929 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:57.929 traddr: 10.0.0.2 00:07:57.929 eflags: explicit discovery connections, duplicate discovery information 00:07:57.929 sectype: none 00:07:57.929 =====Discovery Log Entry 1====== 00:07:57.929 trtype: tcp 00:07:57.929 adrfam: ipv4 00:07:57.929 subtype: nvme subsystem 00:07:57.929 treq: not required 00:07:57.929 portid: 0 00:07:57.929 trsvcid: 4420 00:07:57.929 subnqn: nqn.2016-06.io.spdk:cnode1 00:07:57.929 traddr: 10.0.0.2 00:07:57.929 eflags: none 00:07:57.929 sectype: none 00:07:57.929 =====Discovery Log Entry 2====== 00:07:57.929 trtype: tcp 00:07:57.929 adrfam: ipv4 00:07:57.929 subtype: nvme subsystem 00:07:57.929 treq: not required 00:07:57.929 portid: 0 00:07:57.929 trsvcid: 4420 00:07:57.929 subnqn: nqn.2016-06.io.spdk:cnode2 00:07:57.929 traddr: 10.0.0.2 00:07:57.929 eflags: none 00:07:57.929 sectype: none 00:07:57.929 =====Discovery Log Entry 3====== 00:07:57.929 trtype: tcp 00:07:57.929 adrfam: ipv4 00:07:57.929 subtype: nvme subsystem 00:07:57.929 treq: not required 00:07:57.929 portid: 0 00:07:57.929 trsvcid: 4420 00:07:57.929 subnqn: nqn.2016-06.io.spdk:cnode3 00:07:57.929 traddr: 10.0.0.2 00:07:57.929 eflags: none 00:07:57.929 sectype: none 00:07:57.929 =====Discovery Log Entry 4====== 00:07:57.929 trtype: tcp 00:07:57.929 adrfam: ipv4 00:07:57.929 subtype: nvme subsystem 00:07:57.929 treq: not required 00:07:57.929 portid: 0 00:07:57.929 trsvcid: 4420 00:07:57.929 subnqn: 
nqn.2016-06.io.spdk:cnode4 00:07:57.929 traddr: 10.0.0.2 00:07:57.929 eflags: none 00:07:57.929 sectype: none 00:07:57.929 =====Discovery Log Entry 5====== 00:07:57.929 trtype: tcp 00:07:57.929 adrfam: ipv4 00:07:57.929 subtype: discovery subsystem referral 00:07:57.929 treq: not required 00:07:57.929 portid: 0 00:07:57.929 trsvcid: 4430 00:07:57.929 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:57.929 traddr: 10.0.0.2 00:07:57.929 eflags: none 00:07:57.929 sectype: none 00:07:57.929 22:53:30 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:07:57.929 Perform nvmf subsystem discovery via RPC 00:07:57.929 22:53:30 -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:07:57.929 22:53:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:57.929 22:53:30 -- common/autotest_common.sh@10 -- # set +x 00:07:57.929 [2024-07-24 22:53:30.253627] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:07:57.929 [ 00:07:57.929 { 00:07:57.929 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:07:57.929 "subtype": "Discovery", 00:07:57.929 "listen_addresses": [ 00:07:57.929 { 00:07:57.929 "transport": "TCP", 00:07:57.929 "trtype": "TCP", 00:07:57.929 "adrfam": "IPv4", 00:07:57.929 "traddr": "10.0.0.2", 00:07:57.929 "trsvcid": "4420" 00:07:57.929 } 00:07:57.929 ], 00:07:57.929 "allow_any_host": true, 00:07:57.929 "hosts": [] 00:07:57.929 }, 00:07:57.929 { 00:07:57.929 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:07:57.929 "subtype": "NVMe", 00:07:57.929 "listen_addresses": [ 00:07:57.929 { 00:07:57.929 "transport": "TCP", 00:07:57.929 "trtype": "TCP", 00:07:57.929 "adrfam": "IPv4", 00:07:57.929 "traddr": "10.0.0.2", 00:07:57.929 "trsvcid": "4420" 00:07:57.929 } 00:07:57.929 ], 00:07:57.929 "allow_any_host": true, 00:07:57.929 "hosts": [], 00:07:57.929 "serial_number": "SPDK00000000000001", 00:07:57.929 "model_number": 
"SPDK bdev Controller", 00:07:57.929 "max_namespaces": 32, 00:07:57.929 "min_cntlid": 1, 00:07:57.929 "max_cntlid": 65519, 00:07:57.929 "namespaces": [ 00:07:57.929 { 00:07:57.929 "nsid": 1, 00:07:57.929 "bdev_name": "Null1", 00:07:57.929 "name": "Null1", 00:07:57.929 "nguid": "65C54A60F0F44076B8C8D7CF52C115AD", 00:07:57.929 "uuid": "65c54a60-f0f4-4076-b8c8-d7cf52c115ad" 00:07:57.929 } 00:07:57.929 ] 00:07:57.929 }, 00:07:57.929 { 00:07:57.929 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:07:57.929 "subtype": "NVMe", 00:07:57.929 "listen_addresses": [ 00:07:57.929 { 00:07:57.929 "transport": "TCP", 00:07:57.929 "trtype": "TCP", 00:07:57.929 "adrfam": "IPv4", 00:07:57.929 "traddr": "10.0.0.2", 00:07:57.929 "trsvcid": "4420" 00:07:57.929 } 00:07:57.929 ], 00:07:57.929 "allow_any_host": true, 00:07:57.929 "hosts": [], 00:07:57.929 "serial_number": "SPDK00000000000002", 00:07:57.929 "model_number": "SPDK bdev Controller", 00:07:57.929 "max_namespaces": 32, 00:07:57.929 "min_cntlid": 1, 00:07:57.929 "max_cntlid": 65519, 00:07:57.929 "namespaces": [ 00:07:57.929 { 00:07:57.929 "nsid": 1, 00:07:57.929 "bdev_name": "Null2", 00:07:57.929 "name": "Null2", 00:07:57.929 "nguid": "844874BBD57D400488AD7DF7FA591EFD", 00:07:57.929 "uuid": "844874bb-d57d-4004-88ad-7df7fa591efd" 00:07:57.929 } 00:07:57.929 ] 00:07:57.929 }, 00:07:57.929 { 00:07:57.929 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:07:57.929 "subtype": "NVMe", 00:07:57.929 "listen_addresses": [ 00:07:57.929 { 00:07:57.929 "transport": "TCP", 00:07:57.929 "trtype": "TCP", 00:07:57.929 "adrfam": "IPv4", 00:07:57.929 "traddr": "10.0.0.2", 00:07:57.929 "trsvcid": "4420" 00:07:57.929 } 00:07:57.929 ], 00:07:57.929 "allow_any_host": true, 00:07:57.929 "hosts": [], 00:07:57.929 "serial_number": "SPDK00000000000003", 00:07:57.929 "model_number": "SPDK bdev Controller", 00:07:57.929 "max_namespaces": 32, 00:07:57.929 "min_cntlid": 1, 00:07:57.929 "max_cntlid": 65519, 00:07:57.929 "namespaces": [ 00:07:57.929 { 00:07:57.929 "nsid": 1, 
00:07:57.929 "bdev_name": "Null3", 00:07:57.929 "name": "Null3", 00:07:57.929 "nguid": "C07E666A10A342BEB9571E9CBEA70C53", 00:07:57.929 "uuid": "c07e666a-10a3-42be-b957-1e9cbea70c53" 00:07:57.929 } 00:07:57.929 ] 00:07:57.929 }, 00:07:57.929 { 00:07:57.929 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:07:57.929 "subtype": "NVMe", 00:07:57.929 "listen_addresses": [ 00:07:57.929 { 00:07:57.929 "transport": "TCP", 00:07:57.929 "trtype": "TCP", 00:07:57.929 "adrfam": "IPv4", 00:07:57.929 "traddr": "10.0.0.2", 00:07:57.929 "trsvcid": "4420" 00:07:57.929 } 00:07:57.929 ], 00:07:57.929 "allow_any_host": true, 00:07:57.929 "hosts": [], 00:07:57.929 "serial_number": "SPDK00000000000004", 00:07:57.929 "model_number": "SPDK bdev Controller", 00:07:57.929 "max_namespaces": 32, 00:07:57.929 "min_cntlid": 1, 00:07:57.929 "max_cntlid": 65519, 00:07:57.929 "namespaces": [ 00:07:57.929 { 00:07:57.929 "nsid": 1, 00:07:57.929 "bdev_name": "Null4", 00:07:57.929 "name": "Null4", 00:07:57.929 "nguid": "175A391C0B03448C8695F272F57D5A50", 00:07:57.929 "uuid": "175a391c-0b03-448c-8695-f272f57d5a50" 00:07:57.929 } 00:07:57.929 ] 00:07:57.929 } 00:07:57.929 ] 00:07:57.929 22:53:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:57.929 22:53:30 -- target/discovery.sh@42 -- # seq 1 4 00:07:57.929 22:53:30 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:57.929 22:53:30 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:57.929 22:53:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:57.929 22:53:30 -- common/autotest_common.sh@10 -- # set +x 00:07:57.929 22:53:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:57.930 22:53:30 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:07:57.930 22:53:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:57.930 22:53:30 -- common/autotest_common.sh@10 -- # set +x 00:07:57.930 22:53:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:57.930 22:53:30 -- 
target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:57.930 22:53:30 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:07:57.930 22:53:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:57.930 22:53:30 -- common/autotest_common.sh@10 -- # set +x 00:07:57.930 22:53:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:57.930 22:53:30 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:07:57.930 22:53:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:57.930 22:53:30 -- common/autotest_common.sh@10 -- # set +x 00:07:57.930 22:53:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:57.930 22:53:30 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:57.930 22:53:30 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:07:57.930 22:53:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:57.930 22:53:30 -- common/autotest_common.sh@10 -- # set +x 00:07:57.930 22:53:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:57.930 22:53:30 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:07:57.930 22:53:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:57.930 22:53:30 -- common/autotest_common.sh@10 -- # set +x 00:07:57.930 22:53:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:57.930 22:53:30 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:57.930 22:53:30 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:07:57.930 22:53:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:57.930 22:53:30 -- common/autotest_common.sh@10 -- # set +x 00:07:57.930 22:53:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:57.930 22:53:30 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:07:57.930 22:53:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:57.930 22:53:30 -- common/autotest_common.sh@10 -- # set +x 00:07:57.930 
22:53:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:57.930 22:53:30 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:07:58.189 22:53:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:58.189 22:53:30 -- common/autotest_common.sh@10 -- # set +x 00:07:58.190 22:53:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:58.190 22:53:30 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:07:58.190 22:53:30 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:07:58.190 22:53:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:58.190 22:53:30 -- common/autotest_common.sh@10 -- # set +x 00:07:58.190 22:53:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:58.190 22:53:30 -- target/discovery.sh@49 -- # check_bdevs= 00:07:58.190 22:53:30 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:07:58.190 22:53:30 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:07:58.190 22:53:30 -- target/discovery.sh@57 -- # nvmftestfini 00:07:58.190 22:53:30 -- nvmf/common.sh@476 -- # nvmfcleanup 00:07:58.190 22:53:30 -- nvmf/common.sh@116 -- # sync 00:07:58.190 22:53:30 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:07:58.190 22:53:30 -- nvmf/common.sh@119 -- # set +e 00:07:58.190 22:53:30 -- nvmf/common.sh@120 -- # for i in {1..20} 00:07:58.190 22:53:30 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:07:58.190 rmmod nvme_tcp 00:07:58.190 rmmod nvme_fabrics 00:07:58.190 rmmod nvme_keyring 00:07:58.190 22:53:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:07:58.190 22:53:30 -- nvmf/common.sh@123 -- # set -e 00:07:58.190 22:53:30 -- nvmf/common.sh@124 -- # return 0 00:07:58.190 22:53:30 -- nvmf/common.sh@477 -- # '[' -n 3063639 ']' 00:07:58.190 22:53:30 -- nvmf/common.sh@478 -- # killprocess 3063639 00:07:58.190 22:53:30 -- common/autotest_common.sh@926 -- # '[' -z 3063639 ']' 00:07:58.190 22:53:30 -- common/autotest_common.sh@930 -- # kill -0 3063639 00:07:58.190 
22:53:30 -- common/autotest_common.sh@931 -- # uname 00:07:58.190 22:53:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:58.190 22:53:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3063639 00:07:58.190 22:53:30 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:58.190 22:53:30 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:58.190 22:53:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3063639' 00:07:58.190 killing process with pid 3063639 00:07:58.190 22:53:30 -- common/autotest_common.sh@945 -- # kill 3063639 00:07:58.190 [2024-07-24 22:53:30.547026] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:07:58.190 22:53:30 -- common/autotest_common.sh@950 -- # wait 3063639 00:07:58.449 22:53:30 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:07:58.449 22:53:30 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:07:58.449 22:53:30 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:07:58.449 22:53:30 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:58.449 22:53:30 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:07:58.449 22:53:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:58.449 22:53:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:58.449 22:53:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:00.987 22:53:32 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:08:00.987 00:08:00.987 real 0m10.761s 00:08:00.987 user 0m8.429s 00:08:00.987 sys 0m5.553s 00:08:00.987 22:53:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:00.987 22:53:32 -- common/autotest_common.sh@10 -- # set +x 00:08:00.987 ************************************ 00:08:00.987 END TEST nvmf_discovery 00:08:00.987 ************************************ 00:08:00.987 22:53:32 -- 
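The nvmf_discovery teardown above iterates `seq 1 4`, removing each test subsystem and then its backing null bdev over JSON-RPC. A dry-run sketch of that loop (commands are collected and printed rather than executed; the NQN prefix and bdev names are taken from the log, and each printed line would be passed to SPDK's `scripts/rpc.py` against a live target):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the discovery-test teardown loop: for each of the
# four test subsystems, issue nvmf_delete_subsystem followed by
# bdev_null_delete for its backing bdev. The commands are only printed
# here; feed them to SPDK's scripts/rpc.py to run them for real.
set -u

cmds=()
for i in 1 2 3 4; do
    cmds+=("nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode${i}")
    cmds+=("bdev_null_delete Null${i}")
done
printf '%s\n' "${cmds[@]}"
```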
nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:00.987 22:53:32 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:00.987 22:53:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:00.987 22:53:32 -- common/autotest_common.sh@10 -- # set +x 00:08:00.987 ************************************ 00:08:00.987 START TEST nvmf_referrals 00:08:00.987 ************************************ 00:08:00.987 22:53:32 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:00.987 * Looking for test storage... 00:08:00.987 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:00.987 22:53:32 -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:00.987 22:53:32 -- nvmf/common.sh@7 -- # uname -s 00:08:00.987 22:53:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:00.987 22:53:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:00.987 22:53:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:00.987 22:53:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:00.987 22:53:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:00.987 22:53:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:00.987 22:53:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:00.987 22:53:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:00.987 22:53:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:00.987 22:53:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:00.987 22:53:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:08:00.987 22:53:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:08:00.987 22:53:32 -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:00.987 22:53:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:00.987 22:53:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:00.987 22:53:32 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:00.987 22:53:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:00.987 22:53:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:00.987 22:53:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:00.987 22:53:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.987 22:53:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.987 22:53:32 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.987 22:53:32 -- paths/export.sh@5 -- # export PATH 00:08:00.987 22:53:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.987 22:53:32 -- nvmf/common.sh@46 -- # : 0 00:08:00.987 22:53:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:00.987 22:53:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:00.987 22:53:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:00.987 22:53:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:00.987 22:53:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:00.987 22:53:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:00.987 22:53:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:00.987 22:53:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:00.987 22:53:32 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:00.987 22:53:32 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:00.987 22:53:32 -- 
target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:08:00.987 22:53:32 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:00.987 22:53:32 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:00.987 22:53:32 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:00.987 22:53:32 -- target/referrals.sh@37 -- # nvmftestinit 00:08:00.987 22:53:32 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:00.987 22:53:32 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:00.987 22:53:32 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:00.987 22:53:32 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:00.987 22:53:32 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:00.987 22:53:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:00.987 22:53:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:00.987 22:53:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:00.987 22:53:32 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:00.987 22:53:32 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:00.987 22:53:32 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:00.987 22:53:32 -- common/autotest_common.sh@10 -- # set +x 00:08:07.655 22:53:39 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:07.655 22:53:39 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:07.655 22:53:39 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:07.655 22:53:39 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:07.655 22:53:39 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:07.655 22:53:39 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:07.655 22:53:39 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:07.655 22:53:39 -- nvmf/common.sh@294 -- # net_devs=() 00:08:07.655 22:53:39 -- nvmf/common.sh@294 -- # local -ga net_devs 00:08:07.655 22:53:39 -- nvmf/common.sh@295 -- # e810=() 00:08:07.655 22:53:39 -- nvmf/common.sh@295 -- # local 
-ga e810 00:08:07.655 22:53:39 -- nvmf/common.sh@296 -- # x722=() 00:08:07.655 22:53:39 -- nvmf/common.sh@296 -- # local -ga x722 00:08:07.655 22:53:39 -- nvmf/common.sh@297 -- # mlx=() 00:08:07.655 22:53:39 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:07.655 22:53:39 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:07.655 22:53:39 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:07.655 22:53:39 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:07.655 22:53:39 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:07.655 22:53:39 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:07.655 22:53:39 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:07.655 22:53:39 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:07.655 22:53:39 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:07.655 22:53:39 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:07.655 22:53:39 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:07.655 22:53:39 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:07.655 22:53:39 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:07.655 22:53:39 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:08:07.655 22:53:39 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:08:07.655 22:53:39 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:08:07.655 22:53:39 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:08:07.655 22:53:39 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:07.655 22:53:39 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:07.655 22:53:39 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:07.655 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:07.655 22:53:39 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:07.655 22:53:39 -- 
nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:07.655 22:53:39 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:07.655 22:53:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:07.655 22:53:39 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:07.655 22:53:39 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:07.655 22:53:39 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:07.655 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:07.655 22:53:39 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:07.655 22:53:39 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:07.655 22:53:39 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:07.655 22:53:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:07.655 22:53:39 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:07.655 22:53:39 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:07.655 22:53:39 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:08:07.655 22:53:39 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:08:07.655 22:53:39 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:07.655 22:53:39 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:07.655 22:53:39 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:07.655 22:53:39 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:07.655 22:53:39 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:07.655 Found net devices under 0000:af:00.0: cvl_0_0 00:08:07.655 22:53:39 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:07.655 22:53:39 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:07.655 22:53:39 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:07.655 22:53:39 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:07.655 22:53:39 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:07.655 22:53:39 -- nvmf/common.sh@388 -- # echo 
'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:07.655 Found net devices under 0000:af:00.1: cvl_0_1 00:08:07.655 22:53:39 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:07.655 22:53:39 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:07.655 22:53:39 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:07.655 22:53:39 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:07.655 22:53:39 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:08:07.655 22:53:39 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:08:07.655 22:53:39 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:07.656 22:53:39 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:07.656 22:53:39 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:07.656 22:53:39 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:08:07.656 22:53:39 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:07.656 22:53:39 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:07.656 22:53:39 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:08:07.656 22:53:39 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:07.656 22:53:39 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:07.656 22:53:39 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:08:07.656 22:53:39 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:08:07.656 22:53:39 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:08:07.656 22:53:39 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:07.656 22:53:39 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:07.656 22:53:39 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:07.656 22:53:39 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:08:07.656 22:53:39 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:07.656 22:53:39 -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:08:07.656 22:53:39 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:07.656 22:53:39 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:08:07.656 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:07.656 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.307 ms 00:08:07.656 00:08:07.656 --- 10.0.0.2 ping statistics --- 00:08:07.656 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:07.656 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:08:07.656 22:53:39 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:07.656 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:07.656 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:08:07.656 00:08:07.656 --- 10.0.0.1 ping statistics --- 00:08:07.656 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:07.656 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:08:07.656 22:53:39 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:07.656 22:53:39 -- nvmf/common.sh@410 -- # return 0 00:08:07.656 22:53:39 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:07.656 22:53:39 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:07.656 22:53:39 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:07.656 22:53:39 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:07.656 22:53:39 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:07.656 22:53:39 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:07.656 22:53:39 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:07.656 22:53:39 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:07.656 22:53:39 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:07.656 22:53:39 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:07.656 22:53:39 -- common/autotest_common.sh@10 -- # set +x 00:08:07.656 22:53:39 -- nvmf/common.sh@469 -- # nvmfpid=3067637 00:08:07.656 22:53:39 
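The nvmf_tcp_init steps above move the target-side port of the NIC pair into a private network namespace so that the target (10.0.0.2) and initiator (10.0.0.1) can exchange real TCP traffic on a single host, verified by the two pings. A dry-run sketch of that setup, using the device and namespace names from the log (printed rather than executed, since the real commands need root):

```shell
#!/usr/bin/env bash
# Dry-run of the TCP test topology from nvmf_tcp_init: the target-side
# port (cvl_0_0) is moved into namespace cvl_0_0_ns_spdk and assigned
# 10.0.0.2/24, the initiator-side port (cvl_0_1) stays in the root
# namespace with 10.0.0.1/24, and TCP port 4420 is opened for NVMe/TCP.
set -u

NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0      # target-side port (lives inside the namespace)
INI_IF=cvl_0_1      # initiator-side port (root namespace)

setup=(
    "ip netns add ${NS}"
    "ip link set ${TGT_IF} netns ${NS}"
    "ip addr add 10.0.0.1/24 dev ${INI_IF}"
    "ip netns exec ${NS} ip addr add 10.0.0.2/24 dev ${TGT_IF}"
    "ip link set ${INI_IF} up"
    "ip netns exec ${NS} ip link set ${TGT_IF} up"
    "ip netns exec ${NS} ip link set lo up"
    "iptables -I INPUT 1 -i ${INI_IF} -p tcp --dport 4420 -j ACCEPT"
)
printf '%s\n' "${setup[@]}"
```

With this layout, the ping checks in the log run in both directions: `ping -c 1 10.0.0.2` from the root namespace, and `ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1` from inside the namespace.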
-- nvmf/common.sh@470 -- # waitforlisten 3067637 00:08:07.656 22:53:39 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:07.656 22:53:39 -- common/autotest_common.sh@819 -- # '[' -z 3067637 ']' 00:08:07.656 22:53:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:07.656 22:53:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:07.656 22:53:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:07.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:07.656 22:53:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:07.656 22:53:39 -- common/autotest_common.sh@10 -- # set +x 00:08:07.656 [2024-07-24 22:53:39.807098] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:08:07.656 [2024-07-24 22:53:39.807154] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:07.656 EAL: No free 2048 kB hugepages reported on node 1 00:08:07.656 [2024-07-24 22:53:39.882361] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:07.656 [2024-07-24 22:53:39.920871] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:07.656 [2024-07-24 22:53:39.920978] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:07.656 [2024-07-24 22:53:39.920987] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:07.656 [2024-07-24 22:53:39.920995] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:07.656 [2024-07-24 22:53:39.921045] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:07.656 [2024-07-24 22:53:39.921158] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:07.656 [2024-07-24 22:53:39.921241] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:07.656 [2024-07-24 22:53:39.921242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.223 22:53:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:08.223 22:53:40 -- common/autotest_common.sh@852 -- # return 0 00:08:08.223 22:53:40 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:08.223 22:53:40 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:08.223 22:53:40 -- common/autotest_common.sh@10 -- # set +x 00:08:08.482 22:53:40 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:08.482 22:53:40 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:08.482 22:53:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:08.482 22:53:40 -- common/autotest_common.sh@10 -- # set +x 00:08:08.482 [2024-07-24 22:53:40.671141] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:08.482 22:53:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:08.482 22:53:40 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:08.482 22:53:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:08.482 22:53:40 -- common/autotest_common.sh@10 -- # set +x 00:08:08.482 [2024-07-24 22:53:40.687339] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:08:08.482 22:53:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:08.482 22:53:40 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:08.482 22:53:40 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:08:08.482 22:53:40 -- common/autotest_common.sh@10 -- # set +x 00:08:08.482 22:53:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:08.482 22:53:40 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:08.482 22:53:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:08.482 22:53:40 -- common/autotest_common.sh@10 -- # set +x 00:08:08.482 22:53:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:08.482 22:53:40 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:08.482 22:53:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:08.482 22:53:40 -- common/autotest_common.sh@10 -- # set +x 00:08:08.482 22:53:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:08.482 22:53:40 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:08.482 22:53:40 -- target/referrals.sh@48 -- # jq length 00:08:08.482 22:53:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:08.482 22:53:40 -- common/autotest_common.sh@10 -- # set +x 00:08:08.482 22:53:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:08.482 22:53:40 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:08.482 22:53:40 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:08.482 22:53:40 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:08.482 22:53:40 -- target/referrals.sh@21 -- # sort 00:08:08.482 22:53:40 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:08.482 22:53:40 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:08.482 22:53:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:08.482 22:53:40 -- common/autotest_common.sh@10 -- # set +x 00:08:08.482 22:53:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:08.482 22:53:40 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:08.482 22:53:40 -- target/referrals.sh@49 -- # [[ 127.0.0.2 
127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:08.482 22:53:40 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:08.482 22:53:40 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:08.482 22:53:40 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:08.482 22:53:40 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:08.482 22:53:40 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:08.482 22:53:40 -- target/referrals.sh@26 -- # sort 00:08:08.742 22:53:41 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:08.742 22:53:41 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:08.742 22:53:41 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:08.742 22:53:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:08.742 22:53:41 -- common/autotest_common.sh@10 -- # set +x 00:08:08.742 22:53:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:08.742 22:53:41 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:08.742 22:53:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:08.742 22:53:41 -- common/autotest_common.sh@10 -- # set +x 00:08:08.742 22:53:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:08.742 22:53:41 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:08.742 22:53:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:08.742 22:53:41 -- common/autotest_common.sh@10 -- # set +x 00:08:08.742 22:53:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:08.742 22:53:41 -- 
target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:08.742 22:53:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:08.742 22:53:41 -- common/autotest_common.sh@10 -- # set +x 00:08:08.742 22:53:41 -- target/referrals.sh@56 -- # jq length 00:08:08.742 22:53:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:08.742 22:53:41 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:08.742 22:53:41 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:08.742 22:53:41 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:08.742 22:53:41 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:08.742 22:53:41 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:08.742 22:53:41 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:08.742 22:53:41 -- target/referrals.sh@26 -- # sort 00:08:09.002 22:53:41 -- target/referrals.sh@26 -- # echo 00:08:09.002 22:53:41 -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:09.002 22:53:41 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:08:09.002 22:53:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:09.002 22:53:41 -- common/autotest_common.sh@10 -- # set +x 00:08:09.002 22:53:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:09.002 22:53:41 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:09.002 22:53:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:09.002 22:53:41 -- common/autotest_common.sh@10 -- # set +x 00:08:09.002 22:53:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:09.002 22:53:41 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:09.002 22:53:41 -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:09.002 22:53:41 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:09.002 22:53:41 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:09.002 22:53:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:09.002 22:53:41 -- common/autotest_common.sh@10 -- # set +x 00:08:09.002 22:53:41 -- target/referrals.sh@21 -- # sort 00:08:09.002 22:53:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:09.002 22:53:41 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:09.002 22:53:41 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:09.002 22:53:41 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:09.002 22:53:41 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:09.002 22:53:41 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:09.002 22:53:41 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:09.002 22:53:41 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:09.002 22:53:41 -- target/referrals.sh@26 -- # sort 00:08:09.261 22:53:41 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:09.261 22:53:41 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:09.261 22:53:41 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:09.261 22:53:41 -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:09.261 22:53:41 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:09.262 22:53:41 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 
00:08:09.262 22:53:41 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:09.262 22:53:41 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:09.262 22:53:41 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:09.262 22:53:41 -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:09.262 22:53:41 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:09.262 22:53:41 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:09.262 22:53:41 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:09.262 22:53:41 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:09.262 22:53:41 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:09.262 22:53:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:09.262 22:53:41 -- common/autotest_common.sh@10 -- # set +x 00:08:09.262 22:53:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:09.262 22:53:41 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:09.262 22:53:41 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:09.262 22:53:41 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:09.262 22:53:41 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:09.262 22:53:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:09.262 22:53:41 -- target/referrals.sh@21 -- # sort 00:08:09.262 22:53:41 -- common/autotest_common.sh@10 -- # set +x 00:08:09.262 22:53:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
00:08:09.521 22:53:41 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:09.521 22:53:41 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:09.521 22:53:41 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:09.521 22:53:41 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:09.521 22:53:41 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:09.521 22:53:41 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:09.521 22:53:41 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:09.521 22:53:41 -- target/referrals.sh@26 -- # sort 00:08:09.521 22:53:41 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:09.521 22:53:41 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:09.521 22:53:41 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:09.521 22:53:41 -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:09.521 22:53:41 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:09.521 22:53:41 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:09.521 22:53:41 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:09.521 22:53:41 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:09.521 22:53:41 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:09.521 22:53:41 -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:09.521 22:53:41 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:09.521 22:53:41 -- target/referrals.sh@33 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:09.521 22:53:41 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:09.781 22:53:42 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:09.781 22:53:42 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:09.781 22:53:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:09.781 22:53:42 -- common/autotest_common.sh@10 -- # set +x 00:08:09.781 22:53:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:09.781 22:53:42 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:09.781 22:53:42 -- target/referrals.sh@82 -- # jq length 00:08:09.781 22:53:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:09.781 22:53:42 -- common/autotest_common.sh@10 -- # set +x 00:08:09.781 22:53:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:09.781 22:53:42 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:09.781 22:53:42 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:09.781 22:53:42 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:09.781 22:53:42 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:09.781 22:53:42 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:09.781 22:53:42 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:09.781 22:53:42 -- target/referrals.sh@26 -- # sort 00:08:09.781 22:53:42 -- target/referrals.sh@26 -- # echo 00:08:09.781 22:53:42 -- 
target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:09.781 22:53:42 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:09.781 22:53:42 -- target/referrals.sh@86 -- # nvmftestfini 00:08:09.781 22:53:42 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:09.781 22:53:42 -- nvmf/common.sh@116 -- # sync 00:08:09.781 22:53:42 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:09.781 22:53:42 -- nvmf/common.sh@119 -- # set +e 00:08:09.781 22:53:42 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:09.781 22:53:42 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:09.781 rmmod nvme_tcp 00:08:10.041 rmmod nvme_fabrics 00:08:10.041 rmmod nvme_keyring 00:08:10.041 22:53:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:10.041 22:53:42 -- nvmf/common.sh@123 -- # set -e 00:08:10.041 22:53:42 -- nvmf/common.sh@124 -- # return 0 00:08:10.041 22:53:42 -- nvmf/common.sh@477 -- # '[' -n 3067637 ']' 00:08:10.041 22:53:42 -- nvmf/common.sh@478 -- # killprocess 3067637 00:08:10.041 22:53:42 -- common/autotest_common.sh@926 -- # '[' -z 3067637 ']' 00:08:10.041 22:53:42 -- common/autotest_common.sh@930 -- # kill -0 3067637 00:08:10.041 22:53:42 -- common/autotest_common.sh@931 -- # uname 00:08:10.041 22:53:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:10.041 22:53:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3067637 00:08:10.041 22:53:42 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:10.041 22:53:42 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:10.041 22:53:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3067637' 00:08:10.041 killing process with pid 3067637 00:08:10.041 22:53:42 -- common/autotest_common.sh@945 -- # kill 3067637 00:08:10.041 22:53:42 -- common/autotest_common.sh@950 -- # wait 3067637 00:08:10.301 22:53:42 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:10.301 22:53:42 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:10.301 
22:53:42 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:10.301 22:53:42 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:10.301 22:53:42 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:10.301 22:53:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:10.301 22:53:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:10.301 22:53:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:12.208 22:53:44 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:08:12.208 00:08:12.208 real 0m11.724s 00:08:12.208 user 0m12.927s 00:08:12.208 sys 0m5.955s 00:08:12.208 22:53:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:12.208 22:53:44 -- common/autotest_common.sh@10 -- # set +x 00:08:12.208 ************************************ 00:08:12.208 END TEST nvmf_referrals 00:08:12.208 ************************************ 00:08:12.209 22:53:44 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:12.209 22:53:44 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:12.209 22:53:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:12.209 22:53:44 -- common/autotest_common.sh@10 -- # set +x 00:08:12.209 ************************************ 00:08:12.209 START TEST nvmf_connect_disconnect 00:08:12.209 ************************************ 00:08:12.209 22:53:44 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:12.468 * Looking for test storage... 
00:08:12.469 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:12.469 22:53:44 -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:12.469 22:53:44 -- nvmf/common.sh@7 -- # uname -s 00:08:12.469 22:53:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:12.469 22:53:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:12.469 22:53:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:12.469 22:53:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:12.469 22:53:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:12.469 22:53:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:12.469 22:53:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:12.469 22:53:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:12.469 22:53:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:12.469 22:53:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:12.469 22:53:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:08:12.469 22:53:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:08:12.469 22:53:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:12.469 22:53:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:12.469 22:53:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:12.469 22:53:44 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:12.469 22:53:44 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:12.469 22:53:44 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:12.469 22:53:44 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:12.469 22:53:44 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.469 22:53:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.469 22:53:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.469 22:53:44 -- paths/export.sh@5 -- # export PATH 00:08:12.469 22:53:44 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.469 22:53:44 -- nvmf/common.sh@46 -- # : 0 00:08:12.469 22:53:44 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:12.469 22:53:44 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:12.469 22:53:44 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:12.469 22:53:44 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:12.469 22:53:44 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:12.469 22:53:44 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:12.469 22:53:44 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:12.469 22:53:44 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:12.469 22:53:44 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:12.469 22:53:44 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:12.469 22:53:44 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:12.469 22:53:44 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:12.469 22:53:44 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:12.469 22:53:44 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:12.469 22:53:44 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:12.469 22:53:44 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:12.469 22:53:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:12.469 22:53:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:12.469 22:53:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:08:12.469 22:53:44 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:12.469 22:53:44 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:12.469 22:53:44 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:12.469 22:53:44 -- common/autotest_common.sh@10 -- # set +x 00:08:19.048 22:53:50 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:19.048 22:53:50 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:19.048 22:53:50 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:19.048 22:53:50 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:19.048 22:53:50 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:19.048 22:53:50 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:19.048 22:53:50 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:19.048 22:53:50 -- nvmf/common.sh@294 -- # net_devs=() 00:08:19.048 22:53:50 -- nvmf/common.sh@294 -- # local -ga net_devs 00:08:19.048 22:53:50 -- nvmf/common.sh@295 -- # e810=() 00:08:19.048 22:53:50 -- nvmf/common.sh@295 -- # local -ga e810 00:08:19.048 22:53:50 -- nvmf/common.sh@296 -- # x722=() 00:08:19.048 22:53:50 -- nvmf/common.sh@296 -- # local -ga x722 00:08:19.048 22:53:50 -- nvmf/common.sh@297 -- # mlx=() 00:08:19.048 22:53:50 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:19.048 22:53:50 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:19.048 22:53:50 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:19.048 22:53:50 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:19.048 22:53:50 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:19.048 22:53:50 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:19.048 22:53:50 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:19.048 22:53:50 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:19.048 22:53:50 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 
00:08:19.048 22:53:50 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:19.048 22:53:50 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:19.048 22:53:50 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:19.048 22:53:50 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:19.048 22:53:50 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:08:19.048 22:53:50 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:08:19.048 22:53:50 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:08:19.048 22:53:50 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:08:19.048 22:53:50 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:19.048 22:53:50 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:19.048 22:53:50 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:19.048 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:19.048 22:53:50 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:19.048 22:53:50 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:19.048 22:53:50 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:19.048 22:53:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:19.048 22:53:50 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:19.048 22:53:50 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:19.048 22:53:50 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:19.048 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:19.048 22:53:50 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:19.048 22:53:50 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:19.048 22:53:50 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:19.048 22:53:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:19.048 22:53:50 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:19.048 22:53:50 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:19.048 22:53:50 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:08:19.048 
22:53:50 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:08:19.048 22:53:50 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:19.048 22:53:50 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:19.048 22:53:50 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:19.048 22:53:50 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:19.048 22:53:50 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:19.048 Found net devices under 0000:af:00.0: cvl_0_0 00:08:19.048 22:53:50 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:19.048 22:53:50 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:19.048 22:53:50 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:19.048 22:53:50 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:19.048 22:53:50 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:19.048 22:53:50 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:19.048 Found net devices under 0000:af:00.1: cvl_0_1 00:08:19.048 22:53:50 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:19.048 22:53:50 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:19.048 22:53:50 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:19.048 22:53:50 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:19.048 22:53:50 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:08:19.048 22:53:50 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:08:19.048 22:53:50 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:19.049 22:53:50 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:19.049 22:53:50 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:19.049 22:53:50 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:08:19.049 22:53:50 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:19.049 22:53:50 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:19.049 22:53:50 -- 
nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:08:19.049 22:53:50 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:19.049 22:53:50 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:19.049 22:53:51 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:08:19.049 22:53:51 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:08:19.049 22:53:51 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:08:19.049 22:53:51 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:19.049 22:53:51 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:19.049 22:53:51 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:19.049 22:53:51 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:08:19.049 22:53:51 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:19.049 22:53:51 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:19.049 22:53:51 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:19.049 22:53:51 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:08:19.049 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:19.049 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:08:19.049 00:08:19.049 --- 10.0.0.2 ping statistics --- 00:08:19.049 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:19.049 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:08:19.049 22:53:51 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:19.049 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:19.049 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.226 ms 00:08:19.049 00:08:19.049 --- 10.0.0.1 ping statistics --- 00:08:19.049 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:19.049 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:08:19.049 22:53:51 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:19.049 22:53:51 -- nvmf/common.sh@410 -- # return 0 00:08:19.049 22:53:51 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:19.049 22:53:51 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:19.049 22:53:51 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:19.049 22:53:51 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:19.049 22:53:51 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:19.049 22:53:51 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:19.049 22:53:51 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:19.049 22:53:51 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:19.049 22:53:51 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:19.049 22:53:51 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:19.049 22:53:51 -- common/autotest_common.sh@10 -- # set +x 00:08:19.049 22:53:51 -- nvmf/common.sh@469 -- # nvmfpid=3071831 00:08:19.049 22:53:51 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:19.049 22:53:51 -- nvmf/common.sh@470 -- # waitforlisten 3071831 00:08:19.049 22:53:51 -- common/autotest_common.sh@819 -- # '[' -z 3071831 ']' 00:08:19.049 22:53:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:19.049 22:53:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:19.049 22:53:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:19.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:19.049 22:53:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:19.049 22:53:51 -- common/autotest_common.sh@10 -- # set +x 00:08:19.049 [2024-07-24 22:53:51.392125] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:08:19.049 [2024-07-24 22:53:51.392177] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:19.049 EAL: No free 2048 kB hugepages reported on node 1 00:08:19.049 [2024-07-24 22:53:51.468508] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:19.309 [2024-07-24 22:53:51.508644] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:19.309 [2024-07-24 22:53:51.508755] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:19.309 [2024-07-24 22:53:51.508766] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:19.309 [2024-07-24 22:53:51.508775] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:19.309 [2024-07-24 22:53:51.508812] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:19.309 [2024-07-24 22:53:51.508831] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:19.309 [2024-07-24 22:53:51.508921] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:19.309 [2024-07-24 22:53:51.508923] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.878 22:53:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:19.878 22:53:52 -- common/autotest_common.sh@852 -- # return 0 00:08:19.878 22:53:52 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:19.879 22:53:52 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:19.879 22:53:52 -- common/autotest_common.sh@10 -- # set +x 00:08:19.879 22:53:52 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:19.879 22:53:52 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:19.879 22:53:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:19.879 22:53:52 -- common/autotest_common.sh@10 -- # set +x 00:08:19.879 [2024-07-24 22:53:52.249036] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:19.879 22:53:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:19.879 22:53:52 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:19.879 22:53:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:19.879 22:53:52 -- common/autotest_common.sh@10 -- # set +x 00:08:19.879 22:53:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:19.879 22:53:52 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:19.879 22:53:52 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:19.879 22:53:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:19.879 22:53:52 -- 
common/autotest_common.sh@10 -- # set +x 00:08:19.879 22:53:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:19.879 22:53:52 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:19.879 22:53:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:19.879 22:53:52 -- common/autotest_common.sh@10 -- # set +x 00:08:19.879 22:53:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:19.879 22:53:52 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:19.879 22:53:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:19.879 22:53:52 -- common/autotest_common.sh@10 -- # set +x 00:08:19.879 [2024-07-24 22:53:52.303825] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:19.879 22:53:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:19.879 22:53:52 -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:08:19.879 22:53:52 -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:08:19.879 22:53:52 -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:08:20.139 22:53:52 -- target/connect_disconnect.sh@34 -- # set +x 00:08:22.685 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
[... the "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" message repeats once per iteration for all 100 connect/disconnect cycles (timestamps 00:08:24.587 through 00:12:12.266); the identical repeats are elided ...]
00:12:12.266 22:57:44 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:12.266 22:57:44 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:12.266 22:57:44 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:12.266 22:57:44 -- nvmf/common.sh@116 -- # sync 00:12:12.266 22:57:44 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:12.266 22:57:44 -- nvmf/common.sh@119 -- # set +e 00:12:12.266 22:57:44 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:12.266 22:57:44 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:12.266 rmmod nvme_tcp 00:12:12.266 rmmod nvme_fabrics 00:12:12.266 rmmod nvme_keyring 00:12:12.266 22:57:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:12.266 22:57:44 -- nvmf/common.sh@123 -- # set -e 00:12:12.266 22:57:44 -- nvmf/common.sh@124 -- # return 0 00:12:12.266 22:57:44 -- nvmf/common.sh@477 -- # '[' -n 3071831 ']' 00:12:12.266 22:57:44 -- nvmf/common.sh@478 -- # killprocess 3071831 00:12:12.266 22:57:44 -- common/autotest_common.sh@926 -- # '[' -z 3071831 ']' 00:12:12.266 22:57:44 -- common/autotest_common.sh@930 -- # kill -0 3071831 00:12:12.266 22:57:44 -- common/autotest_common.sh@931 -- # uname 00:12:12.266 22:57:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:12.266 22:57:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm=
3071831 00:12:12.266 22:57:44 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:12.266 22:57:44 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:12.266 22:57:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3071831' 00:12:12.266 killing process with pid 3071831 00:12:12.266 22:57:44 -- common/autotest_common.sh@945 -- # kill 3071831 00:12:12.266 22:57:44 -- common/autotest_common.sh@950 -- # wait 3071831 00:12:12.266 22:57:44 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:12.266 22:57:44 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:12.266 22:57:44 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:12.266 22:57:44 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:12.266 22:57:44 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:12.266 22:57:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:12.266 22:57:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:12.266 22:57:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:14.838 22:57:46 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:12:14.838 00:12:14.838 real 4m2.076s 00:12:14.838 user 15m8.806s 00:12:14.838 sys 0m39.887s 00:12:14.838 22:57:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:14.838 22:57:46 -- common/autotest_common.sh@10 -- # set +x 00:12:14.838 ************************************ 00:12:14.838 END TEST nvmf_connect_disconnect 00:12:14.838 ************************************ 00:12:14.838 22:57:46 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:14.838 22:57:46 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:14.838 22:57:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:14.838 22:57:46 -- common/autotest_common.sh@10 -- # set +x 00:12:14.838 ************************************ 00:12:14.838 
START TEST nvmf_multitarget 00:12:14.838 ************************************ 00:12:14.838 22:57:46 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:14.838 * Looking for test storage... 00:12:14.838 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:14.838 22:57:46 -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:14.838 22:57:46 -- nvmf/common.sh@7 -- # uname -s 00:12:14.838 22:57:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:14.838 22:57:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:14.838 22:57:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:14.838 22:57:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:14.838 22:57:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:14.838 22:57:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:14.838 22:57:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:14.838 22:57:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:14.838 22:57:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:14.838 22:57:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:14.838 22:57:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:12:14.838 22:57:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:12:14.838 22:57:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:14.838 22:57:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:14.838 22:57:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:14.838 22:57:46 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:14.838 22:57:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:14.839 22:57:46 -- 
scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:14.839 22:57:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:14.839 22:57:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.839 22:57:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.839 22:57:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.839 22:57:46 -- paths/export.sh@5 -- # export PATH 00:12:14.839 22:57:46 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.839 22:57:46 -- nvmf/common.sh@46 -- # : 0 00:12:14.839 22:57:46 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:14.839 22:57:46 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:14.839 22:57:46 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:14.839 22:57:46 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:14.839 22:57:46 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:14.839 22:57:46 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:14.839 22:57:46 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:14.839 22:57:46 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:14.839 22:57:46 -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:14.839 22:57:46 -- target/multitarget.sh@15 -- # nvmftestinit 00:12:14.839 22:57:46 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:14.839 22:57:46 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:14.839 22:57:46 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:14.839 22:57:46 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:14.839 22:57:46 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:14.839 22:57:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:14.839 22:57:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:14.839 22:57:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:14.839 22:57:46 -- 
nvmf/common.sh@402 -- # [[ phy != virt ]] 00:12:14.839 22:57:46 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:12:14.839 22:57:46 -- nvmf/common.sh@284 -- # xtrace_disable 00:12:14.839 22:57:46 -- common/autotest_common.sh@10 -- # set +x 00:12:21.412 22:57:53 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:21.412 22:57:53 -- nvmf/common.sh@290 -- # pci_devs=() 00:12:21.412 22:57:53 -- nvmf/common.sh@290 -- # local -a pci_devs 00:12:21.412 22:57:53 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:12:21.412 22:57:53 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:12:21.412 22:57:53 -- nvmf/common.sh@292 -- # pci_drivers=() 00:12:21.412 22:57:53 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:12:21.412 22:57:53 -- nvmf/common.sh@294 -- # net_devs=() 00:12:21.412 22:57:53 -- nvmf/common.sh@294 -- # local -ga net_devs 00:12:21.412 22:57:53 -- nvmf/common.sh@295 -- # e810=() 00:12:21.412 22:57:53 -- nvmf/common.sh@295 -- # local -ga e810 00:12:21.412 22:57:53 -- nvmf/common.sh@296 -- # x722=() 00:12:21.412 22:57:53 -- nvmf/common.sh@296 -- # local -ga x722 00:12:21.412 22:57:53 -- nvmf/common.sh@297 -- # mlx=() 00:12:21.412 22:57:53 -- nvmf/common.sh@297 -- # local -ga mlx 00:12:21.412 22:57:53 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:21.412 22:57:53 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:21.412 22:57:53 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:21.413 22:57:53 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:21.413 22:57:53 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:21.413 22:57:53 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:21.413 22:57:53 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:21.413 22:57:53 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:21.413 22:57:53 -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:21.413 22:57:53 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:21.413 22:57:53 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:21.413 22:57:53 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:12:21.413 22:57:53 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:12:21.413 22:57:53 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:12:21.413 22:57:53 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:12:21.413 22:57:53 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:12:21.413 22:57:53 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:12:21.413 22:57:53 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:21.413 22:57:53 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:21.413 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:21.413 22:57:53 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:21.413 22:57:53 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:21.413 22:57:53 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:21.413 22:57:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:21.413 22:57:53 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:21.413 22:57:53 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:21.413 22:57:53 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:21.413 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:21.413 22:57:53 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:21.413 22:57:53 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:21.413 22:57:53 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:21.413 22:57:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:21.413 22:57:53 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:21.413 22:57:53 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:12:21.413 22:57:53 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:12:21.413 22:57:53 -- 
nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:12:21.413 22:57:53 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:21.413 22:57:53 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:21.413 22:57:53 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:21.413 22:57:53 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:21.413 22:57:53 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:21.413 Found net devices under 0000:af:00.0: cvl_0_0 00:12:21.413 22:57:53 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:12:21.413 22:57:53 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:21.413 22:57:53 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:21.413 22:57:53 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:21.413 22:57:53 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:21.413 22:57:53 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:21.413 Found net devices under 0000:af:00.1: cvl_0_1 00:12:21.413 22:57:53 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:12:21.413 22:57:53 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:12:21.413 22:57:53 -- nvmf/common.sh@402 -- # is_hw=yes 00:12:21.413 22:57:53 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:12:21.413 22:57:53 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:12:21.413 22:57:53 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:12:21.413 22:57:53 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:21.413 22:57:53 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:21.413 22:57:53 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:21.413 22:57:53 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:12:21.413 22:57:53 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:21.413 22:57:53 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:21.413 22:57:53 -- 
nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:12:21.413 22:57:53 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:21.413 22:57:53 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:21.413 22:57:53 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:12:21.413 22:57:53 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:12:21.413 22:57:53 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:12:21.413 22:57:53 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:21.413 22:57:53 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:21.413 22:57:53 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:21.413 22:57:53 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:12:21.413 22:57:53 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:21.413 22:57:53 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:21.413 22:57:53 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:21.413 22:57:53 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:12:21.413 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:21.413 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 00:12:21.413 00:12:21.413 --- 10.0.0.2 ping statistics --- 00:12:21.413 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:21.413 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:12:21.413 22:57:53 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:21.413 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:21.413 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.178 ms 00:12:21.413 00:12:21.413 --- 10.0.0.1 ping statistics --- 00:12:21.413 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:21.413 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:12:21.413 22:57:53 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:21.413 22:57:53 -- nvmf/common.sh@410 -- # return 0 00:12:21.413 22:57:53 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:21.413 22:57:53 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:21.413 22:57:53 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:21.413 22:57:53 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:21.413 22:57:53 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:21.413 22:57:53 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:21.413 22:57:53 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:21.413 22:57:53 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:21.413 22:57:53 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:21.413 22:57:53 -- common/autotest_common.sh@712 -- # xtrace_disable 00:12:21.413 22:57:53 -- common/autotest_common.sh@10 -- # set +x 00:12:21.413 22:57:53 -- nvmf/common.sh@469 -- # nvmfpid=3117963 00:12:21.413 22:57:53 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:21.413 22:57:53 -- nvmf/common.sh@470 -- # waitforlisten 3117963 00:12:21.413 22:57:53 -- common/autotest_common.sh@819 -- # '[' -z 3117963 ']' 00:12:21.413 22:57:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:21.413 22:57:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:21.413 22:57:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:12:21.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:21.413 22:57:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:21.413 22:57:53 -- common/autotest_common.sh@10 -- # set +x 00:12:21.413 [2024-07-24 22:57:53.503269] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:12:21.413 [2024-07-24 22:57:53.503316] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:21.413 EAL: No free 2048 kB hugepages reported on node 1 00:12:21.413 [2024-07-24 22:57:53.579422] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:21.413 [2024-07-24 22:57:53.616012] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:21.413 [2024-07-24 22:57:53.616134] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:21.413 [2024-07-24 22:57:53.616143] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:21.413 [2024-07-24 22:57:53.616152] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:21.413 [2024-07-24 22:57:53.616202] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:21.413 [2024-07-24 22:57:53.616299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:21.413 [2024-07-24 22:57:53.616363] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:21.413 [2024-07-24 22:57:53.616365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:21.982 22:57:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:21.982 22:57:54 -- common/autotest_common.sh@852 -- # return 0 00:12:21.982 22:57:54 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:21.982 22:57:54 -- common/autotest_common.sh@718 -- # xtrace_disable 00:12:21.982 22:57:54 -- common/autotest_common.sh@10 -- # set +x 00:12:21.982 22:57:54 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:21.982 22:57:54 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:21.982 22:57:54 -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:21.982 22:57:54 -- target/multitarget.sh@21 -- # jq length 00:12:22.241 22:57:54 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:22.241 22:57:54 -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:22.241 "nvmf_tgt_1" 00:12:22.241 22:57:54 -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:22.241 "nvmf_tgt_2" 00:12:22.241 22:57:54 -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:22.241 22:57:54 -- target/multitarget.sh@28 -- # jq length 00:12:22.500 
22:57:54 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:22.500 22:57:54 -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:22.500 true 00:12:22.500 22:57:54 -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:22.759 true 00:12:22.759 22:57:54 -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:22.759 22:57:54 -- target/multitarget.sh@35 -- # jq length 00:12:22.759 22:57:55 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:22.759 22:57:55 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:22.759 22:57:55 -- target/multitarget.sh@41 -- # nvmftestfini 00:12:22.759 22:57:55 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:22.759 22:57:55 -- nvmf/common.sh@116 -- # sync 00:12:22.759 22:57:55 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:22.759 22:57:55 -- nvmf/common.sh@119 -- # set +e 00:12:22.759 22:57:55 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:22.759 22:57:55 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:22.759 rmmod nvme_tcp 00:12:22.759 rmmod nvme_fabrics 00:12:22.759 rmmod nvme_keyring 00:12:22.759 22:57:55 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:22.759 22:57:55 -- nvmf/common.sh@123 -- # set -e 00:12:22.759 22:57:55 -- nvmf/common.sh@124 -- # return 0 00:12:22.759 22:57:55 -- nvmf/common.sh@477 -- # '[' -n 3117963 ']' 00:12:22.759 22:57:55 -- nvmf/common.sh@478 -- # killprocess 3117963 00:12:22.759 22:57:55 -- common/autotest_common.sh@926 -- # '[' -z 3117963 ']' 00:12:22.759 22:57:55 -- common/autotest_common.sh@930 -- # kill -0 3117963 00:12:22.759 22:57:55 -- common/autotest_common.sh@931 -- # uname 00:12:22.759 22:57:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 
00:12:22.759 22:57:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3117963 00:12:23.018 22:57:55 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:23.018 22:57:55 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:23.018 22:57:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3117963' 00:12:23.018 killing process with pid 3117963 00:12:23.018 22:57:55 -- common/autotest_common.sh@945 -- # kill 3117963 00:12:23.018 22:57:55 -- common/autotest_common.sh@950 -- # wait 3117963 00:12:23.019 22:57:55 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:23.019 22:57:55 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:23.019 22:57:55 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:23.019 22:57:55 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:23.019 22:57:55 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:23.019 22:57:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:23.019 22:57:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:23.019 22:57:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:25.558 22:57:57 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:12:25.558 00:12:25.558 real 0m10.697s 00:12:25.558 user 0m9.388s 00:12:25.558 sys 0m5.571s 00:12:25.558 22:57:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:25.558 22:57:57 -- common/autotest_common.sh@10 -- # set +x 00:12:25.558 ************************************ 00:12:25.558 END TEST nvmf_multitarget 00:12:25.558 ************************************ 00:12:25.558 22:57:57 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:25.558 22:57:57 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:25.558 22:57:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:25.558 22:57:57 -- common/autotest_common.sh@10 -- # set +x 
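[Editorial note: the nvmf_multitarget run above creates nvmf_tgt_1 and nvmf_tgt_2 via multitarget_rpc.py, verifies with jq that the target count went from 1 to 3, deletes both, and verifies the count drops back to 1. A minimal bash sketch of that bookkeeping follows; the array is a stand-in for the SPDK target list (the real test drives nvmf_create_target/nvmf_delete_target over RPC), so names here are illustrative only.]

```shell
#!/usr/bin/env bash
# Stand-in for the SPDK target list; one default target exists at start.
targets=("default")

# nvmf_create_target -n nvmf_tgt_1 / nvmf_tgt_2 (simulated as array appends)
targets+=("nvmf_tgt_1" "nvmf_tgt_2")
# Mirrors the log's check: '[' 3 '!=' 3 ']' must not fire.
[ "${#targets[@]}" -eq 3 ] || { echo "expected 3 targets" >&2; exit 1; }

# nvmf_delete_target -n nvmf_tgt_1 / nvmf_tgt_2 (simulated as unsets)
unset 'targets[1]' 'targets[2]'
# Back to a single target, as the final jq-length check expects.
[ "${#targets[@]}" -eq 1 ] || { echo "expected 1 target" >&2; exit 1; }

echo "multitarget round-trip ok: ${#targets[@]} target(s) left"
```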
00:12:25.558 ************************************ 00:12:25.558 START TEST nvmf_rpc 00:12:25.558 ************************************ 00:12:25.558 22:57:57 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:25.558 * Looking for test storage... 00:12:25.558 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:25.558 22:57:57 -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:25.558 22:57:57 -- nvmf/common.sh@7 -- # uname -s 00:12:25.558 22:57:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:25.558 22:57:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:25.558 22:57:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:25.558 22:57:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:25.558 22:57:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:25.558 22:57:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:25.558 22:57:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:25.558 22:57:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:25.558 22:57:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:25.558 22:57:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:25.558 22:57:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:12:25.558 22:57:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:12:25.558 22:57:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:25.558 22:57:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:25.558 22:57:57 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:25.558 22:57:57 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:25.558 22:57:57 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 
00:12:25.558 22:57:57 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:25.558 22:57:57 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:25.558 22:57:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.558 22:57:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.558 22:57:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.558 22:57:57 -- paths/export.sh@5 -- # export PATH 00:12:25.558 22:57:57 -- 
paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.558 22:57:57 -- nvmf/common.sh@46 -- # : 0 00:12:25.558 22:57:57 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:25.558 22:57:57 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:25.558 22:57:57 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:25.558 22:57:57 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:25.558 22:57:57 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:25.558 22:57:57 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:25.558 22:57:57 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:25.558 22:57:57 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:25.558 22:57:57 -- target/rpc.sh@11 -- # loops=5 00:12:25.558 22:57:57 -- target/rpc.sh@23 -- # nvmftestinit 00:12:25.558 22:57:57 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:25.558 22:57:57 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:25.558 22:57:57 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:25.558 22:57:57 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:25.558 22:57:57 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:25.558 22:57:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:25.558 22:57:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:25.558 22:57:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:25.558 22:57:57 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:12:25.558 22:57:57 -- 
nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:12:25.558 22:57:57 -- nvmf/common.sh@284 -- # xtrace_disable 00:12:25.558 22:57:57 -- common/autotest_common.sh@10 -- # set +x 00:12:32.131 22:58:03 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:32.131 22:58:03 -- nvmf/common.sh@290 -- # pci_devs=() 00:12:32.131 22:58:03 -- nvmf/common.sh@290 -- # local -a pci_devs 00:12:32.131 22:58:03 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:12:32.131 22:58:03 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:12:32.131 22:58:03 -- nvmf/common.sh@292 -- # pci_drivers=() 00:12:32.131 22:58:03 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:12:32.131 22:58:03 -- nvmf/common.sh@294 -- # net_devs=() 00:12:32.131 22:58:03 -- nvmf/common.sh@294 -- # local -ga net_devs 00:12:32.131 22:58:03 -- nvmf/common.sh@295 -- # e810=() 00:12:32.131 22:58:03 -- nvmf/common.sh@295 -- # local -ga e810 00:12:32.131 22:58:03 -- nvmf/common.sh@296 -- # x722=() 00:12:32.131 22:58:03 -- nvmf/common.sh@296 -- # local -ga x722 00:12:32.131 22:58:03 -- nvmf/common.sh@297 -- # mlx=() 00:12:32.131 22:58:03 -- nvmf/common.sh@297 -- # local -ga mlx 00:12:32.131 22:58:03 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:32.131 22:58:03 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:32.131 22:58:03 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:32.131 22:58:03 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:32.131 22:58:03 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:32.131 22:58:03 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:32.131 22:58:03 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:32.131 22:58:03 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:32.131 22:58:03 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:12:32.131 22:58:03 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:32.131 22:58:03 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:32.131 22:58:03 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:12:32.131 22:58:03 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:12:32.131 22:58:03 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:12:32.131 22:58:03 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:12:32.131 22:58:03 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:12:32.131 22:58:03 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:12:32.131 22:58:03 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:32.131 22:58:03 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:32.131 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:32.131 22:58:03 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:32.131 22:58:03 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:32.131 22:58:03 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:32.131 22:58:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:32.131 22:58:03 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:32.131 22:58:03 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:32.131 22:58:03 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:32.131 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:32.131 22:58:03 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:32.131 22:58:03 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:32.131 22:58:03 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:32.131 22:58:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:32.131 22:58:03 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:32.131 22:58:03 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:12:32.131 22:58:03 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:12:32.131 22:58:03 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:12:32.131 22:58:03 -- 
nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:32.131 22:58:03 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:32.131 22:58:03 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:32.131 22:58:03 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:32.131 22:58:03 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:32.131 Found net devices under 0000:af:00.0: cvl_0_0 00:12:32.131 22:58:03 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:12:32.131 22:58:03 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:32.131 22:58:03 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:32.131 22:58:03 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:32.131 22:58:03 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:32.131 22:58:03 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:32.131 Found net devices under 0000:af:00.1: cvl_0_1 00:12:32.131 22:58:03 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:12:32.131 22:58:03 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:12:32.131 22:58:03 -- nvmf/common.sh@402 -- # is_hw=yes 00:12:32.131 22:58:03 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:12:32.131 22:58:03 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:12:32.131 22:58:03 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:12:32.131 22:58:03 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:32.131 22:58:03 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:32.131 22:58:03 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:32.131 22:58:03 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:12:32.131 22:58:03 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:32.131 22:58:03 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:32.131 22:58:03 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:12:32.131 22:58:03 -- 
nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:32.131 22:58:03 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:32.131 22:58:03 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:12:32.131 22:58:03 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:12:32.131 22:58:03 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:12:32.131 22:58:03 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:32.131 22:58:03 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:32.131 22:58:03 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:32.131 22:58:03 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:12:32.131 22:58:03 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:32.131 22:58:03 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:32.131 22:58:03 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:32.131 22:58:03 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:12:32.131 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:32.131 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.296 ms 00:12:32.131 00:12:32.131 --- 10.0.0.2 ping statistics --- 00:12:32.131 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:32.131 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:12:32.131 22:58:03 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:32.131 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:32.131 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:12:32.131 00:12:32.131 --- 10.0.0.1 ping statistics --- 00:12:32.131 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:32.131 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:12:32.131 22:58:03 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:32.131 22:58:03 -- nvmf/common.sh@410 -- # return 0 00:12:32.131 22:58:03 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:32.131 22:58:03 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:32.131 22:58:03 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:32.131 22:58:03 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:32.132 22:58:03 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:32.132 22:58:03 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:32.132 22:58:03 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:32.132 22:58:03 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:32.132 22:58:03 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:32.132 22:58:03 -- common/autotest_common.sh@712 -- # xtrace_disable 00:12:32.132 22:58:03 -- common/autotest_common.sh@10 -- # set +x 00:12:32.132 22:58:03 -- nvmf/common.sh@469 -- # nvmfpid=3121789 00:12:32.132 22:58:03 -- nvmf/common.sh@470 -- # waitforlisten 3121789 00:12:32.132 22:58:03 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:32.132 22:58:03 -- common/autotest_common.sh@819 -- # '[' -z 3121789 ']' 00:12:32.132 22:58:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:32.132 22:58:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:32.132 22:58:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:12:32.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:32.132 22:58:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:32.132 22:58:03 -- common/autotest_common.sh@10 -- # set +x 00:12:32.132 [2024-07-24 22:58:04.033792] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:12:32.132 [2024-07-24 22:58:04.033849] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:32.132 EAL: No free 2048 kB hugepages reported on node 1 00:12:32.132 [2024-07-24 22:58:04.112749] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:32.132 [2024-07-24 22:58:04.153390] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:32.132 [2024-07-24 22:58:04.153499] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:32.132 [2024-07-24 22:58:04.153508] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:32.132 [2024-07-24 22:58:04.153517] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:32.132 [2024-07-24 22:58:04.153560] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:32.132 [2024-07-24 22:58:04.153676] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:32.132 [2024-07-24 22:58:04.153768] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:32.132 [2024-07-24 22:58:04.153770] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:32.701 22:58:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:32.701 22:58:04 -- common/autotest_common.sh@852 -- # return 0 00:12:32.701 22:58:04 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:32.701 22:58:04 -- common/autotest_common.sh@718 -- # xtrace_disable 00:12:32.701 22:58:04 -- common/autotest_common.sh@10 -- # set +x 00:12:32.701 22:58:04 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:32.701 22:58:04 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:32.701 22:58:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:32.701 22:58:04 -- common/autotest_common.sh@10 -- # set +x 00:12:32.701 22:58:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:32.701 22:58:04 -- target/rpc.sh@26 -- # stats='{ 00:12:32.701 "tick_rate": 2500000000, 00:12:32.701 "poll_groups": [ 00:12:32.701 { 00:12:32.701 "name": "nvmf_tgt_poll_group_0", 00:12:32.701 "admin_qpairs": 0, 00:12:32.701 "io_qpairs": 0, 00:12:32.701 "current_admin_qpairs": 0, 00:12:32.701 "current_io_qpairs": 0, 00:12:32.701 "pending_bdev_io": 0, 00:12:32.701 "completed_nvme_io": 0, 00:12:32.701 "transports": [] 00:12:32.701 }, 00:12:32.701 { 00:12:32.701 "name": "nvmf_tgt_poll_group_1", 00:12:32.701 "admin_qpairs": 0, 00:12:32.701 "io_qpairs": 0, 00:12:32.701 "current_admin_qpairs": 0, 00:12:32.701 "current_io_qpairs": 0, 00:12:32.701 "pending_bdev_io": 0, 00:12:32.701 "completed_nvme_io": 0, 00:12:32.701 "transports": [] 00:12:32.701 }, 00:12:32.701 { 00:12:32.701 "name": 
"nvmf_tgt_poll_group_2", 00:12:32.701 "admin_qpairs": 0, 00:12:32.701 "io_qpairs": 0, 00:12:32.701 "current_admin_qpairs": 0, 00:12:32.701 "current_io_qpairs": 0, 00:12:32.701 "pending_bdev_io": 0, 00:12:32.701 "completed_nvme_io": 0, 00:12:32.701 "transports": [] 00:12:32.701 }, 00:12:32.701 { 00:12:32.701 "name": "nvmf_tgt_poll_group_3", 00:12:32.701 "admin_qpairs": 0, 00:12:32.701 "io_qpairs": 0, 00:12:32.701 "current_admin_qpairs": 0, 00:12:32.701 "current_io_qpairs": 0, 00:12:32.701 "pending_bdev_io": 0, 00:12:32.701 "completed_nvme_io": 0, 00:12:32.701 "transports": [] 00:12:32.701 } 00:12:32.701 ] 00:12:32.701 }' 00:12:32.701 22:58:04 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:32.701 22:58:04 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:32.701 22:58:04 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:32.701 22:58:04 -- target/rpc.sh@15 -- # wc -l 00:12:32.701 22:58:04 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:32.701 22:58:04 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:32.701 22:58:04 -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:32.701 22:58:04 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:32.701 22:58:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:32.701 22:58:04 -- common/autotest_common.sh@10 -- # set +x 00:12:32.701 [2024-07-24 22:58:04.995392] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:32.701 22:58:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:32.701 22:58:05 -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:32.701 22:58:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:32.701 22:58:05 -- common/autotest_common.sh@10 -- # set +x 00:12:32.701 22:58:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:32.701 22:58:05 -- target/rpc.sh@33 -- # stats='{ 00:12:32.701 "tick_rate": 2500000000, 00:12:32.701 "poll_groups": [ 00:12:32.701 { 00:12:32.701 "name": 
"nvmf_tgt_poll_group_0", 00:12:32.701 "admin_qpairs": 0, 00:12:32.701 "io_qpairs": 0, 00:12:32.701 "current_admin_qpairs": 0, 00:12:32.701 "current_io_qpairs": 0, 00:12:32.701 "pending_bdev_io": 0, 00:12:32.701 "completed_nvme_io": 0, 00:12:32.701 "transports": [ 00:12:32.701 { 00:12:32.701 "trtype": "TCP" 00:12:32.701 } 00:12:32.701 ] 00:12:32.701 }, 00:12:32.701 { 00:12:32.701 "name": "nvmf_tgt_poll_group_1", 00:12:32.701 "admin_qpairs": 0, 00:12:32.701 "io_qpairs": 0, 00:12:32.701 "current_admin_qpairs": 0, 00:12:32.701 "current_io_qpairs": 0, 00:12:32.701 "pending_bdev_io": 0, 00:12:32.701 "completed_nvme_io": 0, 00:12:32.701 "transports": [ 00:12:32.701 { 00:12:32.701 "trtype": "TCP" 00:12:32.701 } 00:12:32.701 ] 00:12:32.701 }, 00:12:32.701 { 00:12:32.701 "name": "nvmf_tgt_poll_group_2", 00:12:32.701 "admin_qpairs": 0, 00:12:32.701 "io_qpairs": 0, 00:12:32.701 "current_admin_qpairs": 0, 00:12:32.701 "current_io_qpairs": 0, 00:12:32.701 "pending_bdev_io": 0, 00:12:32.701 "completed_nvme_io": 0, 00:12:32.701 "transports": [ 00:12:32.701 { 00:12:32.701 "trtype": "TCP" 00:12:32.701 } 00:12:32.701 ] 00:12:32.701 }, 00:12:32.701 { 00:12:32.701 "name": "nvmf_tgt_poll_group_3", 00:12:32.701 "admin_qpairs": 0, 00:12:32.701 "io_qpairs": 0, 00:12:32.701 "current_admin_qpairs": 0, 00:12:32.701 "current_io_qpairs": 0, 00:12:32.701 "pending_bdev_io": 0, 00:12:32.701 "completed_nvme_io": 0, 00:12:32.701 "transports": [ 00:12:32.701 { 00:12:32.701 "trtype": "TCP" 00:12:32.701 } 00:12:32.701 ] 00:12:32.701 } 00:12:32.701 ] 00:12:32.701 }' 00:12:32.701 22:58:05 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:32.701 22:58:05 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:32.701 22:58:05 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:32.701 22:58:05 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:32.701 22:58:05 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:32.701 22:58:05 -- target/rpc.sh@36 -- # jsum 
'.poll_groups[].io_qpairs' 00:12:32.701 22:58:05 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:32.701 22:58:05 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:32.701 22:58:05 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:32.701 22:58:05 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:32.701 22:58:05 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:32.701 22:58:05 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:32.701 22:58:05 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:32.701 22:58:05 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:32.701 22:58:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:32.701 22:58:05 -- common/autotest_common.sh@10 -- # set +x 00:12:32.961 Malloc1 00:12:32.961 22:58:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:32.961 22:58:05 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:32.961 22:58:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:32.961 22:58:05 -- common/autotest_common.sh@10 -- # set +x 00:12:32.961 22:58:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:32.961 22:58:05 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:32.961 22:58:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:32.961 22:58:05 -- common/autotest_common.sh@10 -- # set +x 00:12:32.961 22:58:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:32.961 22:58:05 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:32.961 22:58:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:32.961 22:58:05 -- common/autotest_common.sh@10 -- # set +x 00:12:32.961 22:58:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:32.961 22:58:05 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:12:32.961 22:58:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:32.961 22:58:05 -- common/autotest_common.sh@10 -- # set +x 00:12:32.961 [2024-07-24 22:58:05.178404] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:32.961 22:58:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:32.961 22:58:05 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -a 10.0.0.2 -s 4420 00:12:32.961 22:58:05 -- common/autotest_common.sh@640 -- # local es=0 00:12:32.961 22:58:05 -- common/autotest_common.sh@642 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -a 10.0.0.2 -s 4420 00:12:32.961 22:58:05 -- common/autotest_common.sh@628 -- # local arg=nvme 00:12:32.961 22:58:05 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:32.961 22:58:05 -- common/autotest_common.sh@632 -- # type -t nvme 00:12:32.961 22:58:05 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:32.961 22:58:05 -- common/autotest_common.sh@634 -- # type -P nvme 00:12:32.961 22:58:05 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:32.961 22:58:05 -- common/autotest_common.sh@634 -- # arg=/usr/sbin/nvme 00:12:32.961 22:58:05 -- common/autotest_common.sh@634 -- # [[ -x /usr/sbin/nvme ]] 00:12:32.961 22:58:05 -- common/autotest_common.sh@643 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -q 
nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -a 10.0.0.2 -s 4420 00:12:32.961 [2024-07-24 22:58:05.212945] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e' 00:12:32.961 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:32.961 could not add new controller: failed to write to nvme-fabrics device 00:12:32.961 22:58:05 -- common/autotest_common.sh@643 -- # es=1 00:12:32.961 22:58:05 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:12:32.961 22:58:05 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:12:32.961 22:58:05 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:12:32.961 22:58:05 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:12:32.961 22:58:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:32.961 22:58:05 -- common/autotest_common.sh@10 -- # set +x 00:12:32.961 22:58:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:32.961 22:58:05 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:34.340 22:58:06 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:34.340 22:58:06 -- common/autotest_common.sh@1177 -- # local i=0 00:12:34.340 22:58:06 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:34.340 22:58:06 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:34.340 22:58:06 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:36.245 22:58:08 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:36.245 22:58:08 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:36.245 22:58:08 -- common/autotest_common.sh@1186 -- 
# grep -c SPDKISFASTANDAWESOME 00:12:36.245 22:58:08 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:36.245 22:58:08 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:36.245 22:58:08 -- common/autotest_common.sh@1187 -- # return 0 00:12:36.245 22:58:08 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:36.504 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:36.504 22:58:08 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:36.504 22:58:08 -- common/autotest_common.sh@1198 -- # local i=0 00:12:36.504 22:58:08 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:36.504 22:58:08 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:36.504 22:58:08 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:36.504 22:58:08 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:36.504 22:58:08 -- common/autotest_common.sh@1210 -- # return 0 00:12:36.504 22:58:08 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:12:36.504 22:58:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:36.504 22:58:08 -- common/autotest_common.sh@10 -- # set +x 00:12:36.504 22:58:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:36.504 22:58:08 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:36.504 22:58:08 -- common/autotest_common.sh@640 -- # local es=0 00:12:36.505 22:58:08 -- common/autotest_common.sh@642 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 
10.0.0.2 -s 4420 00:12:36.505 22:58:08 -- common/autotest_common.sh@628 -- # local arg=nvme 00:12:36.505 22:58:08 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:36.505 22:58:08 -- common/autotest_common.sh@632 -- # type -t nvme 00:12:36.505 22:58:08 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:36.505 22:58:08 -- common/autotest_common.sh@634 -- # type -P nvme 00:12:36.505 22:58:08 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:36.505 22:58:08 -- common/autotest_common.sh@634 -- # arg=/usr/sbin/nvme 00:12:36.505 22:58:08 -- common/autotest_common.sh@634 -- # [[ -x /usr/sbin/nvme ]] 00:12:36.505 22:58:08 -- common/autotest_common.sh@643 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:36.505 [2024-07-24 22:58:08.771159] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e' 00:12:36.505 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:36.505 could not add new controller: failed to write to nvme-fabrics device 00:12:36.505 22:58:08 -- common/autotest_common.sh@643 -- # es=1 00:12:36.505 22:58:08 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:12:36.505 22:58:08 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:12:36.505 22:58:08 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:12:36.505 22:58:08 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:36.505 22:58:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:36.505 22:58:08 -- common/autotest_common.sh@10 -- # set +x 00:12:36.505 22:58:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:36.505 22:58:08 -- target/rpc.sh@73 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:37.917 22:58:10 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:37.917 22:58:10 -- common/autotest_common.sh@1177 -- # local i=0 00:12:37.917 22:58:10 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:37.917 22:58:10 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:37.917 22:58:10 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:39.823 22:58:12 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:39.823 22:58:12 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:39.823 22:58:12 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:12:39.823 22:58:12 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:39.823 22:58:12 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:39.823 22:58:12 -- common/autotest_common.sh@1187 -- # return 0 00:12:39.823 22:58:12 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:40.081 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:40.081 22:58:12 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:40.081 22:58:12 -- common/autotest_common.sh@1198 -- # local i=0 00:12:40.081 22:58:12 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:40.081 22:58:12 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:40.082 22:58:12 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:40.082 22:58:12 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:40.082 22:58:12 -- common/autotest_common.sh@1210 -- # return 0 00:12:40.082 22:58:12 -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:40.082 22:58:12 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:12:40.082 22:58:12 -- common/autotest_common.sh@10 -- # set +x 00:12:40.082 22:58:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:40.082 22:58:12 -- target/rpc.sh@81 -- # seq 1 5 00:12:40.082 22:58:12 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:40.082 22:58:12 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:40.082 22:58:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:40.082 22:58:12 -- common/autotest_common.sh@10 -- # set +x 00:12:40.082 22:58:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:40.082 22:58:12 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:40.082 22:58:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:40.082 22:58:12 -- common/autotest_common.sh@10 -- # set +x 00:12:40.082 [2024-07-24 22:58:12.327271] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:40.082 22:58:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:40.082 22:58:12 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:40.082 22:58:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:40.082 22:58:12 -- common/autotest_common.sh@10 -- # set +x 00:12:40.082 22:58:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:40.082 22:58:12 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:40.082 22:58:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:40.082 22:58:12 -- common/autotest_common.sh@10 -- # set +x 00:12:40.082 22:58:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:40.082 22:58:12 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 
10.0.0.2 -s 4420 00:12:41.459 22:58:13 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:41.459 22:58:13 -- common/autotest_common.sh@1177 -- # local i=0 00:12:41.459 22:58:13 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:41.459 22:58:13 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:41.459 22:58:13 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:43.370 22:58:15 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:43.370 22:58:15 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:43.370 22:58:15 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:12:43.370 22:58:15 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:43.370 22:58:15 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:43.370 22:58:15 -- common/autotest_common.sh@1187 -- # return 0 00:12:43.370 22:58:15 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:43.370 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:43.370 22:58:15 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:43.370 22:58:15 -- common/autotest_common.sh@1198 -- # local i=0 00:12:43.370 22:58:15 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:43.370 22:58:15 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:43.629 22:58:15 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:43.629 22:58:15 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:43.629 22:58:15 -- common/autotest_common.sh@1210 -- # return 0 00:12:43.629 22:58:15 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:43.629 22:58:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:43.629 22:58:15 -- common/autotest_common.sh@10 -- # set +x 00:12:43.629 22:58:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
00:12:43.629 22:58:15 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:43.629 22:58:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:43.629 22:58:15 -- common/autotest_common.sh@10 -- # set +x 00:12:43.629 22:58:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:43.629 22:58:15 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:43.629 22:58:15 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:43.629 22:58:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:43.629 22:58:15 -- common/autotest_common.sh@10 -- # set +x 00:12:43.629 22:58:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:43.629 22:58:15 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:43.629 22:58:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:43.629 22:58:15 -- common/autotest_common.sh@10 -- # set +x 00:12:43.629 [2024-07-24 22:58:15.858586] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:43.629 22:58:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:43.629 22:58:15 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:43.629 22:58:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:43.629 22:58:15 -- common/autotest_common.sh@10 -- # set +x 00:12:43.629 22:58:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:43.629 22:58:15 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:43.629 22:58:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:43.629 22:58:15 -- common/autotest_common.sh@10 -- # set +x 00:12:43.629 22:58:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:43.629 22:58:15 -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:45.006 22:58:17 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:45.006 22:58:17 -- common/autotest_common.sh@1177 -- # local i=0 00:12:45.006 22:58:17 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:45.006 22:58:17 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:45.006 22:58:17 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:46.913 22:58:19 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:46.913 22:58:19 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:46.913 22:58:19 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:12:46.913 22:58:19 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:46.913 22:58:19 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:46.913 22:58:19 -- common/autotest_common.sh@1187 -- # return 0 00:12:46.913 22:58:19 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:46.913 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:46.913 22:58:19 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:46.913 22:58:19 -- common/autotest_common.sh@1198 -- # local i=0 00:12:46.913 22:58:19 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:46.913 22:58:19 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:46.913 22:58:19 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:46.913 22:58:19 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:46.913 22:58:19 -- common/autotest_common.sh@1210 -- # return 0 00:12:46.913 22:58:19 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:46.913 22:58:19 -- common/autotest_common.sh@551 
-- # xtrace_disable 00:12:46.913 22:58:19 -- common/autotest_common.sh@10 -- # set +x 00:12:46.913 22:58:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:46.913 22:58:19 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:46.913 22:58:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:46.913 22:58:19 -- common/autotest_common.sh@10 -- # set +x 00:12:46.913 22:58:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:46.913 22:58:19 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:46.913 22:58:19 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:46.913 22:58:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:46.913 22:58:19 -- common/autotest_common.sh@10 -- # set +x 00:12:46.913 22:58:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:46.913 22:58:19 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:46.913 22:58:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:46.913 22:58:19 -- common/autotest_common.sh@10 -- # set +x 00:12:47.172 [2024-07-24 22:58:19.344374] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:47.172 22:58:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:47.172 22:58:19 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:47.172 22:58:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:47.172 22:58:19 -- common/autotest_common.sh@10 -- # set +x 00:12:47.172 22:58:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:47.172 22:58:19 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:47.172 22:58:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:47.172 22:58:19 -- common/autotest_common.sh@10 -- # set +x 00:12:47.172 22:58:19 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:47.172 22:58:19 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:48.550 22:58:20 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:48.550 22:58:20 -- common/autotest_common.sh@1177 -- # local i=0 00:12:48.551 22:58:20 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:48.551 22:58:20 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:48.551 22:58:20 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:50.455 22:58:22 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:50.455 22:58:22 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:50.455 22:58:22 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:12:50.455 22:58:22 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:50.455 22:58:22 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:50.455 22:58:22 -- common/autotest_common.sh@1187 -- # return 0 00:12:50.455 22:58:22 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:50.714 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:50.714 22:58:22 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:50.714 22:58:22 -- common/autotest_common.sh@1198 -- # local i=0 00:12:50.714 22:58:22 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:50.714 22:58:22 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:50.714 22:58:22 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:50.714 22:58:22 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:50.714 22:58:22 -- common/autotest_common.sh@1210 -- # return 0 00:12:50.714 22:58:22 -- target/rpc.sh@93 -- # rpc_cmd 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:50.714 22:58:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:50.714 22:58:22 -- common/autotest_common.sh@10 -- # set +x 00:12:50.714 22:58:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:50.714 22:58:22 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:50.714 22:58:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:50.714 22:58:22 -- common/autotest_common.sh@10 -- # set +x 00:12:50.714 22:58:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:50.714 22:58:22 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:50.714 22:58:22 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:50.714 22:58:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:50.714 22:58:22 -- common/autotest_common.sh@10 -- # set +x 00:12:50.714 22:58:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:50.714 22:58:22 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:50.714 22:58:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:50.714 22:58:22 -- common/autotest_common.sh@10 -- # set +x 00:12:50.714 [2024-07-24 22:58:22.966266] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:50.714 22:58:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:50.714 22:58:22 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:50.714 22:58:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:50.714 22:58:22 -- common/autotest_common.sh@10 -- # set +x 00:12:50.714 22:58:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:50.714 22:58:22 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:50.714 22:58:22 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:12:50.714 22:58:22 -- common/autotest_common.sh@10 -- # set +x 00:12:50.714 22:58:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:50.714 22:58:22 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:52.091 22:58:24 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:52.091 22:58:24 -- common/autotest_common.sh@1177 -- # local i=0 00:12:52.091 22:58:24 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:52.091 22:58:24 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:52.091 22:58:24 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:53.996 22:58:26 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:53.996 22:58:26 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:53.996 22:58:26 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:12:53.996 22:58:26 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:53.996 22:58:26 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:53.996 22:58:26 -- common/autotest_common.sh@1187 -- # return 0 00:12:53.996 22:58:26 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:53.996 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:53.996 22:58:26 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:53.996 22:58:26 -- common/autotest_common.sh@1198 -- # local i=0 00:12:53.996 22:58:26 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:53.996 22:58:26 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:53.997 22:58:26 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:53.997 22:58:26 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:54.255 
22:58:26 -- common/autotest_common.sh@1210 -- # return 0 00:12:54.255 22:58:26 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:54.255 22:58:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:54.256 22:58:26 -- common/autotest_common.sh@10 -- # set +x 00:12:54.256 22:58:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:54.256 22:58:26 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:54.256 22:58:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:54.256 22:58:26 -- common/autotest_common.sh@10 -- # set +x 00:12:54.256 22:58:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:54.256 22:58:26 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:54.256 22:58:26 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:54.256 22:58:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:54.256 22:58:26 -- common/autotest_common.sh@10 -- # set +x 00:12:54.256 22:58:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:54.256 22:58:26 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:54.256 22:58:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:54.256 22:58:26 -- common/autotest_common.sh@10 -- # set +x 00:12:54.256 [2024-07-24 22:58:26.473610] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:54.256 22:58:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:54.256 22:58:26 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:54.256 22:58:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:54.256 22:58:26 -- common/autotest_common.sh@10 -- # set +x 00:12:54.256 22:58:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:54.256 22:58:26 -- target/rpc.sh@85 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:54.256 22:58:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:54.256 22:58:26 -- common/autotest_common.sh@10 -- # set +x 00:12:54.256 22:58:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:54.256 22:58:26 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:55.669 22:58:27 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:55.669 22:58:27 -- common/autotest_common.sh@1177 -- # local i=0 00:12:55.669 22:58:27 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:55.669 22:58:27 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:55.669 22:58:27 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:57.574 22:58:29 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:57.574 22:58:29 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:57.574 22:58:29 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:12:57.574 22:58:29 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:57.574 22:58:29 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:57.574 22:58:29 -- common/autotest_common.sh@1187 -- # return 0 00:12:57.574 22:58:29 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:57.574 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:57.574 22:58:29 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:57.574 22:58:29 -- common/autotest_common.sh@1198 -- # local i=0 00:12:57.574 22:58:29 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:57.574 22:58:29 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:57.574 22:58:29 -- common/autotest_common.sh@1206 -- # lsblk -l -o 
NAME,SERIAL 00:12:57.574 22:58:29 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:57.574 22:58:29 -- common/autotest_common.sh@1210 -- # return 0 00:12:57.574 22:58:29 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:57.574 22:58:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:57.574 22:58:29 -- common/autotest_common.sh@10 -- # set +x 00:12:57.574 22:58:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:57.574 22:58:29 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:57.574 22:58:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:57.574 22:58:29 -- common/autotest_common.sh@10 -- # set +x 00:12:57.574 22:58:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:57.574 22:58:29 -- target/rpc.sh@99 -- # seq 1 5 00:12:57.574 22:58:29 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:57.574 22:58:29 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:57.574 22:58:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:57.574 22:58:29 -- common/autotest_common.sh@10 -- # set +x 00:12:57.574 22:58:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:57.574 22:58:29 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:57.574 22:58:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:57.574 22:58:29 -- common/autotest_common.sh@10 -- # set +x 00:12:57.574 [2024-07-24 22:58:29.971352] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:57.574 22:58:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:57.574 22:58:29 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:57.574 22:58:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:57.574 22:58:29 -- 
common/autotest_common.sh@10 -- # set +x 00:12:57.574 22:58:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:57.574 22:58:29 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:57.574 22:58:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:57.574 22:58:29 -- common/autotest_common.sh@10 -- # set +x 00:12:57.574 22:58:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:57.574 22:58:29 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:57.574 22:58:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:57.574 22:58:29 -- common/autotest_common.sh@10 -- # set +x 00:12:57.574 22:58:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:57.574 22:58:29 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:57.574 22:58:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:57.574 22:58:29 -- common/autotest_common.sh@10 -- # set +x 00:12:57.833 22:58:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:57.834 22:58:30 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:57.834 22:58:30 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:57.834 22:58:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:57.834 22:58:30 -- common/autotest_common.sh@10 -- # set +x 00:12:57.834 22:58:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:57.834 22:58:30 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:57.834 22:58:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:57.834 22:58:30 -- common/autotest_common.sh@10 -- # set +x 00:12:57.834 [2024-07-24 22:58:30.019466] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:57.834 22:58:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:57.834 
22:58:30 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:57.834 22:58:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:57.834 22:58:30 -- common/autotest_common.sh@10 -- # set +x 00:12:57.834 22:58:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:57.834 22:58:30 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:57.834 22:58:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:57.834 22:58:30 -- common/autotest_common.sh@10 -- # set +x 00:12:57.834 22:58:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:57.834 22:58:30 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:57.834 22:58:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:57.834 22:58:30 -- common/autotest_common.sh@10 -- # set +x 00:12:57.834 22:58:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:57.834 22:58:30 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:57.834 22:58:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:57.834 22:58:30 -- common/autotest_common.sh@10 -- # set +x 00:12:57.834 22:58:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:57.834 22:58:30 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:57.834 22:58:30 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:57.834 22:58:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:57.834 22:58:30 -- common/autotest_common.sh@10 -- # set +x 00:12:57.834 22:58:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:57.834 22:58:30 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:57.834 22:58:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:57.834 22:58:30 -- common/autotest_common.sh@10 -- # set +x 00:12:57.834 
[2024-07-24 22:58:30.071653] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:57.834 22:58:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:57.834 22:58:30 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:57.834 22:58:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:57.834 22:58:30 -- common/autotest_common.sh@10 -- # set +x 00:12:57.834 22:58:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:57.834 22:58:30 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:57.834 22:58:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:57.834 22:58:30 -- common/autotest_common.sh@10 -- # set +x 00:12:57.834 22:58:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:57.834 22:58:30 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:57.834 22:58:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:57.834 22:58:30 -- common/autotest_common.sh@10 -- # set +x 00:12:57.834 22:58:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:57.834 22:58:30 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:57.834 22:58:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:57.834 22:58:30 -- common/autotest_common.sh@10 -- # set +x 00:12:57.834 22:58:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:57.834 22:58:30 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:57.834 22:58:30 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:57.834 22:58:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:57.834 22:58:30 -- common/autotest_common.sh@10 -- # set +x 00:12:57.834 22:58:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:57.834 22:58:30 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:57.834 22:58:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:57.834 22:58:30 -- common/autotest_common.sh@10 -- # set +x 00:12:57.834 [2024-07-24 22:58:30.123815] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:57.834 22:58:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:57.834 22:58:30 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:57.834 22:58:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:57.834 22:58:30 -- common/autotest_common.sh@10 -- # set +x 00:12:57.834 22:58:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:57.834 22:58:30 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:57.834 22:58:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:57.834 22:58:30 -- common/autotest_common.sh@10 -- # set +x 00:12:57.834 22:58:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:57.834 22:58:30 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:57.834 22:58:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:57.834 22:58:30 -- common/autotest_common.sh@10 -- # set +x 00:12:57.834 22:58:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:57.834 22:58:30 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:57.834 22:58:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:57.834 22:58:30 -- common/autotest_common.sh@10 -- # set +x 00:12:57.834 22:58:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:57.834 22:58:30 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:57.834 22:58:30 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:57.834 22:58:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:57.834 22:58:30 
-- common/autotest_common.sh@10 -- # set +x 00:12:57.834 22:58:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:57.834 22:58:30 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:57.834 22:58:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:57.834 22:58:30 -- common/autotest_common.sh@10 -- # set +x 00:12:57.834 [2024-07-24 22:58:30.171957] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:57.834 22:58:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:57.834 22:58:30 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:57.834 22:58:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:57.834 22:58:30 -- common/autotest_common.sh@10 -- # set +x 00:12:57.834 22:58:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:57.834 22:58:30 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:57.834 22:58:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:57.834 22:58:30 -- common/autotest_common.sh@10 -- # set +x 00:12:57.834 22:58:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:57.834 22:58:30 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:57.834 22:58:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:57.834 22:58:30 -- common/autotest_common.sh@10 -- # set +x 00:12:57.834 22:58:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:57.834 22:58:30 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:57.834 22:58:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:57.834 22:58:30 -- common/autotest_common.sh@10 -- # set +x 00:12:57.834 22:58:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:57.834 22:58:30 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:12:57.834 22:58:30 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:12:57.834 22:58:30 -- common/autotest_common.sh@10 -- # set +x 00:12:57.834 22:58:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:57.834 22:58:30 -- target/rpc.sh@110 -- # stats='{ 00:12:57.834 "tick_rate": 2500000000, 00:12:57.834 "poll_groups": [ 00:12:57.834 { 00:12:57.834 "name": "nvmf_tgt_poll_group_0", 00:12:57.834 "admin_qpairs": 2, 00:12:57.834 "io_qpairs": 196, 00:12:57.834 "current_admin_qpairs": 0, 00:12:57.834 "current_io_qpairs": 0, 00:12:57.834 "pending_bdev_io": 0, 00:12:57.834 "completed_nvme_io": 198, 00:12:57.834 "transports": [ 00:12:57.834 { 00:12:57.834 "trtype": "TCP" 00:12:57.834 } 00:12:57.834 ] 00:12:57.834 }, 00:12:57.834 { 00:12:57.834 "name": "nvmf_tgt_poll_group_1", 00:12:57.834 "admin_qpairs": 2, 00:12:57.834 "io_qpairs": 196, 00:12:57.834 "current_admin_qpairs": 0, 00:12:57.834 "current_io_qpairs": 0, 00:12:57.834 "pending_bdev_io": 0, 00:12:57.834 "completed_nvme_io": 242, 00:12:57.834 "transports": [ 00:12:57.834 { 00:12:57.834 "trtype": "TCP" 00:12:57.834 } 00:12:57.834 ] 00:12:57.834 }, 00:12:57.834 { 00:12:57.834 "name": "nvmf_tgt_poll_group_2", 00:12:57.834 "admin_qpairs": 1, 00:12:57.834 "io_qpairs": 196, 00:12:57.834 "current_admin_qpairs": 0, 00:12:57.834 "current_io_qpairs": 0, 00:12:57.834 "pending_bdev_io": 0, 00:12:57.834 "completed_nvme_io": 393, 00:12:57.834 "transports": [ 00:12:57.834 { 00:12:57.834 "trtype": "TCP" 00:12:57.834 } 00:12:57.834 ] 00:12:57.834 }, 00:12:57.834 { 00:12:57.834 "name": "nvmf_tgt_poll_group_3", 00:12:57.834 "admin_qpairs": 2, 00:12:57.834 "io_qpairs": 196, 00:12:57.834 "current_admin_qpairs": 0, 00:12:57.834 "current_io_qpairs": 0, 00:12:57.834 "pending_bdev_io": 0, 00:12:57.834 "completed_nvme_io": 301, 00:12:57.834 "transports": [ 00:12:57.834 { 00:12:57.834 "trtype": "TCP" 00:12:57.834 } 00:12:57.834 ] 00:12:57.834 } 00:12:57.834 ] 00:12:57.834 }' 00:12:57.834 22:58:30 -- target/rpc.sh@112 -- # jsum 
'.poll_groups[].admin_qpairs' 00:12:57.834 22:58:30 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:57.834 22:58:30 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:57.835 22:58:30 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:58.094 22:58:30 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:58.094 22:58:30 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:58.094 22:58:30 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:58.094 22:58:30 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:58.094 22:58:30 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:58.094 22:58:30 -- target/rpc.sh@113 -- # (( 784 > 0 )) 00:12:58.094 22:58:30 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:12:58.094 22:58:30 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:58.094 22:58:30 -- target/rpc.sh@123 -- # nvmftestfini 00:12:58.094 22:58:30 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:58.094 22:58:30 -- nvmf/common.sh@116 -- # sync 00:12:58.094 22:58:30 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:58.094 22:58:30 -- nvmf/common.sh@119 -- # set +e 00:12:58.094 22:58:30 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:58.094 22:58:30 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:58.094 rmmod nvme_tcp 00:12:58.094 rmmod nvme_fabrics 00:12:58.094 rmmod nvme_keyring 00:12:58.094 22:58:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:58.094 22:58:30 -- nvmf/common.sh@123 -- # set -e 00:12:58.094 22:58:30 -- nvmf/common.sh@124 -- # return 0 00:12:58.094 22:58:30 -- nvmf/common.sh@477 -- # '[' -n 3121789 ']' 00:12:58.094 22:58:30 -- nvmf/common.sh@478 -- # killprocess 3121789 00:12:58.094 22:58:30 -- common/autotest_common.sh@926 -- # '[' -z 3121789 ']' 00:12:58.094 22:58:30 -- common/autotest_common.sh@930 -- # kill -0 3121789 00:12:58.094 22:58:30 -- common/autotest_common.sh@931 -- # uname 00:12:58.094 22:58:30 -- common/autotest_common.sh@931 -- # '[' 
Linux = Linux ']' 00:12:58.094 22:58:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3121789 00:12:58.094 22:58:30 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:58.094 22:58:30 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:58.094 22:58:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3121789' 00:12:58.094 killing process with pid 3121789 00:12:58.094 22:58:30 -- common/autotest_common.sh@945 -- # kill 3121789 00:12:58.094 22:58:30 -- common/autotest_common.sh@950 -- # wait 3121789 00:12:58.353 22:58:30 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:58.353 22:58:30 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:58.353 22:58:30 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:58.353 22:58:30 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:58.353 22:58:30 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:58.353 22:58:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:58.353 22:58:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:58.353 22:58:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:00.889 22:58:32 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:00.889 00:13:00.889 real 0m35.218s 00:13:00.889 user 1m46.364s 00:13:00.889 sys 0m7.887s 00:13:00.889 22:58:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:00.889 22:58:32 -- common/autotest_common.sh@10 -- # set +x 00:13:00.889 ************************************ 00:13:00.889 END TEST nvmf_rpc 00:13:00.889 ************************************ 00:13:00.889 22:58:32 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:00.889 22:58:32 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:00.889 22:58:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:00.889 22:58:32 -- 
common/autotest_common.sh@10 -- # set +x 00:13:00.889 ************************************ 00:13:00.889 START TEST nvmf_invalid 00:13:00.889 ************************************ 00:13:00.889 22:58:32 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:00.889 * Looking for test storage... 00:13:00.889 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:00.889 22:58:32 -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:00.890 22:58:32 -- nvmf/common.sh@7 -- # uname -s 00:13:00.890 22:58:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:00.890 22:58:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:00.890 22:58:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:00.890 22:58:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:00.890 22:58:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:00.890 22:58:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:00.890 22:58:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:00.890 22:58:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:00.890 22:58:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:00.890 22:58:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:00.890 22:58:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:13:00.890 22:58:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:13:00.890 22:58:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:00.890 22:58:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:00.890 22:58:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:00.890 22:58:32 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:00.890 22:58:32 -- 
scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:00.890 22:58:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:00.890 22:58:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:00.890 22:58:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.890 22:58:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.890 22:58:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.890 22:58:32 -- 
paths/export.sh@5 -- # export PATH 00:13:00.890 22:58:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.890 22:58:32 -- nvmf/common.sh@46 -- # : 0 00:13:00.890 22:58:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:00.890 22:58:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:00.890 22:58:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:00.890 22:58:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:00.890 22:58:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:00.890 22:58:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:00.890 22:58:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:00.890 22:58:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:00.890 22:58:32 -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:00.890 22:58:32 -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:00.890 22:58:32 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:00.890 22:58:32 -- target/invalid.sh@14 -- # target=foobar 00:13:00.890 22:58:32 -- target/invalid.sh@16 -- # RANDOM=0 00:13:00.890 22:58:32 -- target/invalid.sh@34 -- # nvmftestinit 00:13:00.890 22:58:32 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:00.890 22:58:32 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:00.890 22:58:32 -- nvmf/common.sh@436 -- # prepare_net_devs 
00:13:00.890 22:58:32 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:00.890 22:58:32 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:00.890 22:58:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:00.890 22:58:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:00.890 22:58:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:00.890 22:58:32 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:00.890 22:58:32 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:00.890 22:58:32 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:00.890 22:58:32 -- common/autotest_common.sh@10 -- # set +x 00:13:07.459 22:58:38 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:07.459 22:58:38 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:07.459 22:58:38 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:07.459 22:58:38 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:07.459 22:58:38 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:07.459 22:58:38 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:07.459 22:58:38 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:07.459 22:58:38 -- nvmf/common.sh@294 -- # net_devs=() 00:13:07.459 22:58:38 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:07.459 22:58:38 -- nvmf/common.sh@295 -- # e810=() 00:13:07.459 22:58:38 -- nvmf/common.sh@295 -- # local -ga e810 00:13:07.459 22:58:38 -- nvmf/common.sh@296 -- # x722=() 00:13:07.459 22:58:38 -- nvmf/common.sh@296 -- # local -ga x722 00:13:07.459 22:58:38 -- nvmf/common.sh@297 -- # mlx=() 00:13:07.459 22:58:38 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:07.459 22:58:38 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:07.459 22:58:38 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:07.459 22:58:38 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:07.459 22:58:38 -- nvmf/common.sh@305 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:07.459 22:58:38 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:07.459 22:58:38 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:07.459 22:58:38 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:07.459 22:58:38 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:07.459 22:58:38 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:07.459 22:58:38 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:07.459 22:58:38 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:07.459 22:58:38 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:07.459 22:58:38 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:07.459 22:58:38 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:07.459 22:58:38 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:07.459 22:58:38 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:07.459 22:58:38 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:07.459 22:58:38 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:07.459 22:58:38 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:07.459 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:07.459 22:58:38 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:07.459 22:58:38 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:07.459 22:58:38 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:07.459 22:58:38 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:07.459 22:58:38 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:07.459 22:58:38 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:07.459 22:58:38 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:07.459 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:07.459 22:58:38 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:07.459 
22:58:38 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:07.459 22:58:38 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:07.459 22:58:38 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:07.459 22:58:38 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:07.459 22:58:38 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:07.459 22:58:38 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:07.459 22:58:38 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:07.459 22:58:38 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:07.459 22:58:38 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:07.459 22:58:38 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:07.459 22:58:38 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:07.459 22:58:38 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:07.459 Found net devices under 0000:af:00.0: cvl_0_0 00:13:07.459 22:58:38 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:07.459 22:58:38 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:07.459 22:58:38 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:07.459 22:58:38 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:07.459 22:58:38 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:07.459 22:58:38 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:07.459 Found net devices under 0000:af:00.1: cvl_0_1 00:13:07.459 22:58:38 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:07.459 22:58:38 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:07.459 22:58:38 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:07.459 22:58:38 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:07.459 22:58:38 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:07.459 22:58:38 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:07.459 22:58:38 -- nvmf/common.sh@228 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:13:07.459 22:58:38 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:07.459 22:58:38 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:07.459 22:58:38 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:07.459 22:58:38 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:07.459 22:58:38 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:07.459 22:58:38 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:07.459 22:58:38 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:07.459 22:58:38 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:07.459 22:58:38 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:07.459 22:58:38 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:07.459 22:58:38 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:07.459 22:58:38 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:07.459 22:58:38 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:07.459 22:58:38 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:07.459 22:58:38 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:07.459 22:58:38 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:07.459 22:58:38 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:07.459 22:58:38 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:07.459 22:58:38 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:07.459 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:07.459 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms 00:13:07.459 00:13:07.459 --- 10.0.0.2 ping statistics --- 00:13:07.459 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:07.459 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:13:07.459 22:58:38 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:07.459 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:07.459 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.245 ms 00:13:07.459 00:13:07.459 --- 10.0.0.1 ping statistics --- 00:13:07.459 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:07.459 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:13:07.459 22:58:38 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:07.459 22:58:38 -- nvmf/common.sh@410 -- # return 0 00:13:07.459 22:58:38 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:07.459 22:58:38 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:07.459 22:58:38 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:07.459 22:58:38 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:07.459 22:58:38 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:07.459 22:58:38 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:07.459 22:58:38 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:07.459 22:58:38 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:07.459 22:58:38 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:07.459 22:58:38 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:07.459 22:58:38 -- common/autotest_common.sh@10 -- # set +x 00:13:07.459 22:58:38 -- nvmf/common.sh@469 -- # nvmfpid=3130119 00:13:07.459 22:58:38 -- nvmf/common.sh@470 -- # waitforlisten 3130119 00:13:07.459 22:58:38 -- common/autotest_common.sh@819 -- # '[' -z 3130119 ']' 00:13:07.459 22:58:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:07.460 22:58:38 -- 
common/autotest_common.sh@824 -- # local max_retries=100 00:13:07.460 22:58:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:07.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:07.460 22:58:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:07.460 22:58:38 -- common/autotest_common.sh@10 -- # set +x 00:13:07.460 22:58:38 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:07.460 [2024-07-24 22:58:39.047470] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:13:07.460 [2024-07-24 22:58:39.047520] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:07.460 EAL: No free 2048 kB hugepages reported on node 1 00:13:07.460 [2024-07-24 22:58:39.123331] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:07.460 [2024-07-24 22:58:39.162339] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:07.460 [2024-07-24 22:58:39.162448] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:07.460 [2024-07-24 22:58:39.162458] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:07.460 [2024-07-24 22:58:39.162467] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:07.460 [2024-07-24 22:58:39.162510] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:07.460 [2024-07-24 22:58:39.162531] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:07.460 [2024-07-24 22:58:39.162618] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:07.460 [2024-07-24 22:58:39.162620] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:07.460 22:58:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:07.460 22:58:39 -- common/autotest_common.sh@852 -- # return 0 00:13:07.460 22:58:39 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:07.460 22:58:39 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:07.460 22:58:39 -- common/autotest_common.sh@10 -- # set +x 00:13:07.460 22:58:39 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:07.460 22:58:39 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:07.460 22:58:39 -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode7823 00:13:07.719 [2024-07-24 22:58:40.038478] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:07.719 22:58:40 -- target/invalid.sh@40 -- # out='request: 00:13:07.719 { 00:13:07.719 "nqn": "nqn.2016-06.io.spdk:cnode7823", 00:13:07.719 "tgt_name": "foobar", 00:13:07.719 "method": "nvmf_create_subsystem", 00:13:07.719 "req_id": 1 00:13:07.719 } 00:13:07.720 Got JSON-RPC error response 00:13:07.720 response: 00:13:07.720 { 00:13:07.720 "code": -32603, 00:13:07.720 "message": "Unable to find target foobar" 00:13:07.720 }' 00:13:07.720 22:58:40 -- target/invalid.sh@41 -- # [[ request: 00:13:07.720 { 00:13:07.720 "nqn": "nqn.2016-06.io.spdk:cnode7823", 00:13:07.720 "tgt_name": "foobar", 00:13:07.720 "method": "nvmf_create_subsystem", 
00:13:07.720 "req_id": 1 00:13:07.720 } 00:13:07.720 Got JSON-RPC error response 00:13:07.720 response: 00:13:07.720 { 00:13:07.720 "code": -32603, 00:13:07.720 "message": "Unable to find target foobar" 00:13:07.720 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:07.720 22:58:40 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:07.720 22:58:40 -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode12368 00:13:07.979 [2024-07-24 22:58:40.231214] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12368: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:07.979 22:58:40 -- target/invalid.sh@45 -- # out='request: 00:13:07.979 { 00:13:07.979 "nqn": "nqn.2016-06.io.spdk:cnode12368", 00:13:07.979 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:07.979 "method": "nvmf_create_subsystem", 00:13:07.979 "req_id": 1 00:13:07.979 } 00:13:07.979 Got JSON-RPC error response 00:13:07.979 response: 00:13:07.979 { 00:13:07.979 "code": -32602, 00:13:07.979 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:07.979 }' 00:13:07.979 22:58:40 -- target/invalid.sh@46 -- # [[ request: 00:13:07.979 { 00:13:07.979 "nqn": "nqn.2016-06.io.spdk:cnode12368", 00:13:07.979 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:07.979 "method": "nvmf_create_subsystem", 00:13:07.979 "req_id": 1 00:13:07.979 } 00:13:07.979 Got JSON-RPC error response 00:13:07.979 response: 00:13:07.979 { 00:13:07.979 "code": -32602, 00:13:07.979 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:07.979 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:07.979 22:58:40 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:07.979 22:58:40 -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode14691 00:13:08.238 [2024-07-24 22:58:40.419797] nvmf_rpc.c: 
427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14691: invalid model number 'SPDK_Controller' 00:13:08.238 22:58:40 -- target/invalid.sh@50 -- # out='request: 00:13:08.238 { 00:13:08.238 "nqn": "nqn.2016-06.io.spdk:cnode14691", 00:13:08.238 "model_number": "SPDK_Controller\u001f", 00:13:08.238 "method": "nvmf_create_subsystem", 00:13:08.238 "req_id": 1 00:13:08.238 } 00:13:08.238 Got JSON-RPC error response 00:13:08.238 response: 00:13:08.238 { 00:13:08.238 "code": -32602, 00:13:08.238 "message": "Invalid MN SPDK_Controller\u001f" 00:13:08.238 }' 00:13:08.238 22:58:40 -- target/invalid.sh@51 -- # [[ request: 00:13:08.238 { 00:13:08.238 "nqn": "nqn.2016-06.io.spdk:cnode14691", 00:13:08.238 "model_number": "SPDK_Controller\u001f", 00:13:08.238 "method": "nvmf_create_subsystem", 00:13:08.238 "req_id": 1 00:13:08.238 } 00:13:08.238 Got JSON-RPC error response 00:13:08.238 response: 00:13:08.238 { 00:13:08.238 "code": -32602, 00:13:08.238 "message": "Invalid MN SPDK_Controller\u001f" 00:13:08.238 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:08.238 22:58:40 -- target/invalid.sh@54 -- # gen_random_s 21 00:13:08.238 22:58:40 -- target/invalid.sh@19 -- # local length=21 ll 00:13:08.238 22:58:40 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:08.238 22:58:40 -- target/invalid.sh@21 -- # local chars 00:13:08.238 22:58:40 -- target/invalid.sh@22 -- # local string 00:13:08.238 22:58:40 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:08.238 22:58:40 -- 
target/invalid.sh@24 -- # (( ll < length )) 00:13:08.238 22:58:40 -- target/invalid.sh@25 -- # printf %x 34 00:13:08.238 22:58:40 -- target/invalid.sh@25 -- # echo -e '\x22' 00:13:08.238 22:58:40 -- target/invalid.sh@25 -- # string+='"' 00:13:08.238 22:58:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.238 22:58:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.238 22:58:40 -- target/invalid.sh@25 -- # printf %x 61 00:13:08.238 22:58:40 -- target/invalid.sh@25 -- # echo -e '\x3d' 00:13:08.238 22:58:40 -- target/invalid.sh@25 -- # string+== 00:13:08.238 22:58:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.238 22:58:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.238 22:58:40 -- target/invalid.sh@25 -- # printf %x 51 00:13:08.238 22:58:40 -- target/invalid.sh@25 -- # echo -e '\x33' 00:13:08.238 22:58:40 -- target/invalid.sh@25 -- # string+=3 00:13:08.238 22:58:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.238 22:58:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.238 22:58:40 -- target/invalid.sh@25 -- # printf %x 59 00:13:08.238 22:58:40 -- target/invalid.sh@25 -- # echo -e '\x3b' 00:13:08.238 22:58:40 -- target/invalid.sh@25 -- # string+=';' 00:13:08.238 22:58:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.238 22:58:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.238 22:58:40 -- target/invalid.sh@25 -- # printf %x 63 00:13:08.238 22:58:40 -- target/invalid.sh@25 -- # echo -e '\x3f' 00:13:08.238 22:58:40 -- target/invalid.sh@25 -- # string+='?' 
00:13:08.238 22:58:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.238 22:58:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.238 22:58:40 -- target/invalid.sh@25 -- # printf %x 120 00:13:08.238 22:58:40 -- target/invalid.sh@25 -- # echo -e '\x78' 00:13:08.238 22:58:40 -- target/invalid.sh@25 -- # string+=x 00:13:08.238 22:58:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.238 22:58:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.238 22:58:40 -- target/invalid.sh@25 -- # printf %x 35 00:13:08.238 22:58:40 -- target/invalid.sh@25 -- # echo -e '\x23' 00:13:08.238 22:58:40 -- target/invalid.sh@25 -- # string+='#' 00:13:08.238 22:58:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.238 22:58:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.238 22:58:40 -- target/invalid.sh@25 -- # printf %x 127 00:13:08.238 22:58:40 -- target/invalid.sh@25 -- # echo -e '\x7f' 00:13:08.238 22:58:40 -- target/invalid.sh@25 -- # string+=$'\177' 00:13:08.238 22:58:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.239 22:58:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.239 22:58:40 -- target/invalid.sh@25 -- # printf %x 103 00:13:08.239 22:58:40 -- target/invalid.sh@25 -- # echo -e '\x67' 00:13:08.239 22:58:40 -- target/invalid.sh@25 -- # string+=g 00:13:08.239 22:58:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.239 22:58:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.239 22:58:40 -- target/invalid.sh@25 -- # printf %x 45 00:13:08.239 22:58:40 -- target/invalid.sh@25 -- # echo -e '\x2d' 00:13:08.239 22:58:40 -- target/invalid.sh@25 -- # string+=- 00:13:08.239 22:58:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.239 22:58:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.239 22:58:40 -- target/invalid.sh@25 -- # printf %x 115 00:13:08.239 22:58:40 -- target/invalid.sh@25 -- # echo -e '\x73' 00:13:08.239 22:58:40 -- target/invalid.sh@25 -- # string+=s 00:13:08.239 22:58:40 -- target/invalid.sh@24 -- # (( 
ll++ )) 00:13:08.239 22:58:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.239 22:58:40 -- target/invalid.sh@25 -- # printf %x 61 00:13:08.239 22:58:40 -- target/invalid.sh@25 -- # echo -e '\x3d' 00:13:08.239 22:58:40 -- target/invalid.sh@25 -- # string+== 00:13:08.239 22:58:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.239 22:58:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.239 22:58:40 -- target/invalid.sh@25 -- # printf %x 93 00:13:08.239 22:58:40 -- target/invalid.sh@25 -- # echo -e '\x5d' 00:13:08.239 22:58:40 -- target/invalid.sh@25 -- # string+=']' 00:13:08.239 22:58:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.239 22:58:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.239 22:58:40 -- target/invalid.sh@25 -- # printf %x 54 00:13:08.239 22:58:40 -- target/invalid.sh@25 -- # echo -e '\x36' 00:13:08.239 22:58:40 -- target/invalid.sh@25 -- # string+=6 00:13:08.239 22:58:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.239 22:58:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.239 22:58:40 -- target/invalid.sh@25 -- # printf %x 52 00:13:08.239 22:58:40 -- target/invalid.sh@25 -- # echo -e '\x34' 00:13:08.239 22:58:40 -- target/invalid.sh@25 -- # string+=4 00:13:08.239 22:58:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.239 22:58:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.239 22:58:40 -- target/invalid.sh@25 -- # printf %x 105 00:13:08.239 22:58:40 -- target/invalid.sh@25 -- # echo -e '\x69' 00:13:08.239 22:58:40 -- target/invalid.sh@25 -- # string+=i 00:13:08.239 22:58:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.239 22:58:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.239 22:58:40 -- target/invalid.sh@25 -- # printf %x 124 00:13:08.239 22:58:40 -- target/invalid.sh@25 -- # echo -e '\x7c' 00:13:08.239 22:58:40 -- target/invalid.sh@25 -- # string+='|' 00:13:08.239 22:58:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.239 22:58:40 -- target/invalid.sh@24 -- # (( 
ll < length )) 00:13:08.239 22:58:40 -- target/invalid.sh@25 -- # printf %x 34 00:13:08.239 22:58:40 -- target/invalid.sh@25 -- # echo -e '\x22' 00:13:08.239 22:58:40 -- target/invalid.sh@25 -- # string+='"' 00:13:08.239 22:58:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.239 22:58:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.239 22:58:40 -- target/invalid.sh@25 -- # printf %x 43 00:13:08.239 22:58:40 -- target/invalid.sh@25 -- # echo -e '\x2b' 00:13:08.239 22:58:40 -- target/invalid.sh@25 -- # string+=+ 00:13:08.239 22:58:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.239 22:58:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.239 22:58:40 -- target/invalid.sh@25 -- # printf %x 81 00:13:08.239 22:58:40 -- target/invalid.sh@25 -- # echo -e '\x51' 00:13:08.239 22:58:40 -- target/invalid.sh@25 -- # string+=Q 00:13:08.239 22:58:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.239 22:58:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.239 22:58:40 -- target/invalid.sh@25 -- # printf %x 74 00:13:08.239 22:58:40 -- target/invalid.sh@25 -- # echo -e '\x4a' 00:13:08.239 22:58:40 -- target/invalid.sh@25 -- # string+=J 00:13:08.239 22:58:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.239 22:58:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.239 22:58:40 -- target/invalid.sh@28 -- # [[ " == \- ]] 00:13:08.239 22:58:40 -- target/invalid.sh@31 -- # echo '"=3;?x#g-s=]64i|"+QJ' 00:13:08.239 22:58:40 -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '"=3;?x#g-s=]64i|"+QJ' nqn.2016-06.io.spdk:cnode2140 00:13:08.498 [2024-07-24 22:58:40.768960] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2140: invalid serial number '"=3;?x#g-s=]64i|"+QJ' 00:13:08.498 22:58:40 -- target/invalid.sh@54 -- # out='request: 00:13:08.498 { 00:13:08.498 "nqn": "nqn.2016-06.io.spdk:cnode2140", 00:13:08.498 "serial_number": 
"\"=3;?x#\u007fg-s=]64i|\"+QJ", 00:13:08.498 "method": "nvmf_create_subsystem", 00:13:08.498 "req_id": 1 00:13:08.498 } 00:13:08.498 Got JSON-RPC error response 00:13:08.498 response: 00:13:08.498 { 00:13:08.498 "code": -32602, 00:13:08.498 "message": "Invalid SN \"=3;?x#\u007fg-s=]64i|\"+QJ" 00:13:08.498 }' 00:13:08.498 22:58:40 -- target/invalid.sh@55 -- # [[ request: 00:13:08.498 { 00:13:08.498 "nqn": "nqn.2016-06.io.spdk:cnode2140", 00:13:08.498 "serial_number": "\"=3;?x#\u007fg-s=]64i|\"+QJ", 00:13:08.498 "method": "nvmf_create_subsystem", 00:13:08.498 "req_id": 1 00:13:08.498 } 00:13:08.498 Got JSON-RPC error response 00:13:08.498 response: 00:13:08.498 { 00:13:08.498 "code": -32602, 00:13:08.498 "message": "Invalid SN \"=3;?x#\u007fg-s=]64i|\"+QJ" 00:13:08.498 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:08.498 22:58:40 -- target/invalid.sh@58 -- # gen_random_s 41 00:13:08.498 22:58:40 -- target/invalid.sh@19 -- # local length=41 ll 00:13:08.498 22:58:40 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:08.498 22:58:40 -- target/invalid.sh@21 -- # local chars 00:13:08.498 22:58:40 -- target/invalid.sh@22 -- # local string 00:13:08.498 22:58:40 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:08.499 22:58:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.499 22:58:40 -- target/invalid.sh@25 -- # printf %x 73 00:13:08.499 22:58:40 -- target/invalid.sh@25 -- # echo -e '\x49' 00:13:08.499 22:58:40 -- target/invalid.sh@25 -- # string+=I 00:13:08.499 22:58:40 -- 
target/invalid.sh@24 -- # (( ll++ )) 00:13:08.499 22:58:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.499 22:58:40 -- target/invalid.sh@25 -- # printf %x 121 00:13:08.499 22:58:40 -- target/invalid.sh@25 -- # echo -e '\x79' 00:13:08.499 22:58:40 -- target/invalid.sh@25 -- # string+=y 00:13:08.499 22:58:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.499 22:58:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.499 22:58:40 -- target/invalid.sh@25 -- # printf %x 57 00:13:08.499 22:58:40 -- target/invalid.sh@25 -- # echo -e '\x39' 00:13:08.499 22:58:40 -- target/invalid.sh@25 -- # string+=9 00:13:08.499 22:58:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.499 22:58:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.499 22:58:40 -- target/invalid.sh@25 -- # printf %x 41 00:13:08.499 22:58:40 -- target/invalid.sh@25 -- # echo -e '\x29' 00:13:08.499 22:58:40 -- target/invalid.sh@25 -- # string+=')' 00:13:08.499 22:58:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.499 22:58:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.499 22:58:40 -- target/invalid.sh@25 -- # printf %x 78 00:13:08.499 22:58:40 -- target/invalid.sh@25 -- # echo -e '\x4e' 00:13:08.499 22:58:40 -- target/invalid.sh@25 -- # string+=N 00:13:08.499 22:58:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.499 22:58:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.499 22:58:40 -- target/invalid.sh@25 -- # printf %x 95 00:13:08.499 22:58:40 -- target/invalid.sh@25 -- # echo -e '\x5f' 00:13:08.499 22:58:40 -- target/invalid.sh@25 -- # string+=_ 00:13:08.499 22:58:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.499 22:58:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.499 22:58:40 -- target/invalid.sh@25 -- # printf %x 56 00:13:08.499 22:58:40 -- target/invalid.sh@25 -- # echo -e '\x38' 00:13:08.499 22:58:40 -- target/invalid.sh@25 -- # string+=8 00:13:08.499 22:58:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.499 22:58:40 -- 
target/invalid.sh@24 -- # (( ll < length )) 00:13:08.499 22:58:40 -- target/invalid.sh@25 -- # printf %x 105 00:13:08.499 22:58:40 -- target/invalid.sh@25 -- # echo -e '\x69' 00:13:08.499 22:58:40 -- target/invalid.sh@25 -- # string+=i 00:13:08.499 22:58:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.499 22:58:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.499 22:58:40 -- target/invalid.sh@25 -- # printf %x 49 00:13:08.499 22:58:40 -- target/invalid.sh@25 -- # echo -e '\x31' 00:13:08.499 22:58:40 -- target/invalid.sh@25 -- # string+=1 00:13:08.499 22:58:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.499 22:58:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.499 22:58:40 -- target/invalid.sh@25 -- # printf %x 35 00:13:08.499 22:58:40 -- target/invalid.sh@25 -- # echo -e '\x23' 00:13:08.499 22:58:40 -- target/invalid.sh@25 -- # string+='#' 00:13:08.499 22:58:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.499 22:58:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.499 22:58:40 -- target/invalid.sh@25 -- # printf %x 62 00:13:08.499 22:58:40 -- target/invalid.sh@25 -- # echo -e '\x3e' 00:13:08.499 22:58:40 -- target/invalid.sh@25 -- # string+='>' 00:13:08.499 22:58:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.499 22:58:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.499 22:58:40 -- target/invalid.sh@25 -- # printf %x 125 00:13:08.499 22:58:40 -- target/invalid.sh@25 -- # echo -e '\x7d' 00:13:08.499 22:58:40 -- target/invalid.sh@25 -- # string+='}' 00:13:08.499 22:58:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.499 22:58:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.499 22:58:40 -- target/invalid.sh@25 -- # printf %x 104 00:13:08.499 22:58:40 -- target/invalid.sh@25 -- # echo -e '\x68' 00:13:08.499 22:58:40 -- target/invalid.sh@25 -- # string+=h 00:13:08.499 22:58:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.499 22:58:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.499 
22:58:40 -- target/invalid.sh@25 -- # printf %x 106 00:13:08.499 22:58:40 -- target/invalid.sh@25 -- # echo -e '\x6a' 00:13:08.499 22:58:40 -- target/invalid.sh@25 -- # string+=j 00:13:08.499 22:58:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.499 22:58:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.499 22:58:40 -- target/invalid.sh@25 -- # printf %x 113 00:13:08.499 22:58:40 -- target/invalid.sh@25 -- # echo -e '\x71' 00:13:08.499 22:58:40 -- target/invalid.sh@25 -- # string+=q 00:13:08.499 22:58:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.499 22:58:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.499 22:58:40 -- target/invalid.sh@25 -- # printf %x 117 00:13:08.499 22:58:40 -- target/invalid.sh@25 -- # echo -e '\x75' 00:13:08.499 22:58:40 -- target/invalid.sh@25 -- # string+=u 00:13:08.499 22:58:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.499 22:58:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.758 22:58:40 -- target/invalid.sh@25 -- # printf %x 99 00:13:08.758 22:58:40 -- target/invalid.sh@25 -- # echo -e '\x63' 00:13:08.758 22:58:40 -- target/invalid.sh@25 -- # string+=c 00:13:08.758 22:58:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.758 22:58:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.758 22:58:40 -- target/invalid.sh@25 -- # printf %x 121 00:13:08.758 22:58:40 -- target/invalid.sh@25 -- # echo -e '\x79' 00:13:08.758 22:58:40 -- target/invalid.sh@25 -- # string+=y 00:13:08.758 22:58:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.758 22:58:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.758 22:58:40 -- target/invalid.sh@25 -- # printf %x 84 00:13:08.758 22:58:40 -- target/invalid.sh@25 -- # echo -e '\x54' 00:13:08.758 22:58:40 -- target/invalid.sh@25 -- # string+=T 00:13:08.758 22:58:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.758 22:58:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.758 22:58:40 -- target/invalid.sh@25 -- # printf %x 41 00:13:08.758 
22:58:40 -- target/invalid.sh@25 -- # echo -e '\x29' 00:13:08.758 22:58:40 -- target/invalid.sh@25 -- # string+=')' 00:13:08.758 22:58:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.758 22:58:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.758 22:58:40 -- target/invalid.sh@25 -- # printf %x 41 00:13:08.758 22:58:40 -- target/invalid.sh@25 -- # echo -e '\x29' 00:13:08.758 22:58:40 -- target/invalid.sh@25 -- # string+=')' 00:13:08.758 22:58:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.758 22:58:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.758 22:58:40 -- target/invalid.sh@25 -- # printf %x 60 00:13:08.758 22:58:40 -- target/invalid.sh@25 -- # echo -e '\x3c' 00:13:08.758 22:58:40 -- target/invalid.sh@25 -- # string+='<' 00:13:08.758 22:58:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.758 22:58:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.758 22:58:40 -- target/invalid.sh@25 -- # printf %x 65 00:13:08.758 22:58:40 -- target/invalid.sh@25 -- # echo -e '\x41' 00:13:08.758 22:58:40 -- target/invalid.sh@25 -- # string+=A 00:13:08.758 22:58:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.758 22:58:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.758 22:58:40 -- target/invalid.sh@25 -- # printf %x 32 00:13:08.758 22:58:40 -- target/invalid.sh@25 -- # echo -e '\x20' 00:13:08.758 22:58:40 -- target/invalid.sh@25 -- # string+=' ' 00:13:08.758 22:58:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.758 22:58:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.758 22:58:40 -- target/invalid.sh@25 -- # printf %x 67 00:13:08.758 22:58:40 -- target/invalid.sh@25 -- # echo -e '\x43' 00:13:08.759 22:58:40 -- target/invalid.sh@25 -- # string+=C 00:13:08.759 22:58:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.759 22:58:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.759 22:58:40 -- target/invalid.sh@25 -- # printf %x 119 00:13:08.759 22:58:40 -- target/invalid.sh@25 -- # echo -e '\x77' 
00:13:08.759 22:58:41 -- target/invalid.sh@25 -- # string+=w 00:13:08.759 22:58:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.759 22:58:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.759 22:58:41 -- target/invalid.sh@25 -- # printf %x 52 00:13:08.759 22:58:41 -- target/invalid.sh@25 -- # echo -e '\x34' 00:13:08.759 22:58:41 -- target/invalid.sh@25 -- # string+=4 00:13:08.759 22:58:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.759 22:58:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.759 22:58:41 -- target/invalid.sh@25 -- # printf %x 72 00:13:08.759 22:58:41 -- target/invalid.sh@25 -- # echo -e '\x48' 00:13:08.759 22:58:41 -- target/invalid.sh@25 -- # string+=H 00:13:08.759 22:58:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.759 22:58:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.759 22:58:41 -- target/invalid.sh@25 -- # printf %x 62 00:13:08.759 22:58:41 -- target/invalid.sh@25 -- # echo -e '\x3e' 00:13:08.759 22:58:41 -- target/invalid.sh@25 -- # string+='>' 00:13:08.759 22:58:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.759 22:58:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.759 22:58:41 -- target/invalid.sh@25 -- # printf %x 58 00:13:08.759 22:58:41 -- target/invalid.sh@25 -- # echo -e '\x3a' 00:13:08.759 22:58:41 -- target/invalid.sh@25 -- # string+=: 00:13:08.759 22:58:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.759 22:58:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.759 22:58:41 -- target/invalid.sh@25 -- # printf %x 90 00:13:08.759 22:58:41 -- target/invalid.sh@25 -- # echo -e '\x5a' 00:13:08.759 22:58:41 -- target/invalid.sh@25 -- # string+=Z 00:13:08.759 22:58:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.759 22:58:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.759 22:58:41 -- target/invalid.sh@25 -- # printf %x 120 00:13:08.759 22:58:41 -- target/invalid.sh@25 -- # echo -e '\x78' 00:13:08.759 22:58:41 -- target/invalid.sh@25 -- # string+=x 
00:13:08.759 22:58:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.759 22:58:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.759 22:58:41 -- target/invalid.sh@25 -- # printf %x 36 00:13:08.759 22:58:41 -- target/invalid.sh@25 -- # echo -e '\x24' 00:13:08.759 22:58:41 -- target/invalid.sh@25 -- # string+='$' 00:13:08.759 22:58:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.759 22:58:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.759 22:58:41 -- target/invalid.sh@25 -- # printf %x 52 00:13:08.759 22:58:41 -- target/invalid.sh@25 -- # echo -e '\x34' 00:13:08.759 22:58:41 -- target/invalid.sh@25 -- # string+=4 00:13:08.759 22:58:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.759 22:58:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.759 22:58:41 -- target/invalid.sh@25 -- # printf %x 91 00:13:08.759 22:58:41 -- target/invalid.sh@25 -- # echo -e '\x5b' 00:13:08.759 22:58:41 -- target/invalid.sh@25 -- # string+='[' 00:13:08.759 22:58:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.759 22:58:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.759 22:58:41 -- target/invalid.sh@25 -- # printf %x 63 00:13:08.759 22:58:41 -- target/invalid.sh@25 -- # echo -e '\x3f' 00:13:08.759 22:58:41 -- target/invalid.sh@25 -- # string+='?' 
00:13:08.759 22:58:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.759 22:58:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.759 22:58:41 -- target/invalid.sh@25 -- # printf %x 105 00:13:08.759 22:58:41 -- target/invalid.sh@25 -- # echo -e '\x69' 00:13:08.759 22:58:41 -- target/invalid.sh@25 -- # string+=i 00:13:08.759 22:58:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.759 22:58:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.759 22:58:41 -- target/invalid.sh@25 -- # printf %x 84 00:13:08.759 22:58:41 -- target/invalid.sh@25 -- # echo -e '\x54' 00:13:08.759 22:58:41 -- target/invalid.sh@25 -- # string+=T 00:13:08.759 22:58:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.759 22:58:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.759 22:58:41 -- target/invalid.sh@25 -- # printf %x 94 00:13:08.759 22:58:41 -- target/invalid.sh@25 -- # echo -e '\x5e' 00:13:08.759 22:58:41 -- target/invalid.sh@25 -- # string+='^' 00:13:08.759 22:58:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.759 22:58:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.759 22:58:41 -- target/invalid.sh@25 -- # printf %x 47 00:13:08.759 22:58:41 -- target/invalid.sh@25 -- # echo -e '\x2f' 00:13:08.759 22:58:41 -- target/invalid.sh@25 -- # string+=/ 00:13:08.759 22:58:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.759 22:58:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.759 22:58:41 -- target/invalid.sh@25 -- # printf %x 77 00:13:08.759 22:58:41 -- target/invalid.sh@25 -- # echo -e '\x4d' 00:13:08.759 22:58:41 -- target/invalid.sh@25 -- # string+=M 00:13:08.759 22:58:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.759 22:58:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.759 22:58:41 -- target/invalid.sh@28 -- # [[ I == \- ]] 00:13:08.759 22:58:41 -- target/invalid.sh@31 -- # echo 'Iy9)N_8i1#>}hjqucyT)):Zx$4[?iT^/M' 00:13:08.759 22:58:41 -- target/invalid.sh@58 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'Iy9)N_8i1#>}hjqucyT)):Zx$4[?iT^/M' nqn.2016-06.io.spdk:cnode20550 00:13:09.018 [2024-07-24 22:58:41.266678] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20550: invalid model number 'Iy9)N_8i1#>}hjqucyT)):Zx$4[?iT^/M' 00:13:09.018 22:58:41 -- target/invalid.sh@58 -- # out='request: 00:13:09.018 { 00:13:09.018 "nqn": "nqn.2016-06.io.spdk:cnode20550", 00:13:09.018 "model_number": "Iy9)N_8i1#>}hjqucyT)):Zx$4[?iT^/M", 00:13:09.018 "method": "nvmf_create_subsystem", 00:13:09.018 "req_id": 1 00:13:09.018 } 00:13:09.018 Got JSON-RPC error response 00:13:09.018 response: 00:13:09.018 { 00:13:09.018 "code": -32602, 00:13:09.018 "message": "Invalid MN Iy9)N_8i1#>}hjqucyT)):Zx$4[?iT^/M" 00:13:09.018 }' 00:13:09.018 22:58:41 -- target/invalid.sh@59 -- # [[ request: 00:13:09.018 { 00:13:09.018 "nqn": "nqn.2016-06.io.spdk:cnode20550", 00:13:09.018 "model_number": "Iy9)N_8i1#>}hjqucyT)):Zx$4[?iT^/M", 00:13:09.018 "method": "nvmf_create_subsystem", 00:13:09.018 "req_id": 1 00:13:09.018 } 00:13:09.018 Got JSON-RPC error response 00:13:09.018 response: 00:13:09.018 { 00:13:09.018 "code": -32602, 00:13:09.018 "message": "Invalid MN Iy9)N_8i1#>}hjqucyT)):Zx$4[?iT^/M" 00:13:09.018 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:09.018 22:58:41 -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:13:09.281 [2024-07-24 22:58:41.451349] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:09.282 22:58:41 -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:13:09.282 22:58:41 -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:13:09.282 22:58:41 -- target/invalid.sh@67 -- # echo '' 00:13:09.282 22:58:41 -- target/invalid.sh@67 -- # head -n 1 00:13:09.282 22:58:41 -- 
target/invalid.sh@67 -- # IP= 00:13:09.282 22:58:41 -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:13:09.540 [2024-07-24 22:58:41.832686] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:13:09.540 22:58:41 -- target/invalid.sh@69 -- # out='request: 00:13:09.540 { 00:13:09.540 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:09.540 "listen_address": { 00:13:09.540 "trtype": "tcp", 00:13:09.540 "traddr": "", 00:13:09.540 "trsvcid": "4421" 00:13:09.540 }, 00:13:09.540 "method": "nvmf_subsystem_remove_listener", 00:13:09.540 "req_id": 1 00:13:09.540 } 00:13:09.540 Got JSON-RPC error response 00:13:09.540 response: 00:13:09.540 { 00:13:09.540 "code": -32602, 00:13:09.540 "message": "Invalid parameters" 00:13:09.540 }' 00:13:09.540 22:58:41 -- target/invalid.sh@70 -- # [[ request: 00:13:09.540 { 00:13:09.540 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:09.540 "listen_address": { 00:13:09.540 "trtype": "tcp", 00:13:09.540 "traddr": "", 00:13:09.540 "trsvcid": "4421" 00:13:09.540 }, 00:13:09.540 "method": "nvmf_subsystem_remove_listener", 00:13:09.540 "req_id": 1 00:13:09.540 } 00:13:09.540 Got JSON-RPC error response 00:13:09.540 response: 00:13:09.540 { 00:13:09.540 "code": -32602, 00:13:09.540 "message": "Invalid parameters" 00:13:09.540 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:13:09.540 22:58:41 -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5418 -i 0 00:13:09.800 [2024-07-24 22:58:42.017272] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5418: invalid cntlid range [0-65519] 00:13:09.800 22:58:42 -- target/invalid.sh@73 -- # out='request: 00:13:09.800 { 00:13:09.800 "nqn": "nqn.2016-06.io.spdk:cnode5418", 00:13:09.800 "min_cntlid": 0, 00:13:09.800 "method": 
"nvmf_create_subsystem", 00:13:09.800 "req_id": 1 00:13:09.800 } 00:13:09.800 Got JSON-RPC error response 00:13:09.800 response: 00:13:09.800 { 00:13:09.800 "code": -32602, 00:13:09.800 "message": "Invalid cntlid range [0-65519]" 00:13:09.800 }' 00:13:09.800 22:58:42 -- target/invalid.sh@74 -- # [[ request: 00:13:09.800 { 00:13:09.800 "nqn": "nqn.2016-06.io.spdk:cnode5418", 00:13:09.800 "min_cntlid": 0, 00:13:09.800 "method": "nvmf_create_subsystem", 00:13:09.800 "req_id": 1 00:13:09.800 } 00:13:09.800 Got JSON-RPC error response 00:13:09.800 response: 00:13:09.800 { 00:13:09.800 "code": -32602, 00:13:09.800 "message": "Invalid cntlid range [0-65519]" 00:13:09.800 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:09.800 22:58:42 -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode27913 -i 65520 00:13:09.800 [2024-07-24 22:58:42.197926] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27913: invalid cntlid range [65520-65519] 00:13:09.800 22:58:42 -- target/invalid.sh@75 -- # out='request: 00:13:09.800 { 00:13:09.800 "nqn": "nqn.2016-06.io.spdk:cnode27913", 00:13:09.800 "min_cntlid": 65520, 00:13:09.800 "method": "nvmf_create_subsystem", 00:13:09.800 "req_id": 1 00:13:09.800 } 00:13:09.800 Got JSON-RPC error response 00:13:09.800 response: 00:13:09.800 { 00:13:09.800 "code": -32602, 00:13:09.800 "message": "Invalid cntlid range [65520-65519]" 00:13:09.800 }' 00:13:09.800 22:58:42 -- target/invalid.sh@76 -- # [[ request: 00:13:09.800 { 00:13:09.800 "nqn": "nqn.2016-06.io.spdk:cnode27913", 00:13:09.800 "min_cntlid": 65520, 00:13:09.800 "method": "nvmf_create_subsystem", 00:13:09.800 "req_id": 1 00:13:09.800 } 00:13:09.800 Got JSON-RPC error response 00:13:09.800 response: 00:13:09.800 { 00:13:09.800 "code": -32602, 00:13:09.800 "message": "Invalid cntlid range [65520-65519]" 00:13:09.800 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* 
]] 00:13:10.068 22:58:42 -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode16734 -I 0 00:13:10.068 [2024-07-24 22:58:42.378534] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16734: invalid cntlid range [1-0] 00:13:10.068 22:58:42 -- target/invalid.sh@77 -- # out='request: 00:13:10.068 { 00:13:10.068 "nqn": "nqn.2016-06.io.spdk:cnode16734", 00:13:10.068 "max_cntlid": 0, 00:13:10.068 "method": "nvmf_create_subsystem", 00:13:10.068 "req_id": 1 00:13:10.068 } 00:13:10.068 Got JSON-RPC error response 00:13:10.068 response: 00:13:10.068 { 00:13:10.068 "code": -32602, 00:13:10.068 "message": "Invalid cntlid range [1-0]" 00:13:10.068 }' 00:13:10.068 22:58:42 -- target/invalid.sh@78 -- # [[ request: 00:13:10.068 { 00:13:10.068 "nqn": "nqn.2016-06.io.spdk:cnode16734", 00:13:10.068 "max_cntlid": 0, 00:13:10.068 "method": "nvmf_create_subsystem", 00:13:10.068 "req_id": 1 00:13:10.068 } 00:13:10.068 Got JSON-RPC error response 00:13:10.068 response: 00:13:10.068 { 00:13:10.068 "code": -32602, 00:13:10.068 "message": "Invalid cntlid range [1-0]" 00:13:10.068 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:10.068 22:58:42 -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode31619 -I 65520 00:13:10.331 [2024-07-24 22:58:42.559178] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31619: invalid cntlid range [1-65520] 00:13:10.331 22:58:42 -- target/invalid.sh@79 -- # out='request: 00:13:10.331 { 00:13:10.331 "nqn": "nqn.2016-06.io.spdk:cnode31619", 00:13:10.331 "max_cntlid": 65520, 00:13:10.331 "method": "nvmf_create_subsystem", 00:13:10.331 "req_id": 1 00:13:10.331 } 00:13:10.331 Got JSON-RPC error response 00:13:10.331 response: 00:13:10.331 { 00:13:10.331 "code": -32602, 00:13:10.331 "message": "Invalid cntlid range 
[1-65520]" 00:13:10.331 }' 00:13:10.331 22:58:42 -- target/invalid.sh@80 -- # [[ request: 00:13:10.331 { 00:13:10.331 "nqn": "nqn.2016-06.io.spdk:cnode31619", 00:13:10.331 "max_cntlid": 65520, 00:13:10.331 "method": "nvmf_create_subsystem", 00:13:10.331 "req_id": 1 00:13:10.331 } 00:13:10.331 Got JSON-RPC error response 00:13:10.331 response: 00:13:10.331 { 00:13:10.331 "code": -32602, 00:13:10.331 "message": "Invalid cntlid range [1-65520]" 00:13:10.331 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:10.331 22:58:42 -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2712 -i 6 -I 5 00:13:10.331 [2024-07-24 22:58:42.723760] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2712: invalid cntlid range [6-5] 00:13:10.331 22:58:42 -- target/invalid.sh@83 -- # out='request: 00:13:10.331 { 00:13:10.331 "nqn": "nqn.2016-06.io.spdk:cnode2712", 00:13:10.331 "min_cntlid": 6, 00:13:10.331 "max_cntlid": 5, 00:13:10.331 "method": "nvmf_create_subsystem", 00:13:10.331 "req_id": 1 00:13:10.331 } 00:13:10.331 Got JSON-RPC error response 00:13:10.331 response: 00:13:10.331 { 00:13:10.331 "code": -32602, 00:13:10.331 "message": "Invalid cntlid range [6-5]" 00:13:10.331 }' 00:13:10.331 22:58:42 -- target/invalid.sh@84 -- # [[ request: 00:13:10.331 { 00:13:10.331 "nqn": "nqn.2016-06.io.spdk:cnode2712", 00:13:10.331 "min_cntlid": 6, 00:13:10.331 "max_cntlid": 5, 00:13:10.331 "method": "nvmf_create_subsystem", 00:13:10.331 "req_id": 1 00:13:10.331 } 00:13:10.331 Got JSON-RPC error response 00:13:10.331 response: 00:13:10.331 { 00:13:10.331 "code": -32602, 00:13:10.331 "message": "Invalid cntlid range [6-5]" 00:13:10.331 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:10.331 22:58:42 -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:13:10.591 
22:58:42 -- target/invalid.sh@87 -- # out='request: 00:13:10.591 { 00:13:10.591 "name": "foobar", 00:13:10.591 "method": "nvmf_delete_target", 00:13:10.591 "req_id": 1 00:13:10.591 } 00:13:10.591 Got JSON-RPC error response 00:13:10.591 response: 00:13:10.591 { 00:13:10.591 "code": -32602, 00:13:10.591 "message": "The specified target doesn'\''t exist, cannot delete it." 00:13:10.591 }' 00:13:10.591 22:58:42 -- target/invalid.sh@88 -- # [[ request: 00:13:10.591 { 00:13:10.591 "name": "foobar", 00:13:10.591 "method": "nvmf_delete_target", 00:13:10.591 "req_id": 1 00:13:10.591 } 00:13:10.591 Got JSON-RPC error response 00:13:10.591 response: 00:13:10.591 { 00:13:10.591 "code": -32602, 00:13:10.591 "message": "The specified target doesn't exist, cannot delete it." 00:13:10.591 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:13:10.591 22:58:42 -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:13:10.591 22:58:42 -- target/invalid.sh@91 -- # nvmftestfini 00:13:10.591 22:58:42 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:10.591 22:58:42 -- nvmf/common.sh@116 -- # sync 00:13:10.591 22:58:42 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:10.591 22:58:42 -- nvmf/common.sh@119 -- # set +e 00:13:10.591 22:58:42 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:10.591 22:58:42 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:10.591 rmmod nvme_tcp 00:13:10.591 rmmod nvme_fabrics 00:13:10.591 rmmod nvme_keyring 00:13:10.591 22:58:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:10.591 22:58:42 -- nvmf/common.sh@123 -- # set -e 00:13:10.591 22:58:42 -- nvmf/common.sh@124 -- # return 0 00:13:10.591 22:58:42 -- nvmf/common.sh@477 -- # '[' -n 3130119 ']' 00:13:10.591 22:58:42 -- nvmf/common.sh@478 -- # killprocess 3130119 00:13:10.591 22:58:42 -- common/autotest_common.sh@926 -- # '[' -z 3130119 ']' 00:13:10.591 22:58:42 -- common/autotest_common.sh@930 -- # kill -0 3130119 
00:13:10.591 22:58:42 -- common/autotest_common.sh@931 -- # uname 00:13:10.591 22:58:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:10.591 22:58:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3130119 00:13:10.591 22:58:42 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:10.591 22:58:42 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:10.591 22:58:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3130119' 00:13:10.591 killing process with pid 3130119 00:13:10.591 22:58:42 -- common/autotest_common.sh@945 -- # kill 3130119 00:13:10.591 22:58:42 -- common/autotest_common.sh@950 -- # wait 3130119 00:13:10.850 22:58:43 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:10.850 22:58:43 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:10.850 22:58:43 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:10.850 22:58:43 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:10.850 22:58:43 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:10.850 22:58:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:10.850 22:58:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:10.850 22:58:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:13.388 22:58:45 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:13.388 00:13:13.388 real 0m12.463s 00:13:13.388 user 0m19.514s 00:13:13.388 sys 0m5.838s 00:13:13.388 22:58:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:13.388 22:58:45 -- common/autotest_common.sh@10 -- # set +x 00:13:13.388 ************************************ 00:13:13.388 END TEST nvmf_invalid 00:13:13.388 ************************************ 00:13:13.388 22:58:45 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:13.388 22:58:45 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 
']' 00:13:13.388 22:58:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:13.388 22:58:45 -- common/autotest_common.sh@10 -- # set +x 00:13:13.388 ************************************ 00:13:13.388 START TEST nvmf_abort 00:13:13.388 ************************************ 00:13:13.388 22:58:45 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:13.388 * Looking for test storage... 00:13:13.388 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:13.388 22:58:45 -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:13.388 22:58:45 -- nvmf/common.sh@7 -- # uname -s 00:13:13.388 22:58:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:13.388 22:58:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:13.388 22:58:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:13.388 22:58:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:13.388 22:58:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:13.388 22:58:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:13.388 22:58:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:13.388 22:58:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:13.388 22:58:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:13.388 22:58:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:13.388 22:58:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:13:13.388 22:58:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:13:13.388 22:58:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:13.388 22:58:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:13.388 22:58:45 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:13.388 22:58:45 -- nvmf/common.sh@44 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:13.388 22:58:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:13.388 22:58:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:13.388 22:58:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:13.388 22:58:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.388 22:58:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.388 22:58:45 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.388 22:58:45 -- paths/export.sh@5 -- # export PATH 00:13:13.388 22:58:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.388 22:58:45 -- nvmf/common.sh@46 -- # : 0 00:13:13.388 22:58:45 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:13.388 22:58:45 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:13.388 22:58:45 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:13.388 22:58:45 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:13.388 22:58:45 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:13.388 22:58:45 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:13.388 22:58:45 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:13.388 22:58:45 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:13.388 22:58:45 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:13.388 22:58:45 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:13:13.388 22:58:45 -- target/abort.sh@14 -- # 
nvmftestinit 00:13:13.388 22:58:45 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:13.388 22:58:45 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:13.388 22:58:45 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:13.388 22:58:45 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:13.388 22:58:45 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:13.388 22:58:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:13.388 22:58:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:13.388 22:58:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:13.388 22:58:45 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:13.388 22:58:45 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:13.389 22:58:45 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:13.389 22:58:45 -- common/autotest_common.sh@10 -- # set +x 00:13:20.019 22:58:51 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:20.019 22:58:51 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:20.019 22:58:51 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:20.019 22:58:51 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:20.019 22:58:51 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:20.019 22:58:51 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:20.019 22:58:51 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:20.019 22:58:51 -- nvmf/common.sh@294 -- # net_devs=() 00:13:20.019 22:58:51 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:20.019 22:58:51 -- nvmf/common.sh@295 -- # e810=() 00:13:20.019 22:58:51 -- nvmf/common.sh@295 -- # local -ga e810 00:13:20.019 22:58:51 -- nvmf/common.sh@296 -- # x722=() 00:13:20.019 22:58:51 -- nvmf/common.sh@296 -- # local -ga x722 00:13:20.019 22:58:51 -- nvmf/common.sh@297 -- # mlx=() 00:13:20.019 22:58:51 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:20.019 22:58:51 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:20.019 22:58:51 -- 
nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:20.019 22:58:51 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:20.019 22:58:51 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:20.019 22:58:51 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:20.019 22:58:51 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:20.019 22:58:51 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:20.019 22:58:51 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:20.019 22:58:51 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:20.019 22:58:51 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:20.019 22:58:51 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:20.019 22:58:51 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:20.019 22:58:51 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:20.019 22:58:51 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:20.019 22:58:51 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:20.019 22:58:51 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:20.019 22:58:51 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:20.019 22:58:51 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:20.019 22:58:51 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:20.019 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:20.019 22:58:51 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:20.019 22:58:51 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:20.019 22:58:51 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:20.019 22:58:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:20.019 22:58:51 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:20.019 22:58:51 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:20.019 22:58:51 -- 
nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:20.019 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:20.019 22:58:51 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:20.019 22:58:51 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:20.019 22:58:51 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:20.019 22:58:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:20.019 22:58:51 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:20.019 22:58:51 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:20.019 22:58:51 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:20.019 22:58:51 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:20.019 22:58:51 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:20.019 22:58:51 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:20.019 22:58:51 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:20.019 22:58:51 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:20.019 22:58:51 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:20.019 Found net devices under 0000:af:00.0: cvl_0_0 00:13:20.019 22:58:51 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:20.019 22:58:51 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:20.019 22:58:51 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:20.019 22:58:51 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:20.019 22:58:51 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:20.019 22:58:51 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:20.019 Found net devices under 0000:af:00.1: cvl_0_1 00:13:20.019 22:58:51 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:20.019 22:58:51 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:20.019 22:58:51 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:20.019 22:58:51 -- nvmf/common.sh@404 -- # [[ yes == yes 
]] 00:13:20.019 22:58:51 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:20.019 22:58:51 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:20.019 22:58:51 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:20.019 22:58:51 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:20.019 22:58:51 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:20.019 22:58:51 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:20.019 22:58:51 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:20.019 22:58:51 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:20.019 22:58:51 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:20.019 22:58:51 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:20.019 22:58:51 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:20.019 22:58:51 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:20.019 22:58:51 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:20.019 22:58:51 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:20.019 22:58:51 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:20.019 22:58:51 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:20.019 22:58:51 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:20.019 22:58:51 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:20.019 22:58:51 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:20.019 22:58:52 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:20.019 22:58:52 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:20.019 22:58:52 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:20.019 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:20.019 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.233 ms 00:13:20.019 00:13:20.019 --- 10.0.0.2 ping statistics --- 00:13:20.019 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:20.019 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:13:20.019 22:58:52 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:20.019 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:20.019 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.092 ms 00:13:20.019 00:13:20.019 --- 10.0.0.1 ping statistics --- 00:13:20.019 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:20.019 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:13:20.020 22:58:52 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:20.020 22:58:52 -- nvmf/common.sh@410 -- # return 0 00:13:20.020 22:58:52 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:20.020 22:58:52 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:20.020 22:58:52 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:20.020 22:58:52 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:20.020 22:58:52 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:20.020 22:58:52 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:20.020 22:58:52 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:20.020 22:58:52 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:13:20.020 22:58:52 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:20.020 22:58:52 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:20.020 22:58:52 -- common/autotest_common.sh@10 -- # set +x 00:13:20.020 22:58:52 -- nvmf/common.sh@469 -- # nvmfpid=3134539 00:13:20.020 22:58:52 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:20.020 22:58:52 -- nvmf/common.sh@470 -- # waitforlisten 3134539 00:13:20.020 22:58:52 -- common/autotest_common.sh@819 -- 
# '[' -z 3134539 ']' 00:13:20.020 22:58:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:20.020 22:58:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:20.020 22:58:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:20.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:20.020 22:58:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:20.020 22:58:52 -- common/autotest_common.sh@10 -- # set +x 00:13:20.020 [2024-07-24 22:58:52.192538] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:13:20.020 [2024-07-24 22:58:52.192587] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:20.020 EAL: No free 2048 kB hugepages reported on node 1 00:13:20.020 [2024-07-24 22:58:52.266931] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:20.020 [2024-07-24 22:58:52.303203] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:20.020 [2024-07-24 22:58:52.303317] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:20.020 [2024-07-24 22:58:52.303328] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:20.020 [2024-07-24 22:58:52.303338] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:20.020 [2024-07-24 22:58:52.303378] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:20.020 [2024-07-24 22:58:52.303482] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:20.020 [2024-07-24 22:58:52.303484] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:20.589 22:58:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:20.589 22:58:52 -- common/autotest_common.sh@852 -- # return 0 00:13:20.589 22:58:52 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:20.589 22:58:52 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:20.589 22:58:52 -- common/autotest_common.sh@10 -- # set +x 00:13:20.848 22:58:53 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:20.848 22:58:53 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:13:20.848 22:58:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:20.848 22:58:53 -- common/autotest_common.sh@10 -- # set +x 00:13:20.848 [2024-07-24 22:58:53.034654] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:20.848 22:58:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:20.848 22:58:53 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:13:20.848 22:58:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:20.848 22:58:53 -- common/autotest_common.sh@10 -- # set +x 00:13:20.848 Malloc0 00:13:20.848 22:58:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:20.848 22:58:53 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:20.848 22:58:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:20.848 22:58:53 -- common/autotest_common.sh@10 -- # set +x 00:13:20.848 Delay0 00:13:20.848 22:58:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:20.848 22:58:53 -- target/abort.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:20.848 22:58:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:20.848 22:58:53 -- common/autotest_common.sh@10 -- # set +x 00:13:20.848 22:58:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:20.848 22:58:53 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:13:20.848 22:58:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:20.848 22:58:53 -- common/autotest_common.sh@10 -- # set +x 00:13:20.848 22:58:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:20.848 22:58:53 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:20.848 22:58:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:20.848 22:58:53 -- common/autotest_common.sh@10 -- # set +x 00:13:20.848 [2024-07-24 22:58:53.119104] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:20.848 22:58:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:20.848 22:58:53 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:20.848 22:58:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:20.848 22:58:53 -- common/autotest_common.sh@10 -- # set +x 00:13:20.848 22:58:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:20.848 22:58:53 -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:13:20.848 EAL: No free 2048 kB hugepages reported on node 1 00:13:20.848 [2024-07-24 22:58:53.233080] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:13:23.382 [2024-07-24 22:58:55.276091] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x13b6730 is same with the state(5) to be set 00:13:23.382 Initializing NVMe Controllers 00:13:23.382 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:13:23.382 controller IO queue size 128 less than required 00:13:23.382 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:13:23.382 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:13:23.382 Initialization complete. Launching workers. 00:13:23.382 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 126, failed: 41384 00:13:23.382 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 41448, failed to submit 62 00:13:23.382 success 41384, unsuccess 64, failed 0 00:13:23.382 22:58:55 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:23.382 22:58:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.382 22:58:55 -- common/autotest_common.sh@10 -- # set +x 00:13:23.382 22:58:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.382 22:58:55 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:13:23.382 22:58:55 -- target/abort.sh@38 -- # nvmftestfini 00:13:23.382 22:58:55 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:23.382 22:58:55 -- nvmf/common.sh@116 -- # sync 00:13:23.382 22:58:55 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:23.382 22:58:55 -- nvmf/common.sh@119 -- # set +e 00:13:23.382 22:58:55 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:23.382 22:58:55 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:23.382 rmmod nvme_tcp 00:13:23.382 rmmod nvme_fabrics 00:13:23.382 rmmod nvme_keyring 00:13:23.382 22:58:55 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:23.382 22:58:55 -- nvmf/common.sh@123 -- # set -e 00:13:23.382 22:58:55 -- nvmf/common.sh@124 -- # return 0 00:13:23.382 22:58:55 -- nvmf/common.sh@477 -- # '[' -n 3134539 ']' 
00:13:23.382 22:58:55 -- nvmf/common.sh@478 -- # killprocess 3134539 00:13:23.382 22:58:55 -- common/autotest_common.sh@926 -- # '[' -z 3134539 ']' 00:13:23.382 22:58:55 -- common/autotest_common.sh@930 -- # kill -0 3134539 00:13:23.382 22:58:55 -- common/autotest_common.sh@931 -- # uname 00:13:23.382 22:58:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:23.382 22:58:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3134539 00:13:23.382 22:58:55 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:13:23.382 22:58:55 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:13:23.382 22:58:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3134539' 00:13:23.382 killing process with pid 3134539 00:13:23.382 22:58:55 -- common/autotest_common.sh@945 -- # kill 3134539 00:13:23.382 22:58:55 -- common/autotest_common.sh@950 -- # wait 3134539 00:13:23.382 22:58:55 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:23.382 22:58:55 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:23.382 22:58:55 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:23.382 22:58:55 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:23.382 22:58:55 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:23.382 22:58:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:23.382 22:58:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:23.382 22:58:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:25.289 22:58:57 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:25.289 00:13:25.289 real 0m12.414s 00:13:25.289 user 0m13.171s 00:13:25.289 sys 0m6.283s 00:13:25.289 22:58:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:25.289 22:58:57 -- common/autotest_common.sh@10 -- # set +x 00:13:25.289 ************************************ 00:13:25.289 END TEST nvmf_abort 00:13:25.289 ************************************ 
00:13:25.289 22:58:57 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:25.289 22:58:57 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:25.289 22:58:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:25.289 22:58:57 -- common/autotest_common.sh@10 -- # set +x 00:13:25.289 ************************************ 00:13:25.289 START TEST nvmf_ns_hotplug_stress 00:13:25.289 ************************************ 00:13:25.289 22:58:57 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:25.549 * Looking for test storage... 00:13:25.549 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:25.549 22:58:57 -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:25.549 22:58:57 -- nvmf/common.sh@7 -- # uname -s 00:13:25.549 22:58:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:25.549 22:58:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:25.549 22:58:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:25.549 22:58:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:25.549 22:58:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:25.549 22:58:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:25.549 22:58:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:25.549 22:58:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:25.549 22:58:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:25.549 22:58:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:25.549 22:58:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:13:25.549 22:58:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 
00:13:25.549 22:58:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:25.549 22:58:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:25.549 22:58:57 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:25.549 22:58:57 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:25.549 22:58:57 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:25.549 22:58:57 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:25.549 22:58:57 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:25.549 22:58:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.549 22:58:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.550 22:58:57 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.550 22:58:57 -- paths/export.sh@5 -- # export PATH 00:13:25.550 22:58:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.550 22:58:57 -- nvmf/common.sh@46 -- # : 0 00:13:25.550 22:58:57 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:25.550 22:58:57 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:25.550 22:58:57 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:25.550 22:58:57 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:25.550 22:58:57 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:25.550 22:58:57 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:25.550 22:58:57 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:25.550 22:58:57 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:25.550 22:58:57 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:25.550 22:58:57 -- target/ns_hotplug_stress.sh@22 -- # 
nvmftestinit 00:13:25.550 22:58:57 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:25.550 22:58:57 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:25.550 22:58:57 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:25.550 22:58:57 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:25.550 22:58:57 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:25.550 22:58:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:25.550 22:58:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:25.550 22:58:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:25.550 22:58:57 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:25.550 22:58:57 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:25.550 22:58:57 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:25.550 22:58:57 -- common/autotest_common.sh@10 -- # set +x 00:13:32.123 22:59:04 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:32.123 22:59:04 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:32.123 22:59:04 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:32.123 22:59:04 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:32.123 22:59:04 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:32.123 22:59:04 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:32.123 22:59:04 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:32.123 22:59:04 -- nvmf/common.sh@294 -- # net_devs=() 00:13:32.123 22:59:04 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:32.123 22:59:04 -- nvmf/common.sh@295 -- # e810=() 00:13:32.123 22:59:04 -- nvmf/common.sh@295 -- # local -ga e810 00:13:32.123 22:59:04 -- nvmf/common.sh@296 -- # x722=() 00:13:32.123 22:59:04 -- nvmf/common.sh@296 -- # local -ga x722 00:13:32.123 22:59:04 -- nvmf/common.sh@297 -- # mlx=() 00:13:32.123 22:59:04 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:32.123 22:59:04 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:32.123 22:59:04 -- 
nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:32.123 22:59:04 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:32.123 22:59:04 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:32.123 22:59:04 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:32.123 22:59:04 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:32.123 22:59:04 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:32.123 22:59:04 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:32.123 22:59:04 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:32.123 22:59:04 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:32.123 22:59:04 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:32.123 22:59:04 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:32.123 22:59:04 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:32.123 22:59:04 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:32.123 22:59:04 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:32.123 22:59:04 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:32.123 22:59:04 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:32.123 22:59:04 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:32.123 22:59:04 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:32.123 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:32.123 22:59:04 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:32.123 22:59:04 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:32.123 22:59:04 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:32.123 22:59:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:32.123 22:59:04 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:32.123 22:59:04 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:32.123 22:59:04 -- 
nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:32.123 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:32.123 22:59:04 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:32.123 22:59:04 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:32.123 22:59:04 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:32.123 22:59:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:32.123 22:59:04 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:32.123 22:59:04 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:32.123 22:59:04 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:32.123 22:59:04 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:32.123 22:59:04 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:32.123 22:59:04 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:32.123 22:59:04 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:32.123 22:59:04 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:32.123 22:59:04 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:32.123 Found net devices under 0000:af:00.0: cvl_0_0 00:13:32.123 22:59:04 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:32.123 22:59:04 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:32.123 22:59:04 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:32.123 22:59:04 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:32.123 22:59:04 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:32.123 22:59:04 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:32.123 Found net devices under 0000:af:00.1: cvl_0_1 00:13:32.123 22:59:04 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:32.123 22:59:04 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:32.123 22:59:04 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:32.123 22:59:04 -- nvmf/common.sh@404 -- # [[ yes == yes 
]] 00:13:32.123 22:59:04 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:32.123 22:59:04 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:32.123 22:59:04 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:32.123 22:59:04 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:32.123 22:59:04 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:32.123 22:59:04 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:32.123 22:59:04 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:32.123 22:59:04 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:32.123 22:59:04 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:32.123 22:59:04 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:32.123 22:59:04 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:32.123 22:59:04 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:32.123 22:59:04 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:32.123 22:59:04 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:32.123 22:59:04 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:32.382 22:59:04 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:32.382 22:59:04 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:32.383 22:59:04 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:32.383 22:59:04 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:32.383 22:59:04 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:32.383 22:59:04 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:32.383 22:59:04 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:32.383 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:32.383 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:13:32.383 00:13:32.383 --- 10.0.0.2 ping statistics --- 00:13:32.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:32.383 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:13:32.383 22:59:04 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:32.383 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:32.383 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.178 ms 00:13:32.383 00:13:32.383 --- 10.0.0.1 ping statistics --- 00:13:32.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:32.383 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:13:32.383 22:59:04 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:32.383 22:59:04 -- nvmf/common.sh@410 -- # return 0 00:13:32.383 22:59:04 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:32.383 22:59:04 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:32.383 22:59:04 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:32.383 22:59:04 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:32.383 22:59:04 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:32.383 22:59:04 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:32.383 22:59:04 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:32.383 22:59:04 -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:13:32.383 22:59:04 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:32.383 22:59:04 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:32.383 22:59:04 -- common/autotest_common.sh@10 -- # set +x 00:13:32.383 22:59:04 -- nvmf/common.sh@469 -- # nvmfpid=3138842 00:13:32.383 22:59:04 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:32.383 22:59:04 -- nvmf/common.sh@470 -- # waitforlisten 3138842 00:13:32.383 22:59:04 -- 
common/autotest_common.sh@819 -- # '[' -z 3138842 ']' 00:13:32.383 22:59:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:32.383 22:59:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:32.383 22:59:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:32.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:32.383 22:59:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:32.383 22:59:04 -- common/autotest_common.sh@10 -- # set +x 00:13:32.641 [2024-07-24 22:59:04.819682] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:13:32.641 [2024-07-24 22:59:04.819735] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:32.641 EAL: No free 2048 kB hugepages reported on node 1 00:13:32.641 [2024-07-24 22:59:04.898051] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:32.641 [2024-07-24 22:59:04.934006] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:32.641 [2024-07-24 22:59:04.934119] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:32.641 [2024-07-24 22:59:04.934129] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:32.641 [2024-07-24 22:59:04.934139] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
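At this point the harness has launched `nvmf_tgt` inside the `cvl_0_0_ns_spdk` namespace and blocks on `waitforlisten 3138842`, which polls until the target process is alive and its RPC socket `/var/tmp/spdk.sock` has appeared before any `rpc.py` calls are issued. A minimal sketch of that poll loop follows; the function name, arguments, and retry count here are illustrative, not the actual autotest helper:

```shell
#!/bin/sh
# Hypothetical sketch of the waitforlisten pattern: succeed once the target
# process is running AND its UNIX-domain RPC socket exists; fail if the
# process dies or max_retries polls elapse. Not the real SPDK helper.
wait_for_listen() {
    pid=$1; sock=$2; max_retries=${3:-100}
    i=0
    while [ "$i" -lt "$max_retries" ]; do
        # signal 0 only checks that the process still exists
        kill -0 "$pid" 2>/dev/null || return 1
        # -S tests that the path exists and is a socket
        [ -S "$sock" ] && return 0
        i=$((i + 1))
        sleep 0.1
    done
    return 1
}
```

The real helper additionally confirms the socket accepts RPC commands (e.g. by issuing a trivial request) rather than only checking that the socket file exists.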
00:13:32.641 [2024-07-24 22:59:04.934188] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:32.641 [2024-07-24 22:59:04.934270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:32.641 [2024-07-24 22:59:04.934271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:33.209 22:59:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:33.209 22:59:05 -- common/autotest_common.sh@852 -- # return 0 00:13:33.209 22:59:05 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:33.209 22:59:05 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:33.209 22:59:05 -- common/autotest_common.sh@10 -- # set +x 00:13:33.469 22:59:05 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:33.469 22:59:05 -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:13:33.469 22:59:05 -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:33.469 [2024-07-24 22:59:05.809269] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:33.469 22:59:05 -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:33.728 22:59:06 -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:33.728 [2024-07-24 22:59:06.146891] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:33.987 22:59:06 -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:33.987 22:59:06 -- target/ns_hotplug_stress.sh@32 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:13:34.247 Malloc0 00:13:34.247 22:59:06 -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:34.506 Delay0 00:13:34.506 22:59:06 -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:34.506 22:59:06 -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:13:34.765 NULL1 00:13:34.765 22:59:07 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:35.025 22:59:07 -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:13:35.025 22:59:07 -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3139355 00:13:35.025 22:59:07 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3139355 00:13:35.025 22:59:07 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:35.025 EAL: No free 2048 kB hugepages reported on node 1 00:13:36.403 Read completed with error (sct=0, sc=11) 00:13:36.403 22:59:08 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:36.403 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:36.403 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:36.403 Message suppressed 999 times: Read completed with 
error (sct=0, sc=11) 00:13:36.403 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:36.403 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:36.403 22:59:08 -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:13:36.403 22:59:08 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:13:36.403 true 00:13:36.403 22:59:08 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3139355 00:13:36.403 22:59:08 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:37.364 22:59:09 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:37.658 22:59:09 -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:13:37.658 22:59:09 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:13:37.658 true 00:13:37.658 22:59:09 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3139355 00:13:37.658 22:59:09 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:37.917 22:59:10 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:37.917 22:59:10 -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:13:37.917 22:59:10 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:13:38.175 true 00:13:38.175 22:59:10 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3139355 00:13:38.175 22:59:10 -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:38.444 22:59:10 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:38.444 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:38.444 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:38.444 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:38.444 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:38.444 [2024-07-24 22:59:10.798466] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.444 [2024-07-24 22:59:10.798565] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.444 [2024-07-24 22:59:10.798613] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.444 [2024-07-24 22:59:10.798662] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.444 [2024-07-24 22:59:10.798708] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.444 [2024-07-24 22:59:10.798760] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.444 [2024-07-24 22:59:10.798819] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.444 [2024-07-24 22:59:10.798867] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.444 [2024-07-24 22:59:10.798917] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.444 [2024-07-24 22:59:10.798965] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 
512 > SGL length 1 00:13:38.444 [2024-07-24 22:59:10.806993] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd:
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.445 [2024-07-24 22:59:10.807044] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.445 [2024-07-24 22:59:10.807091] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.446 [2024-07-24 22:59:10.807143] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.446 [2024-07-24 22:59:10.807197] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.446 [2024-07-24 22:59:10.807255] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.446 [2024-07-24 22:59:10.807301] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.446 [2024-07-24 22:59:10.807352] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.446 [2024-07-24 22:59:10.807399] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.446 [2024-07-24 22:59:10.807440] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.446 [2024-07-24 22:59:10.807486] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.446 [2024-07-24 22:59:10.807532] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.446 [2024-07-24 22:59:10.807572] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.446 [2024-07-24 22:59:10.807612] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.446 [2024-07-24 22:59:10.807655] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.446 
[2024-07-24 22:59:10.807699] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.446 [2024-07-24 22:59:10.807735] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.446 [2024-07-24 22:59:10.807783] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.446 [2024-07-24 22:59:10.807837] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.446 [2024-07-24 22:59:10.807882] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.446 [2024-07-24 22:59:10.807924] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.446 [2024-07-24 22:59:10.807977] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.446 [2024-07-24 22:59:10.808289] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.446 [2024-07-24 22:59:10.808326] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.446 [2024-07-24 22:59:10.808359] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.446 [2024-07-24 22:59:10.808391] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.446 [2024-07-24 22:59:10.808426] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.446 [2024-07-24 22:59:10.808458] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.446 [2024-07-24 22:59:10.808488] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.446 [2024-07-24 22:59:10.808519] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.446 [2024-07-24 22:59:10.808550] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.446 [2024-07-24 22:59:10.808581] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.446 [2024-07-24 22:59:10.808612] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.446 [2024-07-24 22:59:10.808644] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.446 [2024-07-24 22:59:10.808676] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.446 [2024-07-24 22:59:10.808708] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.446 [2024-07-24 22:59:10.808743] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.446 [2024-07-24 22:59:10.808773] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.446 [2024-07-24 22:59:10.808804] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.446 [2024-07-24 22:59:10.808839] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.446 [2024-07-24 22:59:10.808872] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.446 [2024-07-24 22:59:10.808902] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.446 [2024-07-24 22:59:10.808933] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.446 [2024-07-24 22:59:10.808963] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:38.446 [2024-07-24 22:59:10.808994] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.446 [2024-07-24 22:59:10.809025] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.446 [2024-07-24 22:59:10.809056] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.446 [2024-07-24 22:59:10.809085] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.446 [2024-07-24 22:59:10.809115] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.446 [2024-07-24 22:59:10.809147] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.446 [2024-07-24 22:59:10.809179] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.446 [2024-07-24 22:59:10.809220] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.446 [2024-07-24 22:59:10.809263] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.446 [2024-07-24 22:59:10.809304] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.446 [2024-07-24 22:59:10.809352] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.446 [2024-07-24 22:59:10.809392] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.446 [2024-07-24 22:59:10.809426] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.446 [2024-07-24 22:59:10.809473] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.446 [2024-07-24 22:59:10.809519] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.446 [2024-07-24 22:59:10.809575] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.446 [2024-07-24 22:59:10.809624] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.446 [2024-07-24 22:59:10.809676] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.446 [2024-07-24 22:59:10.809723] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.446 [2024-07-24 22:59:10.809776] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.446 [2024-07-24 22:59:10.809828] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.446 [2024-07-24 22:59:10.809881] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.446 [2024-07-24 22:59:10.809932] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.446 [2024-07-24 22:59:10.809977] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.446 [2024-07-24 22:59:10.810022] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.446 [2024-07-24 22:59:10.810065] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.446 [2024-07-24 22:59:10.810107] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.446 [2024-07-24 22:59:10.810141] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.446 [2024-07-24 22:59:10.810182] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:38.446 [2024-07-24 22:59:10.810223] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.446 [2024-07-24 22:59:10.810263] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.446 [2024-07-24 22:59:10.810308] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.446 [2024-07-24 22:59:10.810353] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.446 [2024-07-24 22:59:10.810401] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.446 [2024-07-24 22:59:10.810452] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.446 [2024-07-24 22:59:10.810501] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.446 [2024-07-24 22:59:10.810557] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.446 [2024-07-24 22:59:10.810611] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.446 [2024-07-24 22:59:10.810657] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.446 [2024-07-24 22:59:10.810705] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.446 [2024-07-24 22:59:10.810759] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.446 [2024-07-24 22:59:10.810812] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.446 [2024-07-24 22:59:10.811150] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.446 [2024-07-24 
22:59:10.811202] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.446 [2024-07-24 22:59:10.811251] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.446 [2024-07-24 22:59:10.811301] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.446 [2024-07-24 22:59:10.811350] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.446 [2024-07-24 22:59:10.811401] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.446 [2024-07-24 22:59:10.811452] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.446 [2024-07-24 22:59:10.811511] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.446 [2024-07-24 22:59:10.811566] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.446 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:13:38.446 [2024-07-24 22:59:10.811625] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.446 [2024-07-24 22:59:10.811673] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.446 [2024-07-24 22:59:10.811732] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.447 [2024-07-24 22:59:10.811793] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.447 [2024-07-24 22:59:10.811840] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.447 [2024-07-24 22:59:10.811890] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
00:13:38.447 [2024-07-24 22:59:10.811936] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.447 [2024-07-24 22:59:10.811984] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.447 [2024-07-24 22:59:10.812033] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.447 [2024-07-24 22:59:10.812086] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.447 [2024-07-24 22:59:10.812135] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.447 [2024-07-24 22:59:10.812185] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.447 [2024-07-24 22:59:10.812241] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.447 [2024-07-24 22:59:10.812288] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.447 [2024-07-24 22:59:10.812342] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.447 [2024-07-24 22:59:10.812399] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.447 [2024-07-24 22:59:10.812444] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.447 [2024-07-24 22:59:10.812491] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.447 [2024-07-24 22:59:10.812537] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.447 [2024-07-24 22:59:10.812592] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.447 [2024-07-24 22:59:10.812642] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.447 [2024-07-24 22:59:10.812683] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.447 [2024-07-24 22:59:10.812734] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.447 [2024-07-24 22:59:10.812780] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.447 [2024-07-24 22:59:10.812828] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.447 [2024-07-24 22:59:10.812869] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.447 [2024-07-24 22:59:10.812913] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.447 [2024-07-24 22:59:10.812954] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.447 [2024-07-24 22:59:10.812987] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.447 [2024-07-24 22:59:10.813036] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.447 [2024-07-24 22:59:10.813077] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.447 [2024-07-24 22:59:10.813121] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.447 [2024-07-24 22:59:10.813163] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.447 [2024-07-24 22:59:10.813212] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.447 [2024-07-24 22:59:10.813254] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:38.447 [2024-07-24 22:59:10.813302] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.447 [2024-07-24 22:59:10.813355] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.447 [2024-07-24 22:59:10.813393] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.447 [2024-07-24 22:59:10.813429] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.447 [2024-07-24 22:59:10.813474] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.447 [2024-07-24 22:59:10.813516] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.447 [2024-07-24 22:59:10.813562] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.447 [2024-07-24 22:59:10.813612] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.447 [2024-07-24 22:59:10.813654] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.447 [2024-07-24 22:59:10.813687] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.447 [2024-07-24 22:59:10.813724] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.447 [2024-07-24 22:59:10.813757] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.447 [2024-07-24 22:59:10.813809] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.447 [2024-07-24 22:59:10.813854] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.447 [2024-07-24 22:59:10.813904] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.447 [2024-07-24 22:59:10.813952] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.447 [2024-07-24 22:59:10.814005] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.447 [2024-07-24 22:59:10.814052] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.447 [2024-07-24 22:59:10.814103] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.447 [2024-07-24 22:59:10.814446] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.447 [2024-07-24 22:59:10.814505] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.447 [2024-07-24 22:59:10.814556] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.447 [2024-07-24 22:59:10.814589] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.447 [2024-07-24 22:59:10.814624] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.447 [2024-07-24 22:59:10.814664] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.447 [2024-07-24 22:59:10.814706] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.447 [2024-07-24 22:59:10.814753] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.447 [2024-07-24 22:59:10.814797] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.447 [2024-07-24 22:59:10.814834] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:38.447 [2024-07-24 22:59:10.814866] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.447 [2024-07-24 22:59:10.814900] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.447 [2024-07-24 22:59:10.814933] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.447 [2024-07-24 22:59:10.814965] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.447 [2024-07-24 22:59:10.815001] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.447 [2024-07-24 22:59:10.815040] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.447 [2024-07-24 22:59:10.815082] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.447 [2024-07-24 22:59:10.815125] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.447 [2024-07-24 22:59:10.815158] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.447 [2024-07-24 22:59:10.815188] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.447 [2024-07-24 22:59:10.815220] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.447 [2024-07-24 22:59:10.815251] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.447 [2024-07-24 22:59:10.815282] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.447 [2024-07-24 22:59:10.815312] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.447 [2024-07-24 
22:59:10.815345] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.447 [2024-07-24 22:59:10.815394] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.447 [2024-07-24 22:59:10.815443] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.447 [2024-07-24 22:59:10.815498] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.447 [2024-07-24 22:59:10.815550] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.447 [2024-07-24 22:59:10.815601] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.447 [2024-07-24 22:59:10.815652] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.447 [2024-07-24 22:59:10.815701] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.447 [2024-07-24 22:59:10.815757] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.447 [2024-07-24 22:59:10.815817] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.447 [2024-07-24 22:59:10.815868] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.447 [2024-07-24 22:59:10.815923] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.447 [2024-07-24 22:59:10.815975] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.447 [2024-07-24 22:59:10.816027] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.447 [2024-07-24 22:59:10.816079] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.447 [2024-07-24 22:59:10.816129] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.447
[... previous *ERROR* line repeated verbatim, timestamps 22:59:10.816180 through 22:59:10.825903 ...] 00:13:38.450
22:59:10 -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:13:38.450
[... previous *ERROR* line repeated verbatim, timestamps 22:59:10.825950 through 22:59:10.826250 ...] 00:13:38.450
22:59:10 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:13:38.450
[... previous *ERROR* line repeated verbatim, timestamps 22:59:10.826300 through 22:59:10.831300 ...] 00:13:38.451
[2024-07-24 22:59:10.831346] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.451 [2024-07-24 22:59:10.831396] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.451 [2024-07-24 22:59:10.831443] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.451 [2024-07-24 22:59:10.831506] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.451 [2024-07-24 22:59:10.831557] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.451 [2024-07-24 22:59:10.831606] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.451 [2024-07-24 22:59:10.831658] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.451 [2024-07-24 22:59:10.831704] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.451 [2024-07-24 22:59:10.831756] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.451 [2024-07-24 22:59:10.831808] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.451 [2024-07-24 22:59:10.831855] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.451 [2024-07-24 22:59:10.831905] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.451 [2024-07-24 22:59:10.831955] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.451 [2024-07-24 22:59:10.832004] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.451 [2024-07-24 22:59:10.832057] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.451 [2024-07-24 22:59:10.832110] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.451 [2024-07-24 22:59:10.832164] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.451 [2024-07-24 22:59:10.832212] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.451 [2024-07-24 22:59:10.832261] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.451 [2024-07-24 22:59:10.832315] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.451 [2024-07-24 22:59:10.832372] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.451 [2024-07-24 22:59:10.832420] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.451 [2024-07-24 22:59:10.832466] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.451 [2024-07-24 22:59:10.832516] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.451 [2024-07-24 22:59:10.832564] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.451 [2024-07-24 22:59:10.832612] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.451 [2024-07-24 22:59:10.832669] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.451 [2024-07-24 22:59:10.832727] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.451 [2024-07-24 22:59:10.832775] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:38.451 [2024-07-24 22:59:10.832823] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.451 [2024-07-24 22:59:10.832875] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.451 [2024-07-24 22:59:10.832920] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.451 [2024-07-24 22:59:10.832975] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.451 [2024-07-24 22:59:10.833027] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.451 [2024-07-24 22:59:10.833074] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.451 [2024-07-24 22:59:10.833128] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.451 [2024-07-24 22:59:10.833177] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.451 [2024-07-24 22:59:10.833225] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.451 [2024-07-24 22:59:10.833260] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.451 [2024-07-24 22:59:10.833300] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.451 [2024-07-24 22:59:10.833343] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.451 [2024-07-24 22:59:10.833657] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.451 [2024-07-24 22:59:10.833693] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.451 [2024-07-24 22:59:10.833730] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.451 [2024-07-24 22:59:10.833763] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.451 [2024-07-24 22:59:10.833796] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.451 [2024-07-24 22:59:10.833828] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.451 [2024-07-24 22:59:10.833872] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.451 [2024-07-24 22:59:10.833904] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.451 [2024-07-24 22:59:10.833934] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.451 [2024-07-24 22:59:10.833965] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.451 [2024-07-24 22:59:10.833996] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.451 [2024-07-24 22:59:10.834028] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.451 [2024-07-24 22:59:10.834059] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.451 [2024-07-24 22:59:10.834091] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.451 [2024-07-24 22:59:10.834122] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.451 [2024-07-24 22:59:10.834152] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.451 [2024-07-24 22:59:10.834184] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:38.451 [2024-07-24 22:59:10.834216] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.451 [2024-07-24 22:59:10.834246] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.451 [2024-07-24 22:59:10.834277] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.451 [2024-07-24 22:59:10.834307] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.451 [2024-07-24 22:59:10.834338] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.451 [2024-07-24 22:59:10.834368] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.451 [2024-07-24 22:59:10.834398] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.451 [2024-07-24 22:59:10.834429] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.451 [2024-07-24 22:59:10.834460] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.451 [2024-07-24 22:59:10.834491] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.451 [2024-07-24 22:59:10.834521] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.451 [2024-07-24 22:59:10.834563] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.451 [2024-07-24 22:59:10.834601] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.451 [2024-07-24 22:59:10.834647] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.451 [2024-07-24 
22:59:10.834690] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.451 [2024-07-24 22:59:10.834732] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.451 [2024-07-24 22:59:10.834765] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.452 [2024-07-24 22:59:10.834796] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.452 [2024-07-24 22:59:10.834826] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.452 [2024-07-24 22:59:10.834857] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.452 [2024-07-24 22:59:10.834887] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.452 [2024-07-24 22:59:10.834918] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.452 [2024-07-24 22:59:10.834948] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.452 [2024-07-24 22:59:10.834979] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.452 [2024-07-24 22:59:10.835010] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.452 [2024-07-24 22:59:10.835040] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.452 [2024-07-24 22:59:10.835071] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.452 [2024-07-24 22:59:10.835114] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.452 [2024-07-24 22:59:10.835157] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.452 [2024-07-24 22:59:10.835197] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.452 [2024-07-24 22:59:10.835239] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.452 [2024-07-24 22:59:10.835278] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.452 [2024-07-24 22:59:10.835331] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.452 [2024-07-24 22:59:10.835388] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.452 [2024-07-24 22:59:10.835440] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.452 [2024-07-24 22:59:10.835486] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.452 [2024-07-24 22:59:10.835539] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.452 [2024-07-24 22:59:10.835586] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.452 [2024-07-24 22:59:10.835643] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.452 [2024-07-24 22:59:10.835698] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.452 [2024-07-24 22:59:10.835753] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.452 [2024-07-24 22:59:10.835804] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.452 [2024-07-24 22:59:10.835857] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.452 
[2024-07-24 22:59:10.835902] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.452 [2024-07-24 22:59:10.835954] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.452 [2024-07-24 22:59:10.836006] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.452 [2024-07-24 22:59:10.836065] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.452 [2024-07-24 22:59:10.836392] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.452 [2024-07-24 22:59:10.836443] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.452 [2024-07-24 22:59:10.836494] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.452 [2024-07-24 22:59:10.836544] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.452 [2024-07-24 22:59:10.836593] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.452 [2024-07-24 22:59:10.836645] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.452 [2024-07-24 22:59:10.836693] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.452 [2024-07-24 22:59:10.836743] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.452 [2024-07-24 22:59:10.836796] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.452 [2024-07-24 22:59:10.836848] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.452 [2024-07-24 22:59:10.836907] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.452 [2024-07-24 22:59:10.836956] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.452 [2024-07-24 22:59:10.837005] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.452 [2024-07-24 22:59:10.837059] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.452 [2024-07-24 22:59:10.837108] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.452 [2024-07-24 22:59:10.837154] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.452 [2024-07-24 22:59:10.837198] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.452 [2024-07-24 22:59:10.837241] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.452 [2024-07-24 22:59:10.837286] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.452 [2024-07-24 22:59:10.837328] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.452 [2024-07-24 22:59:10.837378] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.452 [2024-07-24 22:59:10.837428] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.452 [2024-07-24 22:59:10.837469] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.452 [2024-07-24 22:59:10.837504] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.452 [2024-07-24 22:59:10.837541] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:38.452 [2024-07-24 22:59:10.837590] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.452 [2024-07-24 22:59:10.837637] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.452 [2024-07-24 22:59:10.837679] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.452 [2024-07-24 22:59:10.837725] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.452 [2024-07-24 22:59:10.837773] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.452 [2024-07-24 22:59:10.837818] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.452 [2024-07-24 22:59:10.837863] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.452 [2024-07-24 22:59:10.837904] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.452 [2024-07-24 22:59:10.837937] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.452 [2024-07-24 22:59:10.837977] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.452 [2024-07-24 22:59:10.838010] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.452 [2024-07-24 22:59:10.838043] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.452 [2024-07-24 22:59:10.838086] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.452 [2024-07-24 22:59:10.838134] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.452 [2024-07-24 22:59:10.838183] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.452 [2024-07-24 22:59:10.838231] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.452 [2024-07-24 22:59:10.838286] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.452 [2024-07-24 22:59:10.838334] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.452 [2024-07-24 22:59:10.838382] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.452 [2024-07-24 22:59:10.838431] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.452 [2024-07-24 22:59:10.838478] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.452 [2024-07-24 22:59:10.838526] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.452 [2024-07-24 22:59:10.838579] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.452 [2024-07-24 22:59:10.838632] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.452 [2024-07-24 22:59:10.838690] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.452 [2024-07-24 22:59:10.838742] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.452 [2024-07-24 22:59:10.838794] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.452 [2024-07-24 22:59:10.838850] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.452 [2024-07-24 22:59:10.838901] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:38.452 [2024-07-24 22:59:10.838952] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.452 [2024-07-24 22:59:10.838999] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.452 [2024-07-24 22:59:10.839049] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.452 [2024-07-24 22:59:10.839098] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.452 [2024-07-24 22:59:10.839148] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.452 [2024-07-24 22:59:10.839198] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.452 [2024-07-24 22:59:10.839245] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.452 [2024-07-24 22:59:10.839302] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.452 [2024-07-24 22:59:10.839351] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.452 [2024-07-24 22:59:10.839689] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.452 [2024-07-24 22:59:10.839740] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.452 [2024-07-24 22:59:10.839787] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.452 [2024-07-24 22:59:10.839829] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.453 [2024-07-24 22:59:10.839861] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.453 [2024-07-24 
22:59:10.839896] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.453 [2024-07-24 22:59:10.839943] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.453 [2024-07-24 22:59:10.839993] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.453 [2024-07-24 22:59:10.840036] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.453 [2024-07-24 22:59:10.840081] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.453 [2024-07-24 22:59:10.840122] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.453 [2024-07-24 22:59:10.840162] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.453 [2024-07-24 22:59:10.840214] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.453 [2024-07-24 22:59:10.840251] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.453 [2024-07-24 22:59:10.840283] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.453 [2024-07-24 22:59:10.840333] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.453 [2024-07-24 22:59:10.840386] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.453 [2024-07-24 22:59:10.840431] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.453 [2024-07-24 22:59:10.840465] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.453 [2024-07-24 22:59:10.840499] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.453 [2024-07-24 22:59:10.840549] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.453 [2024-07-24 22:59:10.855703] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd:
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.456 [2024-07-24 22:59:10.855752] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.456 [2024-07-24 22:59:10.855794] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.456 [2024-07-24 22:59:10.855838] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.456 [2024-07-24 22:59:10.855883] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.456 [2024-07-24 22:59:10.855941] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.456 [2024-07-24 22:59:10.855974] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.456 [2024-07-24 22:59:10.856019] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.456 [2024-07-24 22:59:10.856070] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.456 [2024-07-24 22:59:10.856124] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.456 [2024-07-24 22:59:10.856173] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.456 [2024-07-24 22:59:10.856217] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.456 [2024-07-24 22:59:10.856266] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.456 [2024-07-24 22:59:10.856323] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.456 [2024-07-24 22:59:10.856373] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.456 
[2024-07-24 22:59:10.856420] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.456 [2024-07-24 22:59:10.856471] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.456 [2024-07-24 22:59:10.856520] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.456 [2024-07-24 22:59:10.856570] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.456 [2024-07-24 22:59:10.856624] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.456 [2024-07-24 22:59:10.856676] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.456 [2024-07-24 22:59:10.856738] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.456 [2024-07-24 22:59:10.856793] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.456 [2024-07-24 22:59:10.856845] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.456 [2024-07-24 22:59:10.856891] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.456 [2024-07-24 22:59:10.856940] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.456 [2024-07-24 22:59:10.856989] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.456 [2024-07-24 22:59:10.857044] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.456 [2024-07-24 22:59:10.857099] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.456 [2024-07-24 22:59:10.857151] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.456 [2024-07-24 22:59:10.857200] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.456 [2024-07-24 22:59:10.857252] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.456 [2024-07-24 22:59:10.857305] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.456 [2024-07-24 22:59:10.857354] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.456 [2024-07-24 22:59:10.857406] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.456 [2024-07-24 22:59:10.857459] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.456 [2024-07-24 22:59:10.857511] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.456 [2024-07-24 22:59:10.857562] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.456 [2024-07-24 22:59:10.857612] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.456 [2024-07-24 22:59:10.857660] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.456 [2024-07-24 22:59:10.857712] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.456 [2024-07-24 22:59:10.857767] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.456 [2024-07-24 22:59:10.857820] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.456 [2024-07-24 22:59:10.857873] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:38.456 [2024-07-24 22:59:10.857922] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.456 [2024-07-24 22:59:10.857972] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.456 [2024-07-24 22:59:10.858022] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.456 [2024-07-24 22:59:10.858071] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.456 [2024-07-24 22:59:10.858115] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.457 [2024-07-24 22:59:10.858156] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.457 [2024-07-24 22:59:10.858203] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.457 [2024-07-24 22:59:10.858244] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.457 [2024-07-24 22:59:10.858288] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.457 [2024-07-24 22:59:10.858329] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.457 [2024-07-24 22:59:10.858374] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.457 [2024-07-24 22:59:10.858411] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.457 [2024-07-24 22:59:10.858444] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.457 [2024-07-24 22:59:10.858493] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.457 [2024-07-24 22:59:10.858535] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.457 [2024-07-24 22:59:10.858580] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.457 [2024-07-24 22:59:10.858628] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.457 [2024-07-24 22:59:10.858940] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.457 [2024-07-24 22:59:10.858979] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.457 [2024-07-24 22:59:10.859013] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.457 [2024-07-24 22:59:10.859046] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.457 [2024-07-24 22:59:10.859079] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.457 [2024-07-24 22:59:10.859112] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.457 [2024-07-24 22:59:10.859144] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.457 [2024-07-24 22:59:10.859175] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.457 [2024-07-24 22:59:10.859207] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.457 [2024-07-24 22:59:10.859239] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.457 [2024-07-24 22:59:10.859270] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.457 [2024-07-24 22:59:10.859301] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:38.457 [2024-07-24 22:59:10.859334] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.457 [2024-07-24 22:59:10.859366] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.457 [2024-07-24 22:59:10.859398] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.457 [2024-07-24 22:59:10.859430] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.457 [2024-07-24 22:59:10.859461] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.457 [2024-07-24 22:59:10.859493] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.457 [2024-07-24 22:59:10.859523] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.457 [2024-07-24 22:59:10.859555] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.457 [2024-07-24 22:59:10.859585] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.457 [2024-07-24 22:59:10.859616] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.457 [2024-07-24 22:59:10.859649] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.457 [2024-07-24 22:59:10.859680] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.457 [2024-07-24 22:59:10.859710] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.457 [2024-07-24 22:59:10.859749] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.457 [2024-07-24 
22:59:10.859781] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.457 [2024-07-24 22:59:10.859812] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.457 [2024-07-24 22:59:10.859843] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.457 [2024-07-24 22:59:10.859875] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.457 [2024-07-24 22:59:10.859906] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.457 [2024-07-24 22:59:10.859938] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.457 [2024-07-24 22:59:10.859969] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.457 [2024-07-24 22:59:10.860000] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.457 [2024-07-24 22:59:10.860032] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.457 [2024-07-24 22:59:10.860064] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.457 [2024-07-24 22:59:10.860094] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.457 [2024-07-24 22:59:10.860129] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.457 [2024-07-24 22:59:10.860173] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.457 [2024-07-24 22:59:10.860211] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.457 [2024-07-24 22:59:10.860252] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.457 [2024-07-24 22:59:10.860292] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.457 [2024-07-24 22:59:10.860332] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.457 [2024-07-24 22:59:10.860371] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.457 [2024-07-24 22:59:10.860418] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.457 [2024-07-24 22:59:10.860462] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.457 [2024-07-24 22:59:10.860520] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.457 [2024-07-24 22:59:10.860570] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.457 [2024-07-24 22:59:10.860618] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.457 [2024-07-24 22:59:10.860670] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.457 [2024-07-24 22:59:10.860723] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.457 [2024-07-24 22:59:10.860774] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.457 [2024-07-24 22:59:10.860823] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.457 [2024-07-24 22:59:10.860872] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.457 [2024-07-24 22:59:10.860920] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.457 
[2024-07-24 22:59:10.860974] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.457 [2024-07-24 22:59:10.861029] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.457 [2024-07-24 22:59:10.861077] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.457 [2024-07-24 22:59:10.861130] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.457 [2024-07-24 22:59:10.861179] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.457 [2024-07-24 22:59:10.861230] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.457 [2024-07-24 22:59:10.861279] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.457 [2024-07-24 22:59:10.861329] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.457 [2024-07-24 22:59:10.861658] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.457 [2024-07-24 22:59:10.861693] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.457 [2024-07-24 22:59:10.861734] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.457 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:13:38.457 [2024-07-24 22:59:10.861781] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.457 [2024-07-24 22:59:10.861821] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.457 [2024-07-24 22:59:10.861862] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:13:38.457 [2024-07-24 22:59:10.861907] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.457 [2024-07-24 22:59:10.861959] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.457 [2024-07-24 22:59:10.862013] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.457 [2024-07-24 22:59:10.862060] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.457 [2024-07-24 22:59:10.862107] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.457 [2024-07-24 22:59:10.862155] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.457 [2024-07-24 22:59:10.862210] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.457 [2024-07-24 22:59:10.862256] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.457 [2024-07-24 22:59:10.862305] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.458 [2024-07-24 22:59:10.862353] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.458 [2024-07-24 22:59:10.862406] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.458 [2024-07-24 22:59:10.862459] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.458 [2024-07-24 22:59:10.862511] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.458 [2024-07-24 22:59:10.862561] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.458 [2024-07-24 22:59:10.862614] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.458 [2024-07-24 22:59:10.862667] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.458 [2024-07-24 22:59:10.862725] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.458 [2024-07-24 22:59:10.862781] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.458 [2024-07-24 22:59:10.862831] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.458 [2024-07-24 22:59:10.862882] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.458 [2024-07-24 22:59:10.862933] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.458 [2024-07-24 22:59:10.862982] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.458 [2024-07-24 22:59:10.863032] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.458 [2024-07-24 22:59:10.863083] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.458 [2024-07-24 22:59:10.863129] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.458 [2024-07-24 22:59:10.863176] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.458 [2024-07-24 22:59:10.863226] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.458 [2024-07-24 22:59:10.863275] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.458 [2024-07-24 22:59:10.863325] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:38.458 [2024-07-24 22:59:10.863378] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.458 [2024-07-24 22:59:10.863429] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.458 [2024-07-24 22:59:10.863482] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.458 [2024-07-24 22:59:10.863530] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.458 [2024-07-24 22:59:10.863579] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.458 [2024-07-24 22:59:10.863625] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.458 [2024-07-24 22:59:10.863678] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.458 [2024-07-24 22:59:10.863732] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.458 [2024-07-24 22:59:10.863782] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.458 [2024-07-24 22:59:10.863832] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.458 [2024-07-24 22:59:10.863882] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.458 [2024-07-24 22:59:10.863932] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.458 [2024-07-24 22:59:10.863979] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.458 [2024-07-24 22:59:10.864029] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.458 [2024-07-24 22:59:10.864080] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.458 [2024-07-24 22:59:10.864132] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.458 [2024-07-24 22:59:10.864180] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.458 [2024-07-24 22:59:10.864233] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.458 [2024-07-24 22:59:10.864287] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.458 [2024-07-24 22:59:10.864339] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.458 [2024-07-24 22:59:10.864388] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.458 [2024-07-24 22:59:10.864434] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.458 [2024-07-24 22:59:10.864476] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.458 [2024-07-24 22:59:10.864525] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.458 [2024-07-24 22:59:10.864573] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.458 [2024-07-24 22:59:10.864620] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.458 [2024-07-24 22:59:10.864662] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.458 [2024-07-24 22:59:10.864703] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.458 [2024-07-24 22:59:10.864750] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:38.458 [2024-07-24 22:59:10.865085] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 [... identical error line repeated ...] 00:13:38.749 [2024-07-24 22:59:10.879530] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:38.749 [2024-07-24 22:59:10.879579] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.749 [2024-07-24 22:59:10.879625] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.749 [2024-07-24 22:59:10.879677] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.749 [2024-07-24 22:59:10.879731] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.749 [2024-07-24 22:59:10.879783] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.749 [2024-07-24 22:59:10.879833] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.749 [2024-07-24 22:59:10.879886] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.749 [2024-07-24 22:59:10.879937] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.749 [2024-07-24 22:59:10.879981] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.749 [2024-07-24 22:59:10.880032] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.749 [2024-07-24 22:59:10.880084] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.749 [2024-07-24 22:59:10.880131] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.749 [2024-07-24 22:59:10.880181] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.749 [2024-07-24 22:59:10.880235] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.749 [2024-07-24 
22:59:10.880285] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.749 [2024-07-24 22:59:10.880395] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.749 [2024-07-24 22:59:10.880445] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.749 [2024-07-24 22:59:10.880498] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.749 [2024-07-24 22:59:10.880545] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.749 [2024-07-24 22:59:10.880589] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.749 [2024-07-24 22:59:10.880637] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.749 [2024-07-24 22:59:10.880689] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.749 [2024-07-24 22:59:10.880743] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.749 [2024-07-24 22:59:10.880797] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.749 [2024-07-24 22:59:10.880846] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.749 [2024-07-24 22:59:10.880894] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.749 [2024-07-24 22:59:10.880945] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.749 [2024-07-24 22:59:10.880996] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.749 [2024-07-24 22:59:10.881045] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.749 [2024-07-24 22:59:10.881099] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.749 [2024-07-24 22:59:10.881153] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.749 [2024-07-24 22:59:10.881507] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.749 [2024-07-24 22:59:10.881556] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.749 [2024-07-24 22:59:10.881600] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.749 [2024-07-24 22:59:10.881641] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.749 [2024-07-24 22:59:10.881679] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.749 [2024-07-24 22:59:10.881711] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.749 [2024-07-24 22:59:10.881758] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.749 [2024-07-24 22:59:10.881799] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.750 [2024-07-24 22:59:10.881843] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.750 [2024-07-24 22:59:10.881887] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.750 [2024-07-24 22:59:10.881927] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.750 [2024-07-24 22:59:10.881969] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.750 
[2024-07-24 22:59:10.882010] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.750 [2024-07-24 22:59:10.882057] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.750 [2024-07-24 22:59:10.882095] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.750 [2024-07-24 22:59:10.882140] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.750 [2024-07-24 22:59:10.882182] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.750 [2024-07-24 22:59:10.882229] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.750 [2024-07-24 22:59:10.882274] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.750 [2024-07-24 22:59:10.882316] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.750 [2024-07-24 22:59:10.882350] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.750 [2024-07-24 22:59:10.882391] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.750 [2024-07-24 22:59:10.882444] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.750 [2024-07-24 22:59:10.882492] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.750 [2024-07-24 22:59:10.882544] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.750 [2024-07-24 22:59:10.882595] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.750 [2024-07-24 22:59:10.882645] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.750 [2024-07-24 22:59:10.882696] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.750 [2024-07-24 22:59:10.882755] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.750 [2024-07-24 22:59:10.882806] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.750 [2024-07-24 22:59:10.882856] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.750 [2024-07-24 22:59:10.882909] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.750 [2024-07-24 22:59:10.882962] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.750 [2024-07-24 22:59:10.883017] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.750 [2024-07-24 22:59:10.883078] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.750 [2024-07-24 22:59:10.883135] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.750 [2024-07-24 22:59:10.883190] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.750 [2024-07-24 22:59:10.883239] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.750 [2024-07-24 22:59:10.883291] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.750 [2024-07-24 22:59:10.883342] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.750 [2024-07-24 22:59:10.883391] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:38.750 [2024-07-24 22:59:10.883442] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.750 [2024-07-24 22:59:10.883495] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.750 [2024-07-24 22:59:10.883545] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.750 [2024-07-24 22:59:10.883602] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.750 [2024-07-24 22:59:10.883652] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.750 [2024-07-24 22:59:10.883701] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.750 [2024-07-24 22:59:10.883757] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.750 [2024-07-24 22:59:10.883803] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.750 [2024-07-24 22:59:10.883852] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.750 [2024-07-24 22:59:10.883908] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.750 [2024-07-24 22:59:10.883963] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.750 [2024-07-24 22:59:10.884012] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.750 [2024-07-24 22:59:10.884057] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.750 [2024-07-24 22:59:10.884104] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.750 [2024-07-24 22:59:10.884148] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.750 [2024-07-24 22:59:10.884183] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.750 [2024-07-24 22:59:10.884217] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.750 [2024-07-24 22:59:10.884258] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.750 [2024-07-24 22:59:10.884302] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.750 [2024-07-24 22:59:10.884343] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.750 [2024-07-24 22:59:10.884397] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.750 [2024-07-24 22:59:10.884440] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.750 [2024-07-24 22:59:10.884479] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.750 [2024-07-24 22:59:10.884578] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.750 [2024-07-24 22:59:10.884624] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.750 [2024-07-24 22:59:10.884668] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.750 [2024-07-24 22:59:10.884710] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.750 [2024-07-24 22:59:10.884753] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.750 [2024-07-24 22:59:10.884788] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:38.750 [2024-07-24 22:59:10.884820] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.750 [2024-07-24 22:59:10.884866] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.750 [2024-07-24 22:59:10.884908] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.750 [2024-07-24 22:59:10.884941] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.750 [2024-07-24 22:59:10.884988] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.750 [2024-07-24 22:59:10.885037] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.750 [2024-07-24 22:59:10.885099] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.750 [2024-07-24 22:59:10.885151] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.750 [2024-07-24 22:59:10.885198] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.750 [2024-07-24 22:59:10.885247] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.750 [2024-07-24 22:59:10.885296] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.750 [2024-07-24 22:59:10.885349] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.750 [2024-07-24 22:59:10.885394] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.750 [2024-07-24 22:59:10.885448] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.750 [2024-07-24 
22:59:10.885496] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.750 [2024-07-24 22:59:10.885546] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.750 [2024-07-24 22:59:10.885596] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.750 [2024-07-24 22:59:10.885647] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.750 [2024-07-24 22:59:10.885700] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.750 [2024-07-24 22:59:10.885757] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.750 [2024-07-24 22:59:10.885813] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.750 [2024-07-24 22:59:10.885873] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.750 [2024-07-24 22:59:10.885920] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.750 [2024-07-24 22:59:10.885969] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.750 [2024-07-24 22:59:10.886003] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.750 [2024-07-24 22:59:10.886035] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.750 [2024-07-24 22:59:10.886080] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.750 [2024-07-24 22:59:10.886124] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.750 [2024-07-24 22:59:10.886173] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.750 [2024-07-24 22:59:10.886213] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.750 [2024-07-24 22:59:10.886258] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.750 [2024-07-24 22:59:10.886299] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.750 [2024-07-24 22:59:10.886342] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.750 [2024-07-24 22:59:10.886375] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.750 [2024-07-24 22:59:10.886408] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.751 [2024-07-24 22:59:10.886441] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.751 [2024-07-24 22:59:10.886481] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.751 [2024-07-24 22:59:10.886520] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.751 [2024-07-24 22:59:10.886561] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.751 [2024-07-24 22:59:10.886602] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.751 [2024-07-24 22:59:10.886646] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.751 [2024-07-24 22:59:10.886703] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.751 [2024-07-24 22:59:10.886753] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.751 
[2024-07-24 22:59:10.886808] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.751 [2024-07-24 22:59:10.886860] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.751 [2024-07-24 22:59:10.886907] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.751 [2024-07-24 22:59:10.886961] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.751 [2024-07-24 22:59:10.887009] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.751 [2024-07-24 22:59:10.887058] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.751 [2024-07-24 22:59:10.887113] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.751 [2024-07-24 22:59:10.887160] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.751 [2024-07-24 22:59:10.887213] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.751 [2024-07-24 22:59:10.887262] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.751 [2024-07-24 22:59:10.887318] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.751 [2024-07-24 22:59:10.887370] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.751 [2024-07-24 22:59:10.887430] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.751 [2024-07-24 22:59:10.887481] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.751 [2024-07-24 22:59:10.887802] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.751 [2024-07-24 22:59:10.887849] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.751 [2024-07-24 22:59:10.887899] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.751 [2024-07-24 22:59:10.887943] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.751 [2024-07-24 22:59:10.887985] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.751 [2024-07-24 22:59:10.888018] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.751 [2024-07-24 22:59:10.888050] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.751 [2024-07-24 22:59:10.888089] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.751 [2024-07-24 22:59:10.888132] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.751 [2024-07-24 22:59:10.888174] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.751 [2024-07-24 22:59:10.888215] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.751 [2024-07-24 22:59:10.888247] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.751 [2024-07-24 22:59:10.888306] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.751 [2024-07-24 22:59:10.888354] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.751 [2024-07-24 22:59:10.888401] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:38.751 [2024-07-24 22:59:10.888448] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.751 [2024-07-24 22:59:10.888498] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.751 [2024-07-24 22:59:10.888553] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.751 [2024-07-24 22:59:10.888607] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.751 [2024-07-24 22:59:10.888662] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.751 [2024-07-24 22:59:10.888708] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.751 [2024-07-24 22:59:10.888754] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.751 [2024-07-24 22:59:10.888795] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.751 [2024-07-24 22:59:10.888840] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.751 [2024-07-24 22:59:10.888872] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.751 [2024-07-24 22:59:10.888905] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.751 [2024-07-24 22:59:10.888949] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.751 [2024-07-24 22:59:10.889003] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.751 [2024-07-24 22:59:10.889051] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.751 [2024-07-24 22:59:10.889098] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.751 [2024-07-24 22:59:10.889148] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[last message repeated verbatim several hundred times, timestamps 22:59:10.889197 through 22:59:10.903856; repeats elided]
> SGL length 1 00:13:38.754 [2024-07-24 22:59:10.904189] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.754 [2024-07-24 22:59:10.904238] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.754 [2024-07-24 22:59:10.904289] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.754 [2024-07-24 22:59:10.904342] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.754 [2024-07-24 22:59:10.904393] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.754 [2024-07-24 22:59:10.904447] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.754 [2024-07-24 22:59:10.904493] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.754 [2024-07-24 22:59:10.904539] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.754 [2024-07-24 22:59:10.904583] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.754 [2024-07-24 22:59:10.904617] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.754 [2024-07-24 22:59:10.904648] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.754 [2024-07-24 22:59:10.904687] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.754 [2024-07-24 22:59:10.904738] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.754 [2024-07-24 22:59:10.904784] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.755 [2024-07-24 22:59:10.904824] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.755 [2024-07-24 22:59:10.904873] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.755 [2024-07-24 22:59:10.904914] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.755 [2024-07-24 22:59:10.904960] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.755 [2024-07-24 22:59:10.904992] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.755 [2024-07-24 22:59:10.905023] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.755 [2024-07-24 22:59:10.905055] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.755 [2024-07-24 22:59:10.905086] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.755 [2024-07-24 22:59:10.905129] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.755 [2024-07-24 22:59:10.905171] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.755 [2024-07-24 22:59:10.905210] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.755 [2024-07-24 22:59:10.905255] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.755 [2024-07-24 22:59:10.905307] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.755 [2024-07-24 22:59:10.905354] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.755 [2024-07-24 22:59:10.905406] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:38.755 [2024-07-24 22:59:10.905458] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.755 [2024-07-24 22:59:10.905514] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.755 [2024-07-24 22:59:10.905567] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.755 [2024-07-24 22:59:10.905619] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.755 [2024-07-24 22:59:10.905666] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.755 [2024-07-24 22:59:10.905711] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.755 [2024-07-24 22:59:10.905764] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.755 [2024-07-24 22:59:10.905814] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.755 [2024-07-24 22:59:10.905868] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.755 [2024-07-24 22:59:10.905922] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.755 [2024-07-24 22:59:10.905970] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.755 [2024-07-24 22:59:10.906020] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.755 [2024-07-24 22:59:10.906068] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.755 [2024-07-24 22:59:10.906121] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.755 [2024-07-24 
22:59:10.906172] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.755 [2024-07-24 22:59:10.906223] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.755 [2024-07-24 22:59:10.906281] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.755 [2024-07-24 22:59:10.906334] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.755 [2024-07-24 22:59:10.906385] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.755 [2024-07-24 22:59:10.906435] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.755 [2024-07-24 22:59:10.906468] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.755 [2024-07-24 22:59:10.906498] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.755 [2024-07-24 22:59:10.906541] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.755 [2024-07-24 22:59:10.906584] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.755 [2024-07-24 22:59:10.906626] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.755 [2024-07-24 22:59:10.906671] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.755 [2024-07-24 22:59:10.906720] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.755 [2024-07-24 22:59:10.906764] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.755 [2024-07-24 22:59:10.906812] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.755 [2024-07-24 22:59:10.906854] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.755 [2024-07-24 22:59:10.906887] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.755 [2024-07-24 22:59:10.906930] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.755 [2024-07-24 22:59:10.906972] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.755 [2024-07-24 22:59:10.907005] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.755 [2024-07-24 22:59:10.907370] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.755 [2024-07-24 22:59:10.907430] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.755 [2024-07-24 22:59:10.907478] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.755 [2024-07-24 22:59:10.907529] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.755 [2024-07-24 22:59:10.907582] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.755 [2024-07-24 22:59:10.907630] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.755 [2024-07-24 22:59:10.907685] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.755 [2024-07-24 22:59:10.907744] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.755 [2024-07-24 22:59:10.907792] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.755 
[2024-07-24 22:59:10.907838] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.755 [2024-07-24 22:59:10.907889] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.755 [2024-07-24 22:59:10.907939] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.755 [2024-07-24 22:59:10.907988] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.755 [2024-07-24 22:59:10.908039] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.755 [2024-07-24 22:59:10.908090] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.755 [2024-07-24 22:59:10.908141] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.755 [2024-07-24 22:59:10.908190] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.755 [2024-07-24 22:59:10.908237] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.755 [2024-07-24 22:59:10.908291] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.755 [2024-07-24 22:59:10.908343] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.755 [2024-07-24 22:59:10.908391] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.755 [2024-07-24 22:59:10.908442] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.755 [2024-07-24 22:59:10.908492] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.755 [2024-07-24 22:59:10.908545] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.755 [2024-07-24 22:59:10.908597] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.755 [2024-07-24 22:59:10.908651] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.755 [2024-07-24 22:59:10.908705] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.755 [2024-07-24 22:59:10.908759] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.755 [2024-07-24 22:59:10.908809] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.755 [2024-07-24 22:59:10.908853] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.755 [2024-07-24 22:59:10.908902] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.755 [2024-07-24 22:59:10.908946] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.755 [2024-07-24 22:59:10.908986] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.756 [2024-07-24 22:59:10.909028] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.756 [2024-07-24 22:59:10.909059] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.756 [2024-07-24 22:59:10.909102] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.756 [2024-07-24 22:59:10.909147] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.756 [2024-07-24 22:59:10.909193] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:38.756 [2024-07-24 22:59:10.909237] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.756 [2024-07-24 22:59:10.909282] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.756 [2024-07-24 22:59:10.909324] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.756 [2024-07-24 22:59:10.909365] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.756 [2024-07-24 22:59:10.909411] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.756 [2024-07-24 22:59:10.909444] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.756 [2024-07-24 22:59:10.909482] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.756 [2024-07-24 22:59:10.909524] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.756 [2024-07-24 22:59:10.909570] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.756 [2024-07-24 22:59:10.909602] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.756 [2024-07-24 22:59:10.909634] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.756 [2024-07-24 22:59:10.909676] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.756 [2024-07-24 22:59:10.909720] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.756 [2024-07-24 22:59:10.909753] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.756 [2024-07-24 22:59:10.909797] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.756 [2024-07-24 22:59:10.909844] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.756 [2024-07-24 22:59:10.909892] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.756 [2024-07-24 22:59:10.909941] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.756 [2024-07-24 22:59:10.909996] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.756 [2024-07-24 22:59:10.910051] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.756 [2024-07-24 22:59:10.910096] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.756 [2024-07-24 22:59:10.910146] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.756 [2024-07-24 22:59:10.910196] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.756 [2024-07-24 22:59:10.910247] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.756 [2024-07-24 22:59:10.910298] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.756 [2024-07-24 22:59:10.910349] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.756 [2024-07-24 22:59:10.910671] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.756 [2024-07-24 22:59:10.910705] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.756 [2024-07-24 22:59:10.910753] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:38.756 [2024-07-24 22:59:10.910797] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.756 [2024-07-24 22:59:10.910839] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.756 [2024-07-24 22:59:10.910879] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.756 [2024-07-24 22:59:10.910922] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.756 [2024-07-24 22:59:10.910965] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.756 [2024-07-24 22:59:10.911010] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.756 [2024-07-24 22:59:10.911042] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.756 [2024-07-24 22:59:10.911077] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.756 [2024-07-24 22:59:10.911108] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.756 [2024-07-24 22:59:10.911153] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.756 [2024-07-24 22:59:10.911195] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.756 [2024-07-24 22:59:10.911237] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.756 [2024-07-24 22:59:10.911278] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.756 [2024-07-24 22:59:10.911308] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.756 [2024-07-24 
22:59:10.911338] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.756 [2024-07-24 22:59:10.911368] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.756 [2024-07-24 22:59:10.911399] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.756 [2024-07-24 22:59:10.911430] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.756 [2024-07-24 22:59:10.911460] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.756 [2024-07-24 22:59:10.911490] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.756 [2024-07-24 22:59:10.911521] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.756 [2024-07-24 22:59:10.911552] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.756 [2024-07-24 22:59:10.911582] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.756 [2024-07-24 22:59:10.911611] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.756 [2024-07-24 22:59:10.911643] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.756 [2024-07-24 22:59:10.911675] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.756 [2024-07-24 22:59:10.911706] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.756 [2024-07-24 22:59:10.911760] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.756 [2024-07-24 22:59:10.911810] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.756 [2024-07-24 22:59:10.911862] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.756 [2024-07-24 22:59:10.911909] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.756 [2024-07-24 22:59:10.911954] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.756 [2024-07-24 22:59:10.912000] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.756 [2024-07-24 22:59:10.912051] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.756 [2024-07-24 22:59:10.912108] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.756 [2024-07-24 22:59:10.912169] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.756 [2024-07-24 22:59:10.912216] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.756 [2024-07-24 22:59:10.912267] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.756 [2024-07-24 22:59:10.912318] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.756 [2024-07-24 22:59:10.912366] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.756 [2024-07-24 22:59:10.912417] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.756 [2024-07-24 22:59:10.912472] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.756 [2024-07-24 22:59:10.912517] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.756 
[2024-07-24 22:59:10.912571] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.756 [2024-07-24 22:59:10.912619] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.756 [2024-07-24 22:59:10.912671] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.756 [2024-07-24 22:59:10.912726] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.756 [2024-07-24 22:59:10.912779] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.756 [2024-07-24 22:59:10.912829] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.757 [2024-07-24 22:59:10.912878] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.757 [2024-07-24 22:59:10.912926] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.757 [2024-07-24 22:59:10.912974] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.757 [2024-07-24 22:59:10.913019] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.757 [2024-07-24 22:59:10.913072] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.757 [2024-07-24 22:59:10.913121] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.757 [2024-07-24 22:59:10.913164] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.757 [2024-07-24 22:59:10.913206] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.757 [2024-07-24 22:59:10.913247] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.757 [2024-07-24 22:59:10.913290] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.757 [2024-07-24 22:59:10.913333] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.757 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:13:38.757 [2024-07-24 22:59:10.913644] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.757 [2024-07-24 22:59:10.913688] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.757 [2024-07-24 22:59:10.913735] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.757 [2024-07-24 22:59:10.913774] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.757 [2024-07-24 22:59:10.913815] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.757 [2024-07-24 22:59:10.913858] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.757 [2024-07-24 22:59:10.913898] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.757 [2024-07-24 22:59:10.913939] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.757 [2024-07-24 22:59:10.913975] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.757 [2024-07-24 22:59:10.914006] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.757 [2024-07-24 22:59:10.914056] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.757 [2024-07-24 
22:59:10.914109] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.757 [... identical *ERROR* line repeated for every read command; duplicates with timestamps 22:59:10.914160 through 22:59:10.929345 omitted ...] 00:13:38.760 [2024-07-24
22:59:10.929385] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.760 [2024-07-24 22:59:10.929418] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.760 [2024-07-24 22:59:10.929456] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.760 [2024-07-24 22:59:10.929506] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.760 [2024-07-24 22:59:10.929843] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.760 [2024-07-24 22:59:10.929898] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.760 [2024-07-24 22:59:10.929947] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.760 [2024-07-24 22:59:10.929996] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.760 [2024-07-24 22:59:10.930048] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.760 [2024-07-24 22:59:10.930098] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.760 [2024-07-24 22:59:10.930160] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.760 [2024-07-24 22:59:10.930215] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.760 [2024-07-24 22:59:10.930263] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.760 [2024-07-24 22:59:10.930312] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.760 [2024-07-24 22:59:10.930366] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.760 [2024-07-24 22:59:10.930417] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.760 [2024-07-24 22:59:10.930462] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.760 [2024-07-24 22:59:10.930510] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.760 [2024-07-24 22:59:10.930554] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.760 [2024-07-24 22:59:10.930609] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.760 [2024-07-24 22:59:10.930665] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.760 [2024-07-24 22:59:10.930722] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.760 [2024-07-24 22:59:10.930766] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.760 [2024-07-24 22:59:10.930818] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.760 [2024-07-24 22:59:10.930859] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.760 [2024-07-24 22:59:10.930906] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.760 [2024-07-24 22:59:10.930948] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.760 [2024-07-24 22:59:10.930980] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.760 [2024-07-24 22:59:10.931017] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.760 
[2024-07-24 22:59:10.931061] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.760 [2024-07-24 22:59:10.931104] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.760 [2024-07-24 22:59:10.931150] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.760 [2024-07-24 22:59:10.931190] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.760 [2024-07-24 22:59:10.931236] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.760 [2024-07-24 22:59:10.931274] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.760 [2024-07-24 22:59:10.931308] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.760 [2024-07-24 22:59:10.931349] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.760 [2024-07-24 22:59:10.931386] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.760 [2024-07-24 22:59:10.931427] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.760 [2024-07-24 22:59:10.931466] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.760 [2024-07-24 22:59:10.931508] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.760 [2024-07-24 22:59:10.931546] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.760 [2024-07-24 22:59:10.931587] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.760 [2024-07-24 22:59:10.931637] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.760 [2024-07-24 22:59:10.931687] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.760 [2024-07-24 22:59:10.931743] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.760 [2024-07-24 22:59:10.931800] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.760 [2024-07-24 22:59:10.931851] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.761 [2024-07-24 22:59:10.931900] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.761 [2024-07-24 22:59:10.931958] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.761 [2024-07-24 22:59:10.932007] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.761 [2024-07-24 22:59:10.932059] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.761 [2024-07-24 22:59:10.932111] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.761 [2024-07-24 22:59:10.932163] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.761 [2024-07-24 22:59:10.932215] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.761 [2024-07-24 22:59:10.932266] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.761 [2024-07-24 22:59:10.932319] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.761 [2024-07-24 22:59:10.932377] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:38.761 [2024-07-24 22:59:10.932428] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.761 [2024-07-24 22:59:10.932479] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.761 [2024-07-24 22:59:10.932536] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.761 [2024-07-24 22:59:10.932588] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.761 [2024-07-24 22:59:10.932639] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.761 [2024-07-24 22:59:10.932691] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.761 [2024-07-24 22:59:10.932742] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.761 [2024-07-24 22:59:10.932794] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.761 [2024-07-24 22:59:10.932847] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.761 [2024-07-24 22:59:10.933198] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.761 [2024-07-24 22:59:10.933255] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.761 [2024-07-24 22:59:10.933304] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.761 [2024-07-24 22:59:10.933353] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.761 [2024-07-24 22:59:10.933402] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.761 [2024-07-24 22:59:10.933455] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.761 [2024-07-24 22:59:10.933507] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.761 [2024-07-24 22:59:10.933557] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.761 [2024-07-24 22:59:10.933606] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.761 [2024-07-24 22:59:10.933655] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.761 [2024-07-24 22:59:10.933702] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.761 [2024-07-24 22:59:10.933763] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.761 [2024-07-24 22:59:10.933806] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.761 [2024-07-24 22:59:10.933849] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.761 [2024-07-24 22:59:10.933883] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.761 [2024-07-24 22:59:10.933914] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.761 [2024-07-24 22:59:10.933955] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.761 [2024-07-24 22:59:10.933996] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.761 [2024-07-24 22:59:10.934036] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.761 [2024-07-24 22:59:10.934076] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:38.761 [2024-07-24 22:59:10.934118] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.761 [2024-07-24 22:59:10.934161] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.761 [2024-07-24 22:59:10.934216] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.761 [2024-07-24 22:59:10.934258] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.761 [2024-07-24 22:59:10.934292] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.761 [2024-07-24 22:59:10.934336] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.761 [2024-07-24 22:59:10.934377] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.761 [2024-07-24 22:59:10.934418] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.761 [2024-07-24 22:59:10.934467] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.761 [2024-07-24 22:59:10.934512] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.761 [2024-07-24 22:59:10.934555] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.761 [2024-07-24 22:59:10.934599] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.761 [2024-07-24 22:59:10.934632] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.761 [2024-07-24 22:59:10.934666] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.761 [2024-07-24 
22:59:10.934711] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.761 [2024-07-24 22:59:10.934754] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.761 [2024-07-24 22:59:10.934787] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.761 [2024-07-24 22:59:10.934821] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.761 [2024-07-24 22:59:10.934871] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.761 [2024-07-24 22:59:10.934922] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.761 [2024-07-24 22:59:10.934973] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.761 [2024-07-24 22:59:10.935021] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.761 [2024-07-24 22:59:10.935074] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.761 [2024-07-24 22:59:10.935127] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.761 [2024-07-24 22:59:10.935179] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.761 [2024-07-24 22:59:10.935230] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.761 [2024-07-24 22:59:10.935280] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.761 [2024-07-24 22:59:10.935330] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.761 [2024-07-24 22:59:10.935383] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.761 [2024-07-24 22:59:10.935429] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.761 [2024-07-24 22:59:10.935481] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.761 [2024-07-24 22:59:10.935522] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.761 [2024-07-24 22:59:10.935556] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.761 [2024-07-24 22:59:10.935607] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.761 [2024-07-24 22:59:10.935655] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.761 [2024-07-24 22:59:10.935699] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.761 [2024-07-24 22:59:10.935745] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.761 [2024-07-24 22:59:10.935786] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.761 [2024-07-24 22:59:10.935826] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.761 [2024-07-24 22:59:10.935860] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.761 [2024-07-24 22:59:10.935890] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.761 [2024-07-24 22:59:10.935923] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.761 [2024-07-24 22:59:10.935970] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.761 
[2024-07-24 22:59:10.936005] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.761 [2024-07-24 22:59:10.936360] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.761 [2024-07-24 22:59:10.936416] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.761 [2024-07-24 22:59:10.936464] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.761 [2024-07-24 22:59:10.936514] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.761 [2024-07-24 22:59:10.936561] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.761 [2024-07-24 22:59:10.936613] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.761 [2024-07-24 22:59:10.936664] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.761 [2024-07-24 22:59:10.936718] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.761 [2024-07-24 22:59:10.936774] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.761 [2024-07-24 22:59:10.936822] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.761 [2024-07-24 22:59:10.936874] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.761 [2024-07-24 22:59:10.936923] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.761 [2024-07-24 22:59:10.936970] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.761 [2024-07-24 22:59:10.937026] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.761 [2024-07-24 22:59:10.937083] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.762 [2024-07-24 22:59:10.937131] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.762 [2024-07-24 22:59:10.937187] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.762 [2024-07-24 22:59:10.937235] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.762 [2024-07-24 22:59:10.937285] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.762 [2024-07-24 22:59:10.937335] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.762 [2024-07-24 22:59:10.937390] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.762 [2024-07-24 22:59:10.937440] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.762 [2024-07-24 22:59:10.937487] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.762 [2024-07-24 22:59:10.937540] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.762 [2024-07-24 22:59:10.937592] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.762 [2024-07-24 22:59:10.937639] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.762 [2024-07-24 22:59:10.937691] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.762 [2024-07-24 22:59:10.937742] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:38.762 [2024-07-24 22:59:10.937796] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.762 [2024-07-24 22:59:10.937852] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.762 [2024-07-24 22:59:10.937907] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.762 [2024-07-24 22:59:10.937958] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.762 [2024-07-24 22:59:10.938005] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.762 [2024-07-24 22:59:10.938054] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.762 [2024-07-24 22:59:10.938100] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.762 [2024-07-24 22:59:10.938145] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.762 [2024-07-24 22:59:10.938189] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.762 [2024-07-24 22:59:10.938233] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.762 [2024-07-24 22:59:10.938281] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.762 [2024-07-24 22:59:10.938327] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.762 [2024-07-24 22:59:10.938360] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.762 [2024-07-24 22:59:10.938398] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.762 [2024-07-24 22:59:10.938438] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.762 [2024-07-24 22:59:10.938482] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.762 [2024-07-24 22:59:10.938528] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.762 [2024-07-24 22:59:10.938570] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.762 [2024-07-24 22:59:10.938613] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.762 [2024-07-24 22:59:10.938660] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.762 [2024-07-24 22:59:10.938706] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.762 [2024-07-24 22:59:10.938744] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.762 [2024-07-24 22:59:10.938785] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.762 [2024-07-24 22:59:10.938818] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.762 [2024-07-24 22:59:10.938858] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.762 [2024-07-24 22:59:10.938901] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.762 [2024-07-24 22:59:10.938934] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.762 [2024-07-24 22:59:10.938967] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.762 [2024-07-24 22:59:10.939024] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:38.762 [2024-07-24 22:59:10.939076] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 ... [message repeated from 00:13:38.762 (22:59:10.939130) through 00:13:38.765 (22:59:10.954250); duplicate lines removed] ... 00:13:38.765 [2024-07-24 22:59:10.954299] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:38.765 [2024-07-24 22:59:10.954352] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.765 [2024-07-24 22:59:10.954403] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.765 [2024-07-24 22:59:10.954451] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.765 [2024-07-24 22:59:10.954497] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.765 [2024-07-24 22:59:10.954543] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.765 [2024-07-24 22:59:10.954585] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.765 [2024-07-24 22:59:10.954635] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.765 [2024-07-24 22:59:10.954677] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.765 [2024-07-24 22:59:10.954728] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.765 [2024-07-24 22:59:10.954772] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.765 [2024-07-24 22:59:10.954816] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.765 [2024-07-24 22:59:10.954859] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.765 [2024-07-24 22:59:10.954898] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.765 [2024-07-24 22:59:10.954929] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.765 [2024-07-24 
22:59:10.954969] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.765 [2024-07-24 22:59:10.955009] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.765 [2024-07-24 22:59:10.955050] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.765 [2024-07-24 22:59:10.955081] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.765 [2024-07-24 22:59:10.955113] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.765 [2024-07-24 22:59:10.955152] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.765 [2024-07-24 22:59:10.955196] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.765 [2024-07-24 22:59:10.955516] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.765 [2024-07-24 22:59:10.955567] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.765 [2024-07-24 22:59:10.955610] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.765 [2024-07-24 22:59:10.955662] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.765 [2024-07-24 22:59:10.955712] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.765 [2024-07-24 22:59:10.955769] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.765 [2024-07-24 22:59:10.955815] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.765 [2024-07-24 22:59:10.955863] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.766 [2024-07-24 22:59:10.955918] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.766 [2024-07-24 22:59:10.955969] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.766 [2024-07-24 22:59:10.956015] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.766 [2024-07-24 22:59:10.956067] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.766 [2024-07-24 22:59:10.956119] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.766 [2024-07-24 22:59:10.956164] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.766 [2024-07-24 22:59:10.956214] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.766 [2024-07-24 22:59:10.956262] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.766 [2024-07-24 22:59:10.956309] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.766 [2024-07-24 22:59:10.956360] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.766 [2024-07-24 22:59:10.956411] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.766 [2024-07-24 22:59:10.956461] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.766 [2024-07-24 22:59:10.956509] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.766 [2024-07-24 22:59:10.956552] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.766 
[2024-07-24 22:59:10.956593] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.766 [2024-07-24 22:59:10.956632] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.766 [2024-07-24 22:59:10.956665] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.766 [2024-07-24 22:59:10.956696] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.766 [2024-07-24 22:59:10.956742] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.766 [2024-07-24 22:59:10.956791] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.766 [2024-07-24 22:59:10.956832] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.766 [2024-07-24 22:59:10.956874] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.766 [2024-07-24 22:59:10.956917] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.766 [2024-07-24 22:59:10.956961] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.766 [2024-07-24 22:59:10.957002] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.766 [2024-07-24 22:59:10.957047] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.766 [2024-07-24 22:59:10.957081] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.766 [2024-07-24 22:59:10.957121] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.766 [2024-07-24 22:59:10.957155] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.766 [2024-07-24 22:59:10.957187] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.766 [2024-07-24 22:59:10.957224] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.766 [2024-07-24 22:59:10.957268] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.766 [2024-07-24 22:59:10.957310] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.766 [2024-07-24 22:59:10.957343] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.766 [2024-07-24 22:59:10.957389] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.766 [2024-07-24 22:59:10.957441] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.766 [2024-07-24 22:59:10.957487] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.766 [2024-07-24 22:59:10.957534] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.766 [2024-07-24 22:59:10.957581] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.766 [2024-07-24 22:59:10.957635] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.766 [2024-07-24 22:59:10.957681] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.766 [2024-07-24 22:59:10.957738] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.766 [2024-07-24 22:59:10.957790] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:38.766 [2024-07-24 22:59:10.957839] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.766 [2024-07-24 22:59:10.957888] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.766 [2024-07-24 22:59:10.957934] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.766 [2024-07-24 22:59:10.957983] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.766 [2024-07-24 22:59:10.958031] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.766 [2024-07-24 22:59:10.958082] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.766 [2024-07-24 22:59:10.958133] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.766 [2024-07-24 22:59:10.958185] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.766 [2024-07-24 22:59:10.958230] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.766 [2024-07-24 22:59:10.958282] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.766 [2024-07-24 22:59:10.958335] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.766 [2024-07-24 22:59:10.958384] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.766 [2024-07-24 22:59:10.958721] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.766 [2024-07-24 22:59:10.958765] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.766 [2024-07-24 22:59:10.958809] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.766 [2024-07-24 22:59:10.958850] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.766 [2024-07-24 22:59:10.958894] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.766 [2024-07-24 22:59:10.958948] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.766 [2024-07-24 22:59:10.958986] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.766 [2024-07-24 22:59:10.959018] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.766 [2024-07-24 22:59:10.959065] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.766 [2024-07-24 22:59:10.959098] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.766 [2024-07-24 22:59:10.959129] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.766 [2024-07-24 22:59:10.959169] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.766 [2024-07-24 22:59:10.959214] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.766 [2024-07-24 22:59:10.959257] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.766 [2024-07-24 22:59:10.959294] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.766 [2024-07-24 22:59:10.959325] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.766 [2024-07-24 22:59:10.959354] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:38.766 [2024-07-24 22:59:10.959383] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.766 [2024-07-24 22:59:10.959414] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.766 [2024-07-24 22:59:10.959444] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.766 [2024-07-24 22:59:10.959473] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.766 [2024-07-24 22:59:10.959503] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.766 [2024-07-24 22:59:10.959532] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.766 [2024-07-24 22:59:10.959578] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.766 [2024-07-24 22:59:10.959623] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.766 [2024-07-24 22:59:10.959683] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.766 [2024-07-24 22:59:10.959737] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.766 [2024-07-24 22:59:10.959790] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.766 [2024-07-24 22:59:10.959837] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.766 [2024-07-24 22:59:10.959884] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.766 [2024-07-24 22:59:10.959935] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.766 [2024-07-24 
22:59:10.959984] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.766 [2024-07-24 22:59:10.960035] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.766 [2024-07-24 22:59:10.960082] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.766 [2024-07-24 22:59:10.960131] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.766 [2024-07-24 22:59:10.960181] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.766 [2024-07-24 22:59:10.960234] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.766 [2024-07-24 22:59:10.960282] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.766 [2024-07-24 22:59:10.960334] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.766 [2024-07-24 22:59:10.960381] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.766 [2024-07-24 22:59:10.960431] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.766 [2024-07-24 22:59:10.960478] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.767 [2024-07-24 22:59:10.960523] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.767 [2024-07-24 22:59:10.960572] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.767 [2024-07-24 22:59:10.960622] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.767 [2024-07-24 22:59:10.960665] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.767 [2024-07-24 22:59:10.960706] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.767 [2024-07-24 22:59:10.960755] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.767 [2024-07-24 22:59:10.960797] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.767 [2024-07-24 22:59:10.960840] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.767 [2024-07-24 22:59:10.960872] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.767 [2024-07-24 22:59:10.960920] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.767 [2024-07-24 22:59:10.960955] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.767 [2024-07-24 22:59:10.960986] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.767 [2024-07-24 22:59:10.961025] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.767 [2024-07-24 22:59:10.961066] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.767 [2024-07-24 22:59:10.961106] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.767 [2024-07-24 22:59:10.961150] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.767 [2024-07-24 22:59:10.961201] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.767 [2024-07-24 22:59:10.961250] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.767 
[2024-07-24 22:59:10.961309] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.767 [2024-07-24 22:59:10.961363] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.767 [2024-07-24 22:59:10.961418] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.767 [2024-07-24 22:59:10.961469] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.767 [2024-07-24 22:59:10.961809] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.767 [2024-07-24 22:59:10.961867] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.767 [2024-07-24 22:59:10.961916] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.767 [2024-07-24 22:59:10.961965] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.767 [2024-07-24 22:59:10.962015] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.767 [2024-07-24 22:59:10.962066] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.767 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:13:38.767 [2024-07-24 22:59:10.962116] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.767 [2024-07-24 22:59:10.962167] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.767 [2024-07-24 22:59:10.962219] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.767 [2024-07-24 22:59:10.962269] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:13:38.767 [2024-07-24 22:59:10.962317] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.767 [2024-07-24 22:59:10.962369] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.767 [2024-07-24 22:59:10.962422] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.767 [2024-07-24 22:59:10.962474] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.767 [2024-07-24 22:59:10.962526] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.767 [2024-07-24 22:59:10.962583] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.767 [2024-07-24 22:59:10.962636] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.767 [2024-07-24 22:59:10.962692] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.767 [2024-07-24 22:59:10.962746] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.767 [2024-07-24 22:59:10.962789] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.767 [2024-07-24 22:59:10.962835] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.767 [2024-07-24 22:59:10.962880] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.767 [2024-07-24 22:59:10.962915] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.767 [2024-07-24 22:59:10.962952] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.767 [2024-07-24 22:59:10.962992] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.767 [2024-07-24 22:59:10.963034] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.767 [2024-07-24 22:59:10.963075] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.767 [2024-07-24 22:59:10.963121] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.767 [2024-07-24 22:59:10.963168] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.767 [2024-07-24 22:59:10.963210] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.767 [2024-07-24 22:59:10.963250] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.767 [2024-07-24 22:59:10.963284] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.767 [2024-07-24 22:59:10.963320] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.767 [2024-07-24 22:59:10.963369] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.767 [2024-07-24 22:59:10.963415] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.767 [2024-07-24 22:59:10.963455] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.767 [2024-07-24 22:59:10.963487] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.767 [2024-07-24 22:59:10.963518] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.767 [2024-07-24 22:59:10.963562] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:38.767 [2024-07-24 22:59:10.963609] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 [... identical error line repeated through 2024-07-24 22:59:10.979291; duplicate log lines elided ...]
> SGL length 1 00:13:38.770 [2024-07-24 22:59:10.979328] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.770 [2024-07-24 22:59:10.979360] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.770 [2024-07-24 22:59:10.979395] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.770 [2024-07-24 22:59:10.979428] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.770 [2024-07-24 22:59:10.979459] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.770 [2024-07-24 22:59:10.979512] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.770 [2024-07-24 22:59:10.979559] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.770 [2024-07-24 22:59:10.979608] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.770 [2024-07-24 22:59:10.979657] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.770 [2024-07-24 22:59:10.979705] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.771 [2024-07-24 22:59:10.979761] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.771 [2024-07-24 22:59:10.979810] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.771 [2024-07-24 22:59:10.979857] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.771 [2024-07-24 22:59:10.979908] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.771 [2024-07-24 22:59:10.979965] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.771 [2024-07-24 22:59:10.980014] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.771 [2024-07-24 22:59:10.980065] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.771 [2024-07-24 22:59:10.980118] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.771 [2024-07-24 22:59:10.980168] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.771 [2024-07-24 22:59:10.980217] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.771 [2024-07-24 22:59:10.980268] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.771 [2024-07-24 22:59:10.980316] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.771 [2024-07-24 22:59:10.980367] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.771 [2024-07-24 22:59:10.980415] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.771 [2024-07-24 22:59:10.980464] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.771 [2024-07-24 22:59:10.980518] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.771 [2024-07-24 22:59:10.980565] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.771 [2024-07-24 22:59:10.980609] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.771 [2024-07-24 22:59:10.980653] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:38.771 [2024-07-24 22:59:10.980705] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.771 [2024-07-24 22:59:10.980754] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.771 [2024-07-24 22:59:10.980787] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.771 [2024-07-24 22:59:10.980830] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.771 [2024-07-24 22:59:10.980877] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.771 [2024-07-24 22:59:10.980926] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.771 [2024-07-24 22:59:10.980959] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.771 [2024-07-24 22:59:10.980994] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.771 [2024-07-24 22:59:10.981036] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.771 [2024-07-24 22:59:10.981077] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.771 [2024-07-24 22:59:10.981421] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.771 [2024-07-24 22:59:10.981476] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.771 [2024-07-24 22:59:10.981522] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.771 [2024-07-24 22:59:10.981571] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.771 [2024-07-24 
22:59:10.981621] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.771 [2024-07-24 22:59:10.981672] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.771 [2024-07-24 22:59:10.981723] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.771 [2024-07-24 22:59:10.981772] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.771 [2024-07-24 22:59:10.981824] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.771 [2024-07-24 22:59:10.981873] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.771 [2024-07-24 22:59:10.981922] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.771 [2024-07-24 22:59:10.981975] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.771 [2024-07-24 22:59:10.982021] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.771 [2024-07-24 22:59:10.982069] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.771 [2024-07-24 22:59:10.982122] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.771 [2024-07-24 22:59:10.982172] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.771 [2024-07-24 22:59:10.982223] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.771 [2024-07-24 22:59:10.982276] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.771 [2024-07-24 22:59:10.982320] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.771 [2024-07-24 22:59:10.982368] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.771 [2024-07-24 22:59:10.982416] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.771 [2024-07-24 22:59:10.982466] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.771 [2024-07-24 22:59:10.982516] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.771 [2024-07-24 22:59:10.982565] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.771 [2024-07-24 22:59:10.982612] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.771 [2024-07-24 22:59:10.982656] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.771 [2024-07-24 22:59:10.982701] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.771 [2024-07-24 22:59:10.982749] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.771 [2024-07-24 22:59:10.982794] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.771 [2024-07-24 22:59:10.982829] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.771 [2024-07-24 22:59:10.982865] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.771 [2024-07-24 22:59:10.982903] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.771 [2024-07-24 22:59:10.982946] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.771 
[2024-07-24 22:59:10.982995] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.771 [2024-07-24 22:59:10.983039] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.771 [2024-07-24 22:59:10.983084] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.771 [2024-07-24 22:59:10.983128] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.771 [2024-07-24 22:59:10.983176] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.771 [2024-07-24 22:59:10.983219] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.771 [2024-07-24 22:59:10.983251] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.771 [2024-07-24 22:59:10.983288] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.771 [2024-07-24 22:59:10.983330] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.771 [2024-07-24 22:59:10.983363] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.771 [2024-07-24 22:59:10.983400] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.771 [2024-07-24 22:59:10.983442] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.771 [2024-07-24 22:59:10.983475] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.771 [2024-07-24 22:59:10.983523] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.771 [2024-07-24 22:59:10.983574] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.771 [2024-07-24 22:59:10.983624] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.771 [2024-07-24 22:59:10.983671] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.771 [2024-07-24 22:59:10.983727] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.771 [2024-07-24 22:59:10.983778] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.771 [2024-07-24 22:59:10.983828] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.771 [2024-07-24 22:59:10.983878] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.771 [2024-07-24 22:59:10.983929] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.771 [2024-07-24 22:59:10.983978] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.771 [2024-07-24 22:59:10.984029] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.771 [2024-07-24 22:59:10.984080] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.771 [2024-07-24 22:59:10.984131] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.771 [2024-07-24 22:59:10.984176] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.771 [2024-07-24 22:59:10.984227] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.771 [2024-07-24 22:59:10.984277] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:38.771 [2024-07-24 22:59:10.984330] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.771 [2024-07-24 22:59:10.984696] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.771 [2024-07-24 22:59:10.984747] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.771 [2024-07-24 22:59:10.984791] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.771 [2024-07-24 22:59:10.984834] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.771 [2024-07-24 22:59:10.984875] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.771 [2024-07-24 22:59:10.984918] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.772 [2024-07-24 22:59:10.984958] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.772 [2024-07-24 22:59:10.984991] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.772 [2024-07-24 22:59:10.985030] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.772 [2024-07-24 22:59:10.985063] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.772 [2024-07-24 22:59:10.985097] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.772 [2024-07-24 22:59:10.985131] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.772 [2024-07-24 22:59:10.985163] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.772 [2024-07-24 22:59:10.985193] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.772 [2024-07-24 22:59:10.985229] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.772 [2024-07-24 22:59:10.985271] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.772 [2024-07-24 22:59:10.985311] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.772 [2024-07-24 22:59:10.985352] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.772 [2024-07-24 22:59:10.985386] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.772 [2024-07-24 22:59:10.985418] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.772 [2024-07-24 22:59:10.985449] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.772 [2024-07-24 22:59:10.985482] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.772 [2024-07-24 22:59:10.985517] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.772 [2024-07-24 22:59:10.985549] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.772 [2024-07-24 22:59:10.985580] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.772 [2024-07-24 22:59:10.985610] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.772 [2024-07-24 22:59:10.985641] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.772 [2024-07-24 22:59:10.985672] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:38.772 [2024-07-24 22:59:10.985702] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.772 [2024-07-24 22:59:10.985755] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.772 [2024-07-24 22:59:10.985806] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.772 [2024-07-24 22:59:10.985852] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.772 [2024-07-24 22:59:10.985904] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.772 [2024-07-24 22:59:10.985953] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.772 [2024-07-24 22:59:10.986006] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.772 [2024-07-24 22:59:10.986054] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.772 [2024-07-24 22:59:10.986104] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.772 [2024-07-24 22:59:10.986154] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.772 [2024-07-24 22:59:10.986207] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.772 [2024-07-24 22:59:10.986257] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.772 [2024-07-24 22:59:10.986306] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.772 [2024-07-24 22:59:10.986353] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.772 [2024-07-24 
22:59:10.986403] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.772 [2024-07-24 22:59:10.986452] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.772 [2024-07-24 22:59:10.986508] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.772 [2024-07-24 22:59:10.986563] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.772 [2024-07-24 22:59:10.986612] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.772 [2024-07-24 22:59:10.986665] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.772 [2024-07-24 22:59:10.986712] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.772 [2024-07-24 22:59:10.986766] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.772 [2024-07-24 22:59:10.986815] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.772 [2024-07-24 22:59:10.986870] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.772 [2024-07-24 22:59:10.986918] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.772 [2024-07-24 22:59:10.986970] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.772 [2024-07-24 22:59:10.987026] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.772 [2024-07-24 22:59:10.987075] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.772 [2024-07-24 22:59:10.987122] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.772 [2024-07-24 22:59:10.987168] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.772 [2024-07-24 22:59:10.987211] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.772 [2024-07-24 22:59:10.987257] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.772 [2024-07-24 22:59:10.987303] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.772 [2024-07-24 22:59:10.987346] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.772 [2024-07-24 22:59:10.987389] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.772 [2024-07-24 22:59:10.987431] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.772 [2024-07-24 22:59:10.987810] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.772 [2024-07-24 22:59:10.987846] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.772 [2024-07-24 22:59:10.987889] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.772 [2024-07-24 22:59:10.987929] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.772 [2024-07-24 22:59:10.987972] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.772 [2024-07-24 22:59:10.988015] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.772 [2024-07-24 22:59:10.988053] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.772 
[2024-07-24 22:59:10.988086] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.772 [2024-07-24 22:59:10.988126] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.772 [2024-07-24 22:59:10.988177] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.772 [2024-07-24 22:59:10.988226] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.772 [2024-07-24 22:59:10.988278] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.772 [2024-07-24 22:59:10.988328] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.772 [2024-07-24 22:59:10.988377] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.772 [2024-07-24 22:59:10.988427] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.772 [2024-07-24 22:59:10.988475] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.772 [2024-07-24 22:59:10.988528] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.772 [2024-07-24 22:59:10.988575] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.772 [2024-07-24 22:59:10.988626] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.772 [2024-07-24 22:59:10.988673] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.772 [2024-07-24 22:59:10.988723] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.772 [2024-07-24 22:59:10.988774] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.772 [2024-07-24 22:59:10.988826] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.773 true 00:13:38.775
> SGL length 1 00:13:38.775 [2024-07-24 22:59:11.003805] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.775 [2024-07-24 22:59:11.003852] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.775 [2024-07-24 22:59:11.003885] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.775 [2024-07-24 22:59:11.003916] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.776 [2024-07-24 22:59:11.003949] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.776 [2024-07-24 22:59:11.003993] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.776 [2024-07-24 22:59:11.004027] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.776 [2024-07-24 22:59:11.004378] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.776 [2024-07-24 22:59:11.004432] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.776 [2024-07-24 22:59:11.004475] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.776 [2024-07-24 22:59:11.004509] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.776 [2024-07-24 22:59:11.004557] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.776 [2024-07-24 22:59:11.004602] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.776 [2024-07-24 22:59:11.004652] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.776 [2024-07-24 22:59:11.004692] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.776 [2024-07-24 22:59:11.004730] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.776 [2024-07-24 22:59:11.004766] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.776 [2024-07-24 22:59:11.004799] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.776 [2024-07-24 22:59:11.004831] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.776 [2024-07-24 22:59:11.004862] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.776 [2024-07-24 22:59:11.004893] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.776 [2024-07-24 22:59:11.004924] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.776 [2024-07-24 22:59:11.004954] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.776 [2024-07-24 22:59:11.004994] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.776 [2024-07-24 22:59:11.005037] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.776 [2024-07-24 22:59:11.005078] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.776 [2024-07-24 22:59:11.005121] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.776 [2024-07-24 22:59:11.005164] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.776 [2024-07-24 22:59:11.005196] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:38.776 [2024-07-24 22:59:11.005226] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.776 [2024-07-24 22:59:11.005258] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.776 [2024-07-24 22:59:11.005288] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.776 [2024-07-24 22:59:11.005320] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.776 [2024-07-24 22:59:11.005350] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.776 [2024-07-24 22:59:11.005385] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.776 [2024-07-24 22:59:11.005425] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.776 [2024-07-24 22:59:11.005470] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.776 [2024-07-24 22:59:11.005510] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.776 [2024-07-24 22:59:11.005552] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.776 [2024-07-24 22:59:11.005597] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.776 [2024-07-24 22:59:11.005647] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.776 [2024-07-24 22:59:11.005697] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.776 [2024-07-24 22:59:11.005754] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.776 [2024-07-24 
22:59:11.005799] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.776 [2024-07-24 22:59:11.005849] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.776 [2024-07-24 22:59:11.005900] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.776 [2024-07-24 22:59:11.005950] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.776 [2024-07-24 22:59:11.006002] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.776 [2024-07-24 22:59:11.006050] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.776 [2024-07-24 22:59:11.006102] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.776 [2024-07-24 22:59:11.006152] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.776 [2024-07-24 22:59:11.006199] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.776 [2024-07-24 22:59:11.006248] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.776 [2024-07-24 22:59:11.006298] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.776 [2024-07-24 22:59:11.006355] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.776 [2024-07-24 22:59:11.006409] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.776 [2024-07-24 22:59:11.006458] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.776 [2024-07-24 22:59:11.006511] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.776 [2024-07-24 22:59:11.006563] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.776 [2024-07-24 22:59:11.006612] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.776 [2024-07-24 22:59:11.006665] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.776 [2024-07-24 22:59:11.006719] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.776 [2024-07-24 22:59:11.006772] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.776 [2024-07-24 22:59:11.006822] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.776 [2024-07-24 22:59:11.006869] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.776 [2024-07-24 22:59:11.006916] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.776 [2024-07-24 22:59:11.006963] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.776 [2024-07-24 22:59:11.007011] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.776 [2024-07-24 22:59:11.007051] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.776 [2024-07-24 22:59:11.007096] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.776 [2024-07-24 22:59:11.007145] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.776 [2024-07-24 22:59:11.007453] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.776 
[2024-07-24 22:59:11.007494] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.776 [2024-07-24 22:59:11.007530] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.776 [2024-07-24 22:59:11.007568] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.776 [2024-07-24 22:59:11.007610] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.776 [2024-07-24 22:59:11.007659] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.776 [2024-07-24 22:59:11.007713] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.776 [2024-07-24 22:59:11.007768] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.776 [2024-07-24 22:59:11.007818] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.776 [2024-07-24 22:59:11.007866] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.776 [2024-07-24 22:59:11.007915] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.776 [2024-07-24 22:59:11.007964] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.776 [2024-07-24 22:59:11.008020] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.776 [2024-07-24 22:59:11.008064] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.776 [2024-07-24 22:59:11.008112] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.776 [2024-07-24 22:59:11.008160] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.776 [2024-07-24 22:59:11.008211] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.776 [2024-07-24 22:59:11.008261] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.776 [2024-07-24 22:59:11.008312] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.776 [2024-07-24 22:59:11.008367] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.776 [2024-07-24 22:59:11.008417] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.776 [2024-07-24 22:59:11.008465] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.776 [2024-07-24 22:59:11.008517] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.776 [2024-07-24 22:59:11.008572] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.776 [2024-07-24 22:59:11.008619] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.776 [2024-07-24 22:59:11.008668] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.776 [2024-07-24 22:59:11.008724] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.776 [2024-07-24 22:59:11.008773] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.776 [2024-07-24 22:59:11.008824] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.776 [2024-07-24 22:59:11.008872] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:38.776 [2024-07-24 22:59:11.008924] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.777 [2024-07-24 22:59:11.008976] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.777 [2024-07-24 22:59:11.009027] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.777 [2024-07-24 22:59:11.009093] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.777 [2024-07-24 22:59:11.009135] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.777 [2024-07-24 22:59:11.009184] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.777 [2024-07-24 22:59:11.009232] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.777 [2024-07-24 22:59:11.009281] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.777 [2024-07-24 22:59:11.009335] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.777 [2024-07-24 22:59:11.009382] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.777 [2024-07-24 22:59:11.009431] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.777 [2024-07-24 22:59:11.009481] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.777 [2024-07-24 22:59:11.009533] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.777 [2024-07-24 22:59:11.009583] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.777 [2024-07-24 22:59:11.009636] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.777 [2024-07-24 22:59:11.009684] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.777 [2024-07-24 22:59:11.009734] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.777 [2024-07-24 22:59:11.009781] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.777 [2024-07-24 22:59:11.009824] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.777 [2024-07-24 22:59:11.009865] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.777 [2024-07-24 22:59:11.009912] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.777 [2024-07-24 22:59:11.009962] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.777 [2024-07-24 22:59:11.010010] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.777 [2024-07-24 22:59:11.010051] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.777 [2024-07-24 22:59:11.010093] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.777 [2024-07-24 22:59:11.010127] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.777 [2024-07-24 22:59:11.010172] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.777 [2024-07-24 22:59:11.010215] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.777 [2024-07-24 22:59:11.010259] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:38.777 [2024-07-24 22:59:11.010301] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.777 [2024-07-24 22:59:11.010345] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.777 [2024-07-24 22:59:11.010388] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.777 [2024-07-24 22:59:11.010431] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.777 [2024-07-24 22:59:11.010793] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.777 [2024-07-24 22:59:11.010832] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.777 [2024-07-24 22:59:11.010865] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.777 [2024-07-24 22:59:11.010898] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.777 [2024-07-24 22:59:11.010928] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.777 [2024-07-24 22:59:11.010961] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.777 [2024-07-24 22:59:11.010992] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.777 [2024-07-24 22:59:11.011023] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.777 [2024-07-24 22:59:11.011054] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.777 [2024-07-24 22:59:11.011085] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.777 [2024-07-24 
22:59:11.011116] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.777 [2024-07-24 22:59:11.011145] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.777 [2024-07-24 22:59:11.011177] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.777 [2024-07-24 22:59:11.011210] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.777 [2024-07-24 22:59:11.011240] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.777 [2024-07-24 22:59:11.011269] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.777 [2024-07-24 22:59:11.011300] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.777 [2024-07-24 22:59:11.011330] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.777 [2024-07-24 22:59:11.011361] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.777 [2024-07-24 22:59:11.011392] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.777 [2024-07-24 22:59:11.011422] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.777 [2024-07-24 22:59:11.011453] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.777 [2024-07-24 22:59:11.011482] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.777 [2024-07-24 22:59:11.011513] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.777 [2024-07-24 22:59:11.011544] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.777 [2024-07-24 22:59:11.011576] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.777 [2024-07-24 22:59:11.011607] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.777 [2024-07-24 22:59:11.011650] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.777 [2024-07-24 22:59:11.011691] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.777 [2024-07-24 22:59:11.011741] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.777 [2024-07-24 22:59:11.011777] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.777 [2024-07-24 22:59:11.011808] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.777 [2024-07-24 22:59:11.011837] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.777 [2024-07-24 22:59:11.011867] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.777 [2024-07-24 22:59:11.011898] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.777 [2024-07-24 22:59:11.011928] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.777 [2024-07-24 22:59:11.011965] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.777 [2024-07-24 22:59:11.012007] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.777 [2024-07-24 22:59:11.012054] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.777 
[2024-07-24 22:59:11.012096] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.777 [2024-07-24 22:59:11.012134] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.777 [2024-07-24 22:59:11.012172] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.777 [2024-07-24 22:59:11.012216] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.777 [2024-07-24 22:59:11.012265] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.777 [2024-07-24 22:59:11.012314] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.777 [2024-07-24 22:59:11.012361] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.777 [2024-07-24 22:59:11.012413] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.777 [2024-07-24 22:59:11.012469] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.777 [2024-07-24 22:59:11.012518] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.777 [2024-07-24 22:59:11.012565] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.777 [2024-07-24 22:59:11.012613] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.777 [2024-07-24 22:59:11.012661] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.777 [2024-07-24 22:59:11.012723] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.777 [2024-07-24 22:59:11.012778] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.777 [2024-07-24 22:59:11.012828] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.777 [2024-07-24 22:59:11.012879] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.777 [2024-07-24 22:59:11.012924] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.777 [2024-07-24 22:59:11.012982] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.777 [2024-07-24 22:59:11.013028] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.777 [2024-07-24 22:59:11.013078] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.777 [2024-07-24 22:59:11.013126] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.777 [2024-07-24 22:59:11.013180] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.777 [2024-07-24 22:59:11.013235] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.777 [2024-07-24 22:59:11.013285] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.778 [2024-07-24 22:59:11.013617] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.778 [2024-07-24 22:59:11.013667] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.778 [2024-07-24 22:59:11.013719] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.778 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:13:38.778 [2024-07-24 
22:59:11.013764] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.778
[... the same "Read NLB 1 * block size 512 > SGL length 1" error from ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd repeats continuously, 2024-07-24 22:59:11.013809 through 22:59:11.015358 ...]
22:59:11 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3139355 00:13:38.778
[... same error repeats, 22:59:11.015404 through 22:59:11.015769 ...]
22:59:11 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:38.778
[... same error repeats, 22:59:11.015815 through 22:59:11.027882 ...]
[2024-07-24 22:59:11.027930] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:13:38.781 [2024-07-24 22:59:11.027977] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.781 [2024-07-24 22:59:11.028028] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.781 [2024-07-24 22:59:11.028078] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.781 [2024-07-24 22:59:11.028130] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.781 [2024-07-24 22:59:11.028185] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.781 [2024-07-24 22:59:11.028235] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.781 [2024-07-24 22:59:11.028284] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.781 [2024-07-24 22:59:11.028335] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.781 [2024-07-24 22:59:11.028385] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.781 [2024-07-24 22:59:11.028437] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.781 [2024-07-24 22:59:11.028490] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.781 [2024-07-24 22:59:11.028538] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.781 [2024-07-24 22:59:11.028585] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.781 [2024-07-24 22:59:11.028634] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.781 [2024-07-24 22:59:11.028688] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.781 [2024-07-24 22:59:11.028736] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.781 [2024-07-24 22:59:11.028785] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.781 [2024-07-24 22:59:11.028832] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.781 [2024-07-24 22:59:11.028877] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.781 [2024-07-24 22:59:11.028925] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.781 [2024-07-24 22:59:11.028973] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.781 [2024-07-24 22:59:11.029015] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.781 [2024-07-24 22:59:11.029056] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.781 [2024-07-24 22:59:11.029089] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.781 [2024-07-24 22:59:11.029132] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.781 [2024-07-24 22:59:11.029177] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.781 [2024-07-24 22:59:11.029219] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.781 [2024-07-24 22:59:11.029531] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.781 [2024-07-24 22:59:11.029584] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:38.781 [2024-07-24 22:59:11.029629] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.781 [2024-07-24 22:59:11.029671] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.781 [2024-07-24 22:59:11.029719] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.781 [2024-07-24 22:59:11.029770] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.781 [2024-07-24 22:59:11.029818] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.781 [2024-07-24 22:59:11.029861] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.781 [2024-07-24 22:59:11.029894] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.781 [2024-07-24 22:59:11.029939] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.781 [2024-07-24 22:59:11.029979] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.781 [2024-07-24 22:59:11.030024] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.781 [2024-07-24 22:59:11.030064] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.781 [2024-07-24 22:59:11.030097] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.781 [2024-07-24 22:59:11.030132] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.781 [2024-07-24 22:59:11.030166] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.781 [2024-07-24 
22:59:11.030201] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.781 [2024-07-24 22:59:11.030235] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.781 [2024-07-24 22:59:11.030267] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.781 [2024-07-24 22:59:11.030298] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.781 [2024-07-24 22:59:11.030329] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.781 [2024-07-24 22:59:11.030360] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.781 [2024-07-24 22:59:11.030392] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.781 [2024-07-24 22:59:11.030421] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.781 [2024-07-24 22:59:11.030453] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.781 [2024-07-24 22:59:11.030488] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.781 [2024-07-24 22:59:11.030519] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.781 [2024-07-24 22:59:11.030550] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.781 [2024-07-24 22:59:11.030582] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.781 [2024-07-24 22:59:11.030614] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.781 [2024-07-24 22:59:11.030647] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.781 [2024-07-24 22:59:11.030679] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.781 [2024-07-24 22:59:11.030711] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.781 [2024-07-24 22:59:11.030747] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.781 [2024-07-24 22:59:11.030779] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.781 [2024-07-24 22:59:11.030811] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.781 [2024-07-24 22:59:11.030842] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.781 [2024-07-24 22:59:11.030872] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.781 [2024-07-24 22:59:11.030910] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.781 [2024-07-24 22:59:11.030957] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.782 [2024-07-24 22:59:11.030993] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.782 [2024-07-24 22:59:11.031025] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.782 [2024-07-24 22:59:11.031066] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.782 [2024-07-24 22:59:11.031106] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.782 [2024-07-24 22:59:11.031150] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.782 
[2024-07-24 22:59:11.031191] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.782 [2024-07-24 22:59:11.031234] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.782 [2024-07-24 22:59:11.031286] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.782 [2024-07-24 22:59:11.031336] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.782 [2024-07-24 22:59:11.031386] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.782 [2024-07-24 22:59:11.031438] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.782 [2024-07-24 22:59:11.031486] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.782 [2024-07-24 22:59:11.031536] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.782 [2024-07-24 22:59:11.031586] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.782 [2024-07-24 22:59:11.031639] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.782 [2024-07-24 22:59:11.031682] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.782 [2024-07-24 22:59:11.031738] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.782 [2024-07-24 22:59:11.031787] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.782 [2024-07-24 22:59:11.031830] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.782 [2024-07-24 22:59:11.031869] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.782 [2024-07-24 22:59:11.031907] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.782 [2024-07-24 22:59:11.031947] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.782 [2024-07-24 22:59:11.031986] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.782 [2024-07-24 22:59:11.032031] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.782 [2024-07-24 22:59:11.032375] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.782 [2024-07-24 22:59:11.032427] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.782 [2024-07-24 22:59:11.032480] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.782 [2024-07-24 22:59:11.032529] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.782 [2024-07-24 22:59:11.032583] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.782 [2024-07-24 22:59:11.032630] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.782 [2024-07-24 22:59:11.032680] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.782 [2024-07-24 22:59:11.032736] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.782 [2024-07-24 22:59:11.032790] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.782 [2024-07-24 22:59:11.032842] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:38.782 [2024-07-24 22:59:11.032889] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.782 [2024-07-24 22:59:11.032937] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.782 [2024-07-24 22:59:11.032993] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.782 [2024-07-24 22:59:11.033050] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.782 [2024-07-24 22:59:11.033097] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.782 [2024-07-24 22:59:11.033151] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.782 [2024-07-24 22:59:11.033204] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.782 [2024-07-24 22:59:11.033251] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.782 [2024-07-24 22:59:11.033302] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.782 [2024-07-24 22:59:11.033349] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.782 [2024-07-24 22:59:11.033396] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.782 [2024-07-24 22:59:11.033445] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.782 [2024-07-24 22:59:11.033499] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.782 [2024-07-24 22:59:11.033547] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.782 [2024-07-24 22:59:11.033599] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.782 [2024-07-24 22:59:11.033646] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.782 [2024-07-24 22:59:11.033692] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.782 [2024-07-24 22:59:11.033746] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.782 [2024-07-24 22:59:11.033804] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.782 [2024-07-24 22:59:11.033858] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.782 [2024-07-24 22:59:11.033905] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.782 [2024-07-24 22:59:11.033953] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.782 [2024-07-24 22:59:11.034001] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.782 [2024-07-24 22:59:11.034051] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.782 [2024-07-24 22:59:11.034110] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.782 [2024-07-24 22:59:11.034159] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.782 [2024-07-24 22:59:11.034207] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.782 [2024-07-24 22:59:11.034258] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.782 [2024-07-24 22:59:11.034310] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:38.782 [2024-07-24 22:59:11.034365] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.782 [2024-07-24 22:59:11.034416] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.782 [2024-07-24 22:59:11.034464] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.782 [2024-07-24 22:59:11.034511] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.782 [2024-07-24 22:59:11.034566] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.782 [2024-07-24 22:59:11.034611] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.782 [2024-07-24 22:59:11.034661] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.782 [2024-07-24 22:59:11.034719] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.782 [2024-07-24 22:59:11.034767] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.782 [2024-07-24 22:59:11.034816] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.782 [2024-07-24 22:59:11.034872] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.782 [2024-07-24 22:59:11.034921] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.782 [2024-07-24 22:59:11.034972] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.782 [2024-07-24 22:59:11.035023] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.782 [2024-07-24 
22:59:11.035071] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.782 [2024-07-24 22:59:11.035119] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.782 [2024-07-24 22:59:11.035166] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.782 [2024-07-24 22:59:11.035214] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.782 [2024-07-24 22:59:11.035258] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.782 [2024-07-24 22:59:11.035300] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.782 [2024-07-24 22:59:11.035347] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.782 [2024-07-24 22:59:11.035402] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.782 [2024-07-24 22:59:11.035453] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.782 [2024-07-24 22:59:11.035492] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.782 [2024-07-24 22:59:11.035828] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.782 [2024-07-24 22:59:11.035871] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.782 [2024-07-24 22:59:11.035909] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.782 [2024-07-24 22:59:11.035952] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.782 [2024-07-24 22:59:11.036000] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.782 [2024-07-24 22:59:11.036043] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.782 [2024-07-24 22:59:11.036086] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.782 [2024-07-24 22:59:11.036128] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.782 [2024-07-24 22:59:11.036169] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.782 [2024-07-24 22:59:11.036211] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.783 [2024-07-24 22:59:11.036254] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.783 [2024-07-24 22:59:11.036286] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.783 [2024-07-24 22:59:11.036327] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.783 [2024-07-24 22:59:11.036371] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.783 [2024-07-24 22:59:11.036411] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.783 [2024-07-24 22:59:11.036461] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.783 [2024-07-24 22:59:11.036495] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.783 [2024-07-24 22:59:11.036532] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.783 [2024-07-24 22:59:11.036568] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.783 
[2024-07-24 22:59:11.036603] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.783 [2024-07-24 22:59:11.036635] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.783 [2024-07-24 22:59:11.036682] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.783 [2024-07-24 22:59:11.036720] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.783 [2024-07-24 22:59:11.036751] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.783 [2024-07-24 22:59:11.036783] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.783 [2024-07-24 22:59:11.036814] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.783 [2024-07-24 22:59:11.036846] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.783 [2024-07-24 22:59:11.036877] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.783 [2024-07-24 22:59:11.036909] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.783 [2024-07-24 22:59:11.036940] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.783 [2024-07-24 22:59:11.036971] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.783 [2024-07-24 22:59:11.037002] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.783 [2024-07-24 22:59:11.037033] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.783 [2024-07-24 22:59:11.037063] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.783 [2024-07-24 22:59:11.037094] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.783 [2024-07-24 22:59:11.037125] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.783 [2024-07-24 22:59:11.037155] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.783 [2024-07-24 22:59:11.037185] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.783 [2024-07-24 22:59:11.037215] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.783 [2024-07-24 22:59:11.037245] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.783 [2024-07-24 22:59:11.037290] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.783 [2024-07-24 22:59:11.037327] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.783 [2024-07-24 22:59:11.037359] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.783 [2024-07-24 22:59:11.037391] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.783 [2024-07-24 22:59:11.037423] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.783 [2024-07-24 22:59:11.037467] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.783 [2024-07-24 22:59:11.037509] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.783 [2024-07-24 22:59:11.037554] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:38.783 [2024-07-24 22:59:11.037598] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
(previous message repeated verbatim for timestamps 22:59:11.037641 through 22:59:11.052975, log times 00:13:38.783-00:13:38.786) > SGL length 1 00:13:38.786 [2024-07-24 22:59:11.052975] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:13:38.786 [2024-07-24 22:59:11.053024] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.786 [2024-07-24 22:59:11.053077] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.786 [2024-07-24 22:59:11.053127] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.786 [2024-07-24 22:59:11.053178] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.786 [2024-07-24 22:59:11.053228] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.786 [2024-07-24 22:59:11.053277] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.786 [2024-07-24 22:59:11.053325] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.786 [2024-07-24 22:59:11.053375] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.786 [2024-07-24 22:59:11.053429] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.786 [2024-07-24 22:59:11.053480] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.787 [2024-07-24 22:59:11.053526] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.787 [2024-07-24 22:59:11.053574] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.787 [2024-07-24 22:59:11.053625] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.787 [2024-07-24 22:59:11.053672] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.787 [2024-07-24 22:59:11.053728] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.787 [2024-07-24 22:59:11.053777] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.787 [2024-07-24 22:59:11.053828] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.787 [2024-07-24 22:59:11.053877] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.787 [2024-07-24 22:59:11.053925] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.787 [2024-07-24 22:59:11.053974] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.787 [2024-07-24 22:59:11.054026] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.787 [2024-07-24 22:59:11.054082] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.787 [2024-07-24 22:59:11.054135] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.787 [2024-07-24 22:59:11.054186] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.787 [2024-07-24 22:59:11.054242] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.787 [2024-07-24 22:59:11.054290] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.787 [2024-07-24 22:59:11.054342] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.787 [2024-07-24 22:59:11.054393] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.787 [2024-07-24 22:59:11.054447] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:38.787 [2024-07-24 22:59:11.054496] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.787 [2024-07-24 22:59:11.054820] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.787 [2024-07-24 22:59:11.054864] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.787 [2024-07-24 22:59:11.054912] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.787 [2024-07-24 22:59:11.054956] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.787 [2024-07-24 22:59:11.054997] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.787 [2024-07-24 22:59:11.055047] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.787 [2024-07-24 22:59:11.055096] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.787 [2024-07-24 22:59:11.055135] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.787 [2024-07-24 22:59:11.055172] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.787 [2024-07-24 22:59:11.055216] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.787 [2024-07-24 22:59:11.055262] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.787 [2024-07-24 22:59:11.055302] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.787 [2024-07-24 22:59:11.055337] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.787 [2024-07-24 
22:59:11.055370] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.787 [2024-07-24 22:59:11.055412] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.787 [2024-07-24 22:59:11.055453] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.787 [2024-07-24 22:59:11.055488] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.787 [2024-07-24 22:59:11.055519] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.787 [2024-07-24 22:59:11.055550] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.787 [2024-07-24 22:59:11.055581] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.787 [2024-07-24 22:59:11.055613] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.787 [2024-07-24 22:59:11.055645] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.787 [2024-07-24 22:59:11.055679] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.787 [2024-07-24 22:59:11.055735] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.787 [2024-07-24 22:59:11.055784] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.787 [2024-07-24 22:59:11.055834] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.787 [2024-07-24 22:59:11.055886] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.787 [2024-07-24 22:59:11.055937] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.787 [2024-07-24 22:59:11.055989] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.787 [2024-07-24 22:59:11.056043] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.787 [2024-07-24 22:59:11.056097] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.787 [2024-07-24 22:59:11.056145] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.787 [2024-07-24 22:59:11.056199] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.787 [2024-07-24 22:59:11.056251] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.787 [2024-07-24 22:59:11.056302] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.787 [2024-07-24 22:59:11.056349] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.787 [2024-07-24 22:59:11.056399] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.787 [2024-07-24 22:59:11.056449] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.787 [2024-07-24 22:59:11.056500] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.787 [2024-07-24 22:59:11.056544] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.787 [2024-07-24 22:59:11.056579] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.787 [2024-07-24 22:59:11.056611] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.787 
[2024-07-24 22:59:11.056660] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.787 [2024-07-24 22:59:11.056704] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.787 [2024-07-24 22:59:11.056744] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.787 [2024-07-24 22:59:11.056780] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.787 [2024-07-24 22:59:11.056824] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.787 [2024-07-24 22:59:11.056867] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.787 [2024-07-24 22:59:11.056911] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.787 [2024-07-24 22:59:11.056950] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.787 [2024-07-24 22:59:11.056994] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.787 [2024-07-24 22:59:11.057044] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.787 [2024-07-24 22:59:11.057092] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.787 [2024-07-24 22:59:11.057143] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.787 [2024-07-24 22:59:11.057194] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.787 [2024-07-24 22:59:11.057255] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.787 [2024-07-24 22:59:11.057302] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.787 [2024-07-24 22:59:11.057350] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.787 [2024-07-24 22:59:11.057394] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.787 [2024-07-24 22:59:11.057441] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.787 [2024-07-24 22:59:11.057489] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.787 [2024-07-24 22:59:11.057541] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.787 [2024-07-24 22:59:11.057589] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.787 [2024-07-24 22:59:11.057645] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.787 [2024-07-24 22:59:11.057977] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.787 [2024-07-24 22:59:11.058030] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.787 [2024-07-24 22:59:11.058081] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.787 [2024-07-24 22:59:11.058128] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.787 [2024-07-24 22:59:11.058183] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.787 [2024-07-24 22:59:11.058234] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.787 [2024-07-24 22:59:11.058281] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:38.787 [2024-07-24 22:59:11.058334] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.787 [2024-07-24 22:59:11.058383] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.788 [2024-07-24 22:59:11.058433] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.788 [2024-07-24 22:59:11.058483] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.788 [2024-07-24 22:59:11.058531] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.788 [2024-07-24 22:59:11.058584] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.788 [2024-07-24 22:59:11.058639] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.788 [2024-07-24 22:59:11.058683] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.788 [2024-07-24 22:59:11.058738] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.788 [2024-07-24 22:59:11.058790] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.788 [2024-07-24 22:59:11.058831] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.788 [2024-07-24 22:59:11.058880] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.788 [2024-07-24 22:59:11.058913] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.788 [2024-07-24 22:59:11.058958] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.788 [2024-07-24 22:59:11.058999] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.788 [2024-07-24 22:59:11.059039] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.788 [2024-07-24 22:59:11.059078] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.788 [2024-07-24 22:59:11.059120] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.788 [2024-07-24 22:59:11.059168] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.788 [2024-07-24 22:59:11.059216] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.788 [2024-07-24 22:59:11.059260] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.788 [2024-07-24 22:59:11.059293] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.788 [2024-07-24 22:59:11.059327] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.788 [2024-07-24 22:59:11.059370] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.788 [2024-07-24 22:59:11.059411] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.788 [2024-07-24 22:59:11.059455] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.788 [2024-07-24 22:59:11.059487] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.788 [2024-07-24 22:59:11.059525] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.788 [2024-07-24 22:59:11.059560] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:38.788 [2024-07-24 22:59:11.059610] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.788 [2024-07-24 22:59:11.059660] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.788 [2024-07-24 22:59:11.059710] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.788 [2024-07-24 22:59:11.059764] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.788 [2024-07-24 22:59:11.059811] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.788 [2024-07-24 22:59:11.059859] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.788 [2024-07-24 22:59:11.059906] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.788 [2024-07-24 22:59:11.059956] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.788 [2024-07-24 22:59:11.060005] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.788 [2024-07-24 22:59:11.060056] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.788 [2024-07-24 22:59:11.060104] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.788 [2024-07-24 22:59:11.060151] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.788 [2024-07-24 22:59:11.060205] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.788 [2024-07-24 22:59:11.060255] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.788 [2024-07-24 
22:59:11.060306] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.788 [2024-07-24 22:59:11.060356] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.788 [2024-07-24 22:59:11.060406] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.788 [2024-07-24 22:59:11.060458] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.788 [2024-07-24 22:59:11.060504] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.788 [2024-07-24 22:59:11.060559] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.788 [2024-07-24 22:59:11.060614] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.788 [2024-07-24 22:59:11.060668] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.788 [2024-07-24 22:59:11.060719] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.788 [2024-07-24 22:59:11.060762] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.788 [2024-07-24 22:59:11.060800] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.788 [2024-07-24 22:59:11.060833] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.788 [2024-07-24 22:59:11.060882] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.788 [2024-07-24 22:59:11.061212] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.788 [2024-07-24 22:59:11.061255] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.788 [2024-07-24 22:59:11.061303] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.788 [2024-07-24 22:59:11.061337] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.788 [2024-07-24 22:59:11.061375] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.788 [2024-07-24 22:59:11.061414] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.788 [2024-07-24 22:59:11.061457] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.788 [2024-07-24 22:59:11.061503] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.788 [2024-07-24 22:59:11.061550] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.788 [2024-07-24 22:59:11.061604] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.788 [2024-07-24 22:59:11.061656] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.788 [2024-07-24 22:59:11.061703] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.788 [2024-07-24 22:59:11.061760] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.788 [2024-07-24 22:59:11.061811] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.788 [2024-07-24 22:59:11.061869] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.788 [2024-07-24 22:59:11.061916] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.788 
[2024-07-24 22:59:11.061969] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.788 [2024-07-24 22:59:11.062017] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.789 [2024-07-24 22:59:11.062068] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.789 [2024-07-24 22:59:11.062118] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.789 [2024-07-24 22:59:11.062169] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.789 [2024-07-24 22:59:11.062220] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.789 [2024-07-24 22:59:11.062274] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.789 [2024-07-24 22:59:11.062324] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.789 [2024-07-24 22:59:11.062377] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.789 [2024-07-24 22:59:11.062427] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.789 [2024-07-24 22:59:11.062477] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.789 [2024-07-24 22:59:11.062521] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.789 [2024-07-24 22:59:11.062564] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.789 [2024-07-24 22:59:11.062609] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.789 [2024-07-24 22:59:11.062643] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.789 [2024-07-24 22:59:11.062679] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.789 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:13:38.792 [2024-07-24 22:59:11.078008] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:38.792 [2024-07-24 22:59:11.078060] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.792 [2024-07-24 22:59:11.078117] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.792 [2024-07-24 22:59:11.078169] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.792 [2024-07-24 22:59:11.078215] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.792 [2024-07-24 22:59:11.078263] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.792 [2024-07-24 22:59:11.078315] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.792 [2024-07-24 22:59:11.078369] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.792 [2024-07-24 22:59:11.078423] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.792 [2024-07-24 22:59:11.078468] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.792 [2024-07-24 22:59:11.078515] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.792 [2024-07-24 22:59:11.078566] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.792 [2024-07-24 22:59:11.078616] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.792 [2024-07-24 22:59:11.078666] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.792 [2024-07-24 22:59:11.078712] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.792 [2024-07-24 22:59:11.078762] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.792 [2024-07-24 22:59:11.078813] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.792 [2024-07-24 22:59:11.078867] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.792 [2024-07-24 22:59:11.078918] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.792 [2024-07-24 22:59:11.078970] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.792 [2024-07-24 22:59:11.079020] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.792 [2024-07-24 22:59:11.079069] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.792 [2024-07-24 22:59:11.079120] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.792 [2024-07-24 22:59:11.079171] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.792 [2024-07-24 22:59:11.079219] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.792 [2024-07-24 22:59:11.079266] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.792 [2024-07-24 22:59:11.079310] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.792 [2024-07-24 22:59:11.079357] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.792 [2024-07-24 22:59:11.079406] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.792 [2024-07-24 22:59:11.079448] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:38.792 [2024-07-24 22:59:11.079491] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.792 [2024-07-24 22:59:11.079535] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.792 [2024-07-24 22:59:11.079579] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.792 [2024-07-24 22:59:11.079612] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.792 [2024-07-24 22:59:11.079651] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.792 [2024-07-24 22:59:11.079692] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.792 [2024-07-24 22:59:11.079739] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.792 [2024-07-24 22:59:11.079785] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.792 [2024-07-24 22:59:11.079828] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.792 [2024-07-24 22:59:11.079868] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.792 [2024-07-24 22:59:11.079916] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.792 [2024-07-24 22:59:11.079957] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.792 [2024-07-24 22:59:11.079991] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.792 [2024-07-24 22:59:11.080029] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.792 [2024-07-24 
22:59:11.080076] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.792 [2024-07-24 22:59:11.080129] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.792 [2024-07-24 22:59:11.080170] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.792 [2024-07-24 22:59:11.080212] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.792 [2024-07-24 22:59:11.080255] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.792 [2024-07-24 22:59:11.080557] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.792 [2024-07-24 22:59:11.080591] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.792 [2024-07-24 22:59:11.080621] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.792 [2024-07-24 22:59:11.080652] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.792 [2024-07-24 22:59:11.080682] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.792 [2024-07-24 22:59:11.080712] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.793 [2024-07-24 22:59:11.080747] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.793 [2024-07-24 22:59:11.080778] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.793 [2024-07-24 22:59:11.080810] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.793 [2024-07-24 22:59:11.080840] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.793 [2024-07-24 22:59:11.080871] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.793 [2024-07-24 22:59:11.080903] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.793 [2024-07-24 22:59:11.080934] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.793 [2024-07-24 22:59:11.080964] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.793 [2024-07-24 22:59:11.080997] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.793 [2024-07-24 22:59:11.081029] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.793 [2024-07-24 22:59:11.081059] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.793 [2024-07-24 22:59:11.081089] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.793 [2024-07-24 22:59:11.081120] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.793 [2024-07-24 22:59:11.081150] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.793 [2024-07-24 22:59:11.081180] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.793 [2024-07-24 22:59:11.081210] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.793 [2024-07-24 22:59:11.081245] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.793 [2024-07-24 22:59:11.081286] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.793 
[2024-07-24 22:59:11.081331] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.793 [2024-07-24 22:59:11.081374] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.793 [2024-07-24 22:59:11.081413] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.793 [2024-07-24 22:59:11.081445] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.793 [2024-07-24 22:59:11.081476] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.793 [2024-07-24 22:59:11.081506] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.793 [2024-07-24 22:59:11.081538] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.793 [2024-07-24 22:59:11.081569] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.793 [2024-07-24 22:59:11.081614] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.793 [2024-07-24 22:59:11.081658] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.793 [2024-07-24 22:59:11.081702] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.793 [2024-07-24 22:59:11.081749] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.793 [2024-07-24 22:59:11.081788] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.793 [2024-07-24 22:59:11.081819] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.793 [2024-07-24 22:59:11.081857] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.793 [2024-07-24 22:59:11.081906] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.793 [2024-07-24 22:59:11.081962] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.793 [2024-07-24 22:59:11.082016] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.793 [2024-07-24 22:59:11.082068] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.793 [2024-07-24 22:59:11.082127] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.793 [2024-07-24 22:59:11.082176] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.793 [2024-07-24 22:59:11.082224] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.793 [2024-07-24 22:59:11.082277] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.793 [2024-07-24 22:59:11.082329] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.793 [2024-07-24 22:59:11.082382] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.793 [2024-07-24 22:59:11.082442] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.793 [2024-07-24 22:59:11.082494] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.793 [2024-07-24 22:59:11.082548] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.793 [2024-07-24 22:59:11.082595] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:38.793 [2024-07-24 22:59:11.082648] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.793 [2024-07-24 22:59:11.082699] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.793 [2024-07-24 22:59:11.082757] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.793 [2024-07-24 22:59:11.082804] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.793 [2024-07-24 22:59:11.082860] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.793 [2024-07-24 22:59:11.082912] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.793 [2024-07-24 22:59:11.082964] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.793 [2024-07-24 22:59:11.083014] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.793 [2024-07-24 22:59:11.083069] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.793 [2024-07-24 22:59:11.083120] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.793 [2024-07-24 22:59:11.083175] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.793 [2024-07-24 22:59:11.083506] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.793 [2024-07-24 22:59:11.083558] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.793 [2024-07-24 22:59:11.083609] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.793 [2024-07-24 22:59:11.083656] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.793 [2024-07-24 22:59:11.083708] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.793 [2024-07-24 22:59:11.083762] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.793 [2024-07-24 22:59:11.083807] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.793 [2024-07-24 22:59:11.083852] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.793 [2024-07-24 22:59:11.083898] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.793 [2024-07-24 22:59:11.083942] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.793 [2024-07-24 22:59:11.083985] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.793 [2024-07-24 22:59:11.084018] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.793 [2024-07-24 22:59:11.084057] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.793 [2024-07-24 22:59:11.084105] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.793 [2024-07-24 22:59:11.084154] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.793 [2024-07-24 22:59:11.084201] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.794 [2024-07-24 22:59:11.084246] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.794 [2024-07-24 22:59:11.084287] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:38.794 [2024-07-24 22:59:11.084327] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.794 [2024-07-24 22:59:11.084370] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.794 [2024-07-24 22:59:11.084410] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.794 [2024-07-24 22:59:11.084443] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.794 [2024-07-24 22:59:11.084488] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.794 [2024-07-24 22:59:11.084538] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.794 [2024-07-24 22:59:11.084586] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.794 [2024-07-24 22:59:11.084636] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.794 [2024-07-24 22:59:11.084684] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.794 [2024-07-24 22:59:11.084740] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.794 [2024-07-24 22:59:11.084794] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.794 [2024-07-24 22:59:11.084846] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.794 [2024-07-24 22:59:11.084895] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.794 [2024-07-24 22:59:11.084947] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.794 [2024-07-24 
22:59:11.084997] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.794 [2024-07-24 22:59:11.085050] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.794 [2024-07-24 22:59:11.085100] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.794 [2024-07-24 22:59:11.085153] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.794 [2024-07-24 22:59:11.085208] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.794 [2024-07-24 22:59:11.085259] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.794 [2024-07-24 22:59:11.085307] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.794 [2024-07-24 22:59:11.085355] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.794 [2024-07-24 22:59:11.085407] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.794 [2024-07-24 22:59:11.085457] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.794 [2024-07-24 22:59:11.085510] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.794 [2024-07-24 22:59:11.085562] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.794 [2024-07-24 22:59:11.085616] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.794 [2024-07-24 22:59:11.085673] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.794 [2024-07-24 22:59:11.085722] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.794 [2024-07-24 22:59:11.085767] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.794 [2024-07-24 22:59:11.085827] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.794 [2024-07-24 22:59:11.085869] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.794 [2024-07-24 22:59:11.085914] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.794 [2024-07-24 22:59:11.085968] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.794 [2024-07-24 22:59:11.086013] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.794 [2024-07-24 22:59:11.086063] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.794 [2024-07-24 22:59:11.086113] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.794 [2024-07-24 22:59:11.086169] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.794 [2024-07-24 22:59:11.086215] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.794 [2024-07-24 22:59:11.086258] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.794 [2024-07-24 22:59:11.086301] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.794 [2024-07-24 22:59:11.086342] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.794 [2024-07-24 22:59:11.086385] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.794 
[2024-07-24 22:59:11.086425] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.794 [2024-07-24 22:59:11.086473] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.794 [2024-07-24 22:59:11.086799] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.794 [2024-07-24 22:59:11.086844] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.794 [2024-07-24 22:59:11.086885] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.794 [2024-07-24 22:59:11.086930] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.794 [2024-07-24 22:59:11.086970] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.794 [2024-07-24 22:59:11.087001] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.794 [2024-07-24 22:59:11.087030] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.794 [2024-07-24 22:59:11.087062] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.794 [2024-07-24 22:59:11.087093] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.794 [2024-07-24 22:59:11.087124] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.794 [2024-07-24 22:59:11.087153] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.794 [2024-07-24 22:59:11.087182] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.794 [2024-07-24 22:59:11.087211] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.794 [2024-07-24 22:59:11.087240] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.797 [2024-07-24 22:59:11.101756] ctrlr_bdev.c:
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.797 [2024-07-24 22:59:11.101810] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.798 [2024-07-24 22:59:11.101854] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.798 [2024-07-24 22:59:11.102192] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.798 [2024-07-24 22:59:11.102246] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.798 [2024-07-24 22:59:11.102297] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.798 [2024-07-24 22:59:11.102345] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.798 [2024-07-24 22:59:11.102396] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.798 [2024-07-24 22:59:11.102450] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.798 [2024-07-24 22:59:11.102500] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.798 [2024-07-24 22:59:11.102556] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.798 [2024-07-24 22:59:11.102603] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.798 [2024-07-24 22:59:11.102650] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.798 [2024-07-24 22:59:11.102698] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.798 [2024-07-24 22:59:11.102750] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:38.798 [2024-07-24 22:59:11.102799] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.798 [2024-07-24 22:59:11.102851] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.798 [2024-07-24 22:59:11.102904] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.798 [2024-07-24 22:59:11.102950] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.798 [2024-07-24 22:59:11.102995] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.798 [2024-07-24 22:59:11.103037] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.798 [2024-07-24 22:59:11.103079] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.798 [2024-07-24 22:59:11.103126] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.798 [2024-07-24 22:59:11.103177] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.798 [2024-07-24 22:59:11.103210] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.798 [2024-07-24 22:59:11.103248] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.798 [2024-07-24 22:59:11.103289] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.798 [2024-07-24 22:59:11.103335] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.798 [2024-07-24 22:59:11.103378] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.798 [2024-07-24 22:59:11.103420] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.798 [2024-07-24 22:59:11.103464] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.798 [2024-07-24 22:59:11.103507] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.798 [2024-07-24 22:59:11.103547] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.798 [2024-07-24 22:59:11.103580] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.798 [2024-07-24 22:59:11.103614] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.798 [2024-07-24 22:59:11.103654] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.798 [2024-07-24 22:59:11.103695] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.798 [2024-07-24 22:59:11.103741] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.798 [2024-07-24 22:59:11.103778] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.798 [2024-07-24 22:59:11.103826] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.798 [2024-07-24 22:59:11.103859] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.798 [2024-07-24 22:59:11.103915] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.798 [2024-07-24 22:59:11.103967] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.798 [2024-07-24 22:59:11.104017] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:38.798 [2024-07-24 22:59:11.104069] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.798 [2024-07-24 22:59:11.104116] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.798 [2024-07-24 22:59:11.104171] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.798 [2024-07-24 22:59:11.104227] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.798 [2024-07-24 22:59:11.104279] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.798 [2024-07-24 22:59:11.104325] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.798 [2024-07-24 22:59:11.104376] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.798 [2024-07-24 22:59:11.104434] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.798 [2024-07-24 22:59:11.104488] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.798 [2024-07-24 22:59:11.104537] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.798 [2024-07-24 22:59:11.104586] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.798 [2024-07-24 22:59:11.104638] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.798 [2024-07-24 22:59:11.104690] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.798 [2024-07-24 22:59:11.104738] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.798 [2024-07-24 
22:59:11.104790] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.798 [2024-07-24 22:59:11.104845] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.798 [2024-07-24 22:59:11.104902] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.798 [2024-07-24 22:59:11.104951] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.798 [2024-07-24 22:59:11.104997] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.798 [2024-07-24 22:59:11.105054] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.798 [2024-07-24 22:59:11.105108] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.798 [2024-07-24 22:59:11.105160] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.798 [2024-07-24 22:59:11.105501] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.798 [2024-07-24 22:59:11.105561] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.798 [2024-07-24 22:59:11.105613] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.799 [2024-07-24 22:59:11.105662] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.799 [2024-07-24 22:59:11.105710] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.799 [2024-07-24 22:59:11.105763] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.799 [2024-07-24 22:59:11.105815] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.799 [2024-07-24 22:59:11.105864] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.799 [2024-07-24 22:59:11.105912] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.799 [2024-07-24 22:59:11.105952] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.799 [2024-07-24 22:59:11.106000] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.799 [2024-07-24 22:59:11.106048] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.799 [2024-07-24 22:59:11.106079] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.799 [2024-07-24 22:59:11.106115] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.799 [2024-07-24 22:59:11.106167] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.799 [2024-07-24 22:59:11.106214] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.799 [2024-07-24 22:59:11.106257] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.799 [2024-07-24 22:59:11.106310] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.799 [2024-07-24 22:59:11.106352] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.799 [2024-07-24 22:59:11.106394] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.799 [2024-07-24 22:59:11.106442] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.799 
[2024-07-24 22:59:11.106492] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.799 [2024-07-24 22:59:11.106525] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.799 [2024-07-24 22:59:11.106570] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.799 [2024-07-24 22:59:11.106615] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.799 [2024-07-24 22:59:11.106661] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.799 [2024-07-24 22:59:11.106706] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.799 [2024-07-24 22:59:11.106759] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.799 [2024-07-24 22:59:11.106795] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.799 [2024-07-24 22:59:11.106828] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.799 [2024-07-24 22:59:11.106877] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.799 [2024-07-24 22:59:11.106920] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.799 [2024-07-24 22:59:11.106952] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.799 [2024-07-24 22:59:11.107008] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.799 [2024-07-24 22:59:11.107054] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.799 [2024-07-24 22:59:11.107107] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.799 [2024-07-24 22:59:11.107152] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.799 [2024-07-24 22:59:11.107201] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.799 [2024-07-24 22:59:11.107255] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.799 [2024-07-24 22:59:11.107306] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.799 [2024-07-24 22:59:11.107353] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.799 [2024-07-24 22:59:11.107403] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.799 [2024-07-24 22:59:11.107454] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.799 [2024-07-24 22:59:11.107506] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.799 [2024-07-24 22:59:11.107556] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.799 [2024-07-24 22:59:11.107612] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.799 [2024-07-24 22:59:11.107663] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.799 [2024-07-24 22:59:11.107698] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.799 [2024-07-24 22:59:11.107734] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.799 [2024-07-24 22:59:11.107784] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:38.799 [2024-07-24 22:59:11.107834] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.799 [2024-07-24 22:59:11.107881] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.799 [2024-07-24 22:59:11.107933] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.799 [2024-07-24 22:59:11.107977] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.799 [2024-07-24 22:59:11.108009] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.799 [2024-07-24 22:59:11.108043] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.799 [2024-07-24 22:59:11.108078] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.799 [2024-07-24 22:59:11.108122] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.799 [2024-07-24 22:59:11.108159] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.799 [2024-07-24 22:59:11.108192] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.799 [2024-07-24 22:59:11.108237] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.799 [2024-07-24 22:59:11.108280] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.799 [2024-07-24 22:59:11.108322] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.799 [2024-07-24 22:59:11.108359] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.799 [2024-07-24 22:59:11.108693] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.799 [2024-07-24 22:59:11.108749] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.799 [2024-07-24 22:59:11.108798] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.799 [2024-07-24 22:59:11.108844] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.799 [2024-07-24 22:59:11.108894] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.799 [2024-07-24 22:59:11.108945] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.799 [2024-07-24 22:59:11.108996] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.799 [2024-07-24 22:59:11.109044] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.799 [2024-07-24 22:59:11.109093] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.799 [2024-07-24 22:59:11.109146] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.799 [2024-07-24 22:59:11.109195] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.799 [2024-07-24 22:59:11.109242] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.799 [2024-07-24 22:59:11.109290] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.799 [2024-07-24 22:59:11.109338] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.799 [2024-07-24 22:59:11.109391] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:38.799 [2024-07-24 22:59:11.109439] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.799 [2024-07-24 22:59:11.109490] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.799 [2024-07-24 22:59:11.109539] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.799 [2024-07-24 22:59:11.109588] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.799 [2024-07-24 22:59:11.109638] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.799 [2024-07-24 22:59:11.109692] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.799 [2024-07-24 22:59:11.109741] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.799 [2024-07-24 22:59:11.109788] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.799 [2024-07-24 22:59:11.109832] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.799 [2024-07-24 22:59:11.109872] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.799 [2024-07-24 22:59:11.109918] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.799 [2024-07-24 22:59:11.109971] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.799 [2024-07-24 22:59:11.110020] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.799 [2024-07-24 22:59:11.110069] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.799 [2024-07-24 
22:59:11.110102] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.799 [2024-07-24 22:59:11.110138] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.799 [2024-07-24 22:59:11.110187] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.799 [2024-07-24 22:59:11.110232] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.799 [2024-07-24 22:59:11.110274] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.800 [2024-07-24 22:59:11.110313] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.800 [2024-07-24 22:59:11.110353] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.800 [2024-07-24 22:59:11.110387] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.800 [2024-07-24 22:59:11.110419] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.800 [2024-07-24 22:59:11.110462] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.800 [2024-07-24 22:59:11.110500] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.800 [2024-07-24 22:59:11.110544] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.800 [2024-07-24 22:59:11.110584] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.800 [2024-07-24 22:59:11.110618] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.800 [2024-07-24 22:59:11.110653] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.800 [2024-07-24 22:59:11.110713] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.800 [2024-07-24 22:59:11.110766] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.800 [2024-07-24 22:59:11.110815] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.800 [2024-07-24 22:59:11.110864] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.800 [2024-07-24 22:59:11.110918] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.800 [2024-07-24 22:59:11.110972] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.800 [2024-07-24 22:59:11.111020] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.800 [2024-07-24 22:59:11.111070] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.800 [2024-07-24 22:59:11.111119] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.800 [2024-07-24 22:59:11.111169] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.800 [2024-07-24 22:59:11.111220] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.800 [2024-07-24 22:59:11.111271] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.800 [2024-07-24 22:59:11.111327] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.800 [2024-07-24 22:59:11.111380] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.800 
[2024-07-24 22:59:11.111430] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.800 [2024-07-24 22:59:11.111478] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.800 [2024-07-24 22:59:11.111528] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.800 [2024-07-24 22:59:11.111582] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.800 [2024-07-24 22:59:11.111636] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.800 [2024-07-24 22:59:11.111995] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.800 [2024-07-24 22:59:11.112050] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.800 [2024-07-24 22:59:11.112101] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.800 [2024-07-24 22:59:11.112150] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.800 [2024-07-24 22:59:11.112199] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.800 [2024-07-24 22:59:11.112247] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.800 [2024-07-24 22:59:11.112289] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.800 [2024-07-24 22:59:11.112321] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.800 [2024-07-24 22:59:11.112352] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.800 [2024-07-24 22:59:11.112397] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
00:13:38.801 Message suppressed 999 times: [2024-07-24 22:59:11.115302] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
00:13:38.801 Read completed with error (sct=0, sc=15) 00:13:38.803 [2024-07-24 22:59:11.127136] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.803 [2024-07-24 22:59:11.127178] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.803 [2024-07-24 22:59:11.127224] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.803 [2024-07-24 22:59:11.127271] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.803 [2024-07-24 22:59:11.127314] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.803 [2024-07-24 22:59:11.127348] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.803 [2024-07-24 22:59:11.127381] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.803 [2024-07-24 22:59:11.127415] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.803 [2024-07-24 22:59:11.127448] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.803 [2024-07-24 22:59:11.127480] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.803 [2024-07-24 22:59:11.127524] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.803 [2024-07-24 22:59:11.127563] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.803 [2024-07-24 22:59:11.127603] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.803 [2024-07-24 22:59:11.127646] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.803 [2024-07-24 22:59:11.127692] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:38.803 [2024-07-24 22:59:11.127730] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.803 [2024-07-24 22:59:11.127777] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.803 [2024-07-24 22:59:11.127825] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.803 [2024-07-24 22:59:11.127871] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.803 [2024-07-24 22:59:11.127920] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.803 [2024-07-24 22:59:11.127971] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.803 [2024-07-24 22:59:11.128313] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.803 [2024-07-24 22:59:11.128365] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.803 [2024-07-24 22:59:11.128415] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.803 [2024-07-24 22:59:11.128471] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.803 [2024-07-24 22:59:11.128528] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.803 [2024-07-24 22:59:11.128582] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.803 [2024-07-24 22:59:11.128630] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.804 [2024-07-24 22:59:11.128688] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.804 [2024-07-24 22:59:11.128744] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.804 [2024-07-24 22:59:11.128792] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.804 [2024-07-24 22:59:11.128842] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.804 [2024-07-24 22:59:11.128889] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.804 [2024-07-24 22:59:11.128947] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.804 [2024-07-24 22:59:11.128997] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.804 [2024-07-24 22:59:11.129049] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.804 [2024-07-24 22:59:11.129092] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.804 [2024-07-24 22:59:11.129141] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.804 [2024-07-24 22:59:11.129196] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.804 [2024-07-24 22:59:11.129240] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.804 [2024-07-24 22:59:11.129274] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.804 [2024-07-24 22:59:11.129316] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.804 [2024-07-24 22:59:11.129362] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.804 [2024-07-24 22:59:11.129410] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:38.804 [2024-07-24 22:59:11.129455] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.804 [2024-07-24 22:59:11.129499] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.804 [2024-07-24 22:59:11.129553] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.804 [2024-07-24 22:59:11.129597] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.804 [2024-07-24 22:59:11.129630] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.804 [2024-07-24 22:59:11.129671] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.804 [2024-07-24 22:59:11.129719] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.804 [2024-07-24 22:59:11.129762] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.804 [2024-07-24 22:59:11.129803] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.804 [2024-07-24 22:59:11.129846] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.804 [2024-07-24 22:59:11.129879] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.804 [2024-07-24 22:59:11.129924] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.804 [2024-07-24 22:59:11.129973] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.804 [2024-07-24 22:59:11.130026] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.804 [2024-07-24 
22:59:11.130076] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.804 [2024-07-24 22:59:11.130126] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.804 [2024-07-24 22:59:11.130179] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.804 [2024-07-24 22:59:11.130238] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.804 [2024-07-24 22:59:11.130294] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.804 [2024-07-24 22:59:11.130343] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.804 [2024-07-24 22:59:11.130395] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.804 [2024-07-24 22:59:11.130448] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.804 [2024-07-24 22:59:11.130502] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.804 [2024-07-24 22:59:11.130558] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.804 [2024-07-24 22:59:11.130612] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.804 [2024-07-24 22:59:11.130668] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.804 [2024-07-24 22:59:11.130720] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.804 [2024-07-24 22:59:11.130772] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.804 [2024-07-24 22:59:11.130826] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.804 [2024-07-24 22:59:11.130880] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.804 [2024-07-24 22:59:11.130929] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.804 [2024-07-24 22:59:11.130982] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.804 [2024-07-24 22:59:11.131047] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.804 [2024-07-24 22:59:11.131095] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.804 [2024-07-24 22:59:11.131146] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.804 [2024-07-24 22:59:11.131200] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.804 [2024-07-24 22:59:11.131250] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.804 [2024-07-24 22:59:11.131301] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.804 [2024-07-24 22:59:11.131352] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.804 [2024-07-24 22:59:11.131405] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.804 [2024-07-24 22:59:11.131738] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.804 [2024-07-24 22:59:11.131793] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.804 [2024-07-24 22:59:11.131844] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.804 
[2024-07-24 22:59:11.131895] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.804 [2024-07-24 22:59:11.131947] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.804 [2024-07-24 22:59:11.131995] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.804 [2024-07-24 22:59:11.132041] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.804 [2024-07-24 22:59:11.132084] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.804 [2024-07-24 22:59:11.132126] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.804 [2024-07-24 22:59:11.132170] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.804 [2024-07-24 22:59:11.132218] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.804 [2024-07-24 22:59:11.132258] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.804 [2024-07-24 22:59:11.132298] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.804 [2024-07-24 22:59:11.132333] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.804 [2024-07-24 22:59:11.132378] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.804 [2024-07-24 22:59:11.132422] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.804 [2024-07-24 22:59:11.132468] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.804 [2024-07-24 22:59:11.132508] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.804 [2024-07-24 22:59:11.132554] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.804 [2024-07-24 22:59:11.132598] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.804 [2024-07-24 22:59:11.132650] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.804 [2024-07-24 22:59:11.132696] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.804 [2024-07-24 22:59:11.132734] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.804 [2024-07-24 22:59:11.132779] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.804 [2024-07-24 22:59:11.132823] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.804 [2024-07-24 22:59:11.132870] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.804 [2024-07-24 22:59:11.132914] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.804 [2024-07-24 22:59:11.132960] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.804 [2024-07-24 22:59:11.133001] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.804 [2024-07-24 22:59:11.133043] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.804 [2024-07-24 22:59:11.133076] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.804 [2024-07-24 22:59:11.133108] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:38.804 [2024-07-24 22:59:11.133151] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.804 [2024-07-24 22:59:11.133189] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.805 [2024-07-24 22:59:11.133224] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.805 [2024-07-24 22:59:11.133257] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.805 [2024-07-24 22:59:11.133303] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.805 [2024-07-24 22:59:11.133353] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.805 [2024-07-24 22:59:11.133400] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.805 [2024-07-24 22:59:11.133453] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.805 [2024-07-24 22:59:11.133504] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.805 [2024-07-24 22:59:11.133557] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.805 [2024-07-24 22:59:11.133608] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.805 [2024-07-24 22:59:11.133658] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.805 [2024-07-24 22:59:11.133711] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.805 [2024-07-24 22:59:11.133764] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.805 [2024-07-24 22:59:11.133816] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.805 [2024-07-24 22:59:11.133866] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.805 [2024-07-24 22:59:11.133914] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.805 [2024-07-24 22:59:11.133947] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.805 [2024-07-24 22:59:11.133986] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.805 [2024-07-24 22:59:11.134028] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.805 [2024-07-24 22:59:11.134076] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.805 [2024-07-24 22:59:11.134121] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.805 [2024-07-24 22:59:11.134164] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.805 [2024-07-24 22:59:11.134205] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.805 [2024-07-24 22:59:11.134239] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.805 [2024-07-24 22:59:11.134271] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.805 [2024-07-24 22:59:11.134304] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.805 [2024-07-24 22:59:11.134350] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.805 [2024-07-24 22:59:11.134388] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:38.805 [2024-07-24 22:59:11.134431] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.805 [2024-07-24 22:59:11.134471] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.805 [2024-07-24 22:59:11.134513] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.805 [2024-07-24 22:59:11.134852] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.805 [2024-07-24 22:59:11.134906] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.805 [2024-07-24 22:59:11.134956] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.805 [2024-07-24 22:59:11.135007] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.805 [2024-07-24 22:59:11.135057] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.805 [2024-07-24 22:59:11.135112] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.805 [2024-07-24 22:59:11.135165] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.805 [2024-07-24 22:59:11.135214] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.805 [2024-07-24 22:59:11.135264] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.805 [2024-07-24 22:59:11.135312] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.805 [2024-07-24 22:59:11.135367] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.805 [2024-07-24 
22:59:11.135412] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.805 [2024-07-24 22:59:11.135465] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.805 [2024-07-24 22:59:11.135515] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.805 [2024-07-24 22:59:11.135565] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.805 [2024-07-24 22:59:11.135618] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.805 [2024-07-24 22:59:11.135667] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.805 [2024-07-24 22:59:11.135722] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.805 [2024-07-24 22:59:11.135773] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.805 [2024-07-24 22:59:11.135829] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.805 [2024-07-24 22:59:11.135881] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.805 [2024-07-24 22:59:11.135930] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.805 [2024-07-24 22:59:11.135978] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.805 [2024-07-24 22:59:11.136029] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.805 [2024-07-24 22:59:11.136082] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.805 [2024-07-24 22:59:11.136137] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.805 [2024-07-24 22:59:11.136187] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.805 [2024-07-24 22:59:11.136239] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.805 [2024-07-24 22:59:11.136288] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.805 [2024-07-24 22:59:11.136332] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.805 [2024-07-24 22:59:11.136384] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.805 [2024-07-24 22:59:11.136426] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.805 [2024-07-24 22:59:11.136470] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.805 [2024-07-24 22:59:11.136512] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.805 [2024-07-24 22:59:11.136556] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.805 [2024-07-24 22:59:11.136609] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.805 [2024-07-24 22:59:11.136659] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.805 [2024-07-24 22:59:11.136694] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.805 [2024-07-24 22:59:11.136739] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.805 [2024-07-24 22:59:11.136789] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.805 
[2024-07-24 22:59:11.136835] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.805 [2024-07-24 22:59:11.136877] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.805 [2024-07-24 22:59:11.136918] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.805 [2024-07-24 22:59:11.136961] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.805 [2024-07-24 22:59:11.137007] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.805 [2024-07-24 22:59:11.137053] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.805 [2024-07-24 22:59:11.137085] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.805 [2024-07-24 22:59:11.137120] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.805 [2024-07-24 22:59:11.137163] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.805 [2024-07-24 22:59:11.137196] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.805 [2024-07-24 22:59:11.137237] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.805 [2024-07-24 22:59:11.137276] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.805 [2024-07-24 22:59:11.137309] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.805 [2024-07-24 22:59:11.137356] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.805 [2024-07-24 22:59:11.137405] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:38.805 [2024-07-24 22:59:11.137453] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.089 
[2024-07-24 22:59:11.152327] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.089 [2024-07-24 22:59:11.152379] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.089 [2024-07-24 22:59:11.152429] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.089 [2024-07-24 22:59:11.152477] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.089 [2024-07-24 22:59:11.152528] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.089 [2024-07-24 22:59:11.152576] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.089 [2024-07-24 22:59:11.152621] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.089 [2024-07-24 22:59:11.152670] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.089 [2024-07-24 22:59:11.152721] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.089 [2024-07-24 22:59:11.152775] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.090 [2024-07-24 22:59:11.152828] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.090 [2024-07-24 22:59:11.152883] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.090 [2024-07-24 22:59:11.152939] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.090 [2024-07-24 22:59:11.152991] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.090 [2024-07-24 22:59:11.153040] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.090 [2024-07-24 22:59:11.153090] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.090 [2024-07-24 22:59:11.153139] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.090 [2024-07-24 22:59:11.153187] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.090 [2024-07-24 22:59:11.153242] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.090 [2024-07-24 22:59:11.153299] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.090 [2024-07-24 22:59:11.153355] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.090 [2024-07-24 22:59:11.153413] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.090 [2024-07-24 22:59:11.153455] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.090 [2024-07-24 22:59:11.153492] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.090 [2024-07-24 22:59:11.153526] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.090 [2024-07-24 22:59:11.153574] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.090 [2024-07-24 22:59:11.153616] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.090 [2024-07-24 22:59:11.153662] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.090 [2024-07-24 22:59:11.153707] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:39.090 [2024-07-24 22:59:11.153756] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.090 [2024-07-24 22:59:11.153806] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.090 [2024-07-24 22:59:11.153850] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.090 [2024-07-24 22:59:11.153898] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.090 [2024-07-24 22:59:11.153933] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.090 [2024-07-24 22:59:11.153975] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.090 [2024-07-24 22:59:11.154019] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.090 [2024-07-24 22:59:11.154061] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.090 [2024-07-24 22:59:11.154112] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.090 [2024-07-24 22:59:11.154478] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.090 [2024-07-24 22:59:11.154515] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.090 [2024-07-24 22:59:11.154567] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.090 [2024-07-24 22:59:11.154618] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.090 [2024-07-24 22:59:11.154669] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.090 [2024-07-24 22:59:11.154722] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.090 [2024-07-24 22:59:11.154775] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.090 [2024-07-24 22:59:11.154825] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.090 [2024-07-24 22:59:11.154878] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.090 [2024-07-24 22:59:11.154933] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.090 [2024-07-24 22:59:11.154986] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.090 [2024-07-24 22:59:11.155033] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.090 [2024-07-24 22:59:11.155086] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.090 [2024-07-24 22:59:11.155138] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.090 [2024-07-24 22:59:11.155191] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.090 [2024-07-24 22:59:11.155241] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.090 [2024-07-24 22:59:11.155288] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.090 [2024-07-24 22:59:11.155341] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.090 [2024-07-24 22:59:11.155393] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.090 [2024-07-24 22:59:11.155450] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:39.090 [2024-07-24 22:59:11.155503] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.090 [2024-07-24 22:59:11.155555] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.090 [2024-07-24 22:59:11.155607] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.090 [2024-07-24 22:59:11.155658] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.090 [2024-07-24 22:59:11.155711] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.090 [2024-07-24 22:59:11.155757] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.090 [2024-07-24 22:59:11.155790] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.090 [2024-07-24 22:59:11.155843] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.090 [2024-07-24 22:59:11.155889] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.090 [2024-07-24 22:59:11.155933] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.090 [2024-07-24 22:59:11.155980] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.090 [2024-07-24 22:59:11.156023] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.090 [2024-07-24 22:59:11.156065] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.090 [2024-07-24 22:59:11.156111] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.090 [2024-07-24 
22:59:11.156157] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.090 [2024-07-24 22:59:11.156190] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.090 [2024-07-24 22:59:11.156226] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.090 [2024-07-24 22:59:11.156270] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.090 [2024-07-24 22:59:11.156315] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.090 [2024-07-24 22:59:11.156357] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.090 [2024-07-24 22:59:11.156392] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.090 [2024-07-24 22:59:11.156433] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.090 [2024-07-24 22:59:11.156479] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.090 [2024-07-24 22:59:11.156522] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.090 [2024-07-24 22:59:11.156565] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.090 [2024-07-24 22:59:11.156602] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.090 [2024-07-24 22:59:11.156651] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.090 [2024-07-24 22:59:11.156705] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.090 [2024-07-24 22:59:11.156762] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.090 [2024-07-24 22:59:11.156814] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.090 [2024-07-24 22:59:11.156867] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.090 [2024-07-24 22:59:11.156925] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.090 [2024-07-24 22:59:11.156977] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.090 [2024-07-24 22:59:11.157029] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.090 [2024-07-24 22:59:11.157081] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.090 [2024-07-24 22:59:11.157137] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.090 [2024-07-24 22:59:11.157189] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.090 [2024-07-24 22:59:11.157242] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.090 [2024-07-24 22:59:11.157290] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.090 [2024-07-24 22:59:11.157343] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.090 [2024-07-24 22:59:11.157394] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.090 [2024-07-24 22:59:11.157448] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.090 [2024-07-24 22:59:11.157505] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.090 
[2024-07-24 22:59:11.157853] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.090 [2024-07-24 22:59:11.157908] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.090 [2024-07-24 22:59:11.157964] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.090 [2024-07-24 22:59:11.158021] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.090 [2024-07-24 22:59:11.158071] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.091 [2024-07-24 22:59:11.158120] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.091 [2024-07-24 22:59:11.158169] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.091 [2024-07-24 22:59:11.158223] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.091 [2024-07-24 22:59:11.158270] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.091 [2024-07-24 22:59:11.158315] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.091 [2024-07-24 22:59:11.158362] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.091 [2024-07-24 22:59:11.158414] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.091 [2024-07-24 22:59:11.158465] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.091 [2024-07-24 22:59:11.158519] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.091 [2024-07-24 22:59:11.158563] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.091 [2024-07-24 22:59:11.158598] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.091 [2024-07-24 22:59:11.158629] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.091 [2024-07-24 22:59:11.158672] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.091 [2024-07-24 22:59:11.158724] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.091 [2024-07-24 22:59:11.158769] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.091 [2024-07-24 22:59:11.158819] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.091 [2024-07-24 22:59:11.158867] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.091 [2024-07-24 22:59:11.158915] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.091 [2024-07-24 22:59:11.158960] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.091 [2024-07-24 22:59:11.159000] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.091 [2024-07-24 22:59:11.159035] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.091 [2024-07-24 22:59:11.159076] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.091 [2024-07-24 22:59:11.159122] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.091 [2024-07-24 22:59:11.159164] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:39.091 [2024-07-24 22:59:11.159203] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.091 [2024-07-24 22:59:11.159235] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.091 [2024-07-24 22:59:11.159275] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.091 [2024-07-24 22:59:11.159320] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.091 [2024-07-24 22:59:11.159354] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.091 [2024-07-24 22:59:11.159398] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.091 [2024-07-24 22:59:11.159439] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.091 [2024-07-24 22:59:11.159475] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.091 [2024-07-24 22:59:11.159526] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.091 [2024-07-24 22:59:11.159575] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.091 [2024-07-24 22:59:11.159624] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.091 [2024-07-24 22:59:11.159678] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.091 [2024-07-24 22:59:11.159735] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.091 [2024-07-24 22:59:11.159783] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.091 [2024-07-24 22:59:11.159835] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.091 [2024-07-24 22:59:11.159885] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.091 [2024-07-24 22:59:11.159937] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.091 [2024-07-24 22:59:11.159987] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.091 [2024-07-24 22:59:11.160040] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.091 [2024-07-24 22:59:11.160078] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.091 [2024-07-24 22:59:11.160123] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.091 [2024-07-24 22:59:11.160168] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.091 [2024-07-24 22:59:11.160220] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.091 [2024-07-24 22:59:11.160254] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.091 [2024-07-24 22:59:11.160286] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.091 [2024-07-24 22:59:11.160318] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.091 [2024-07-24 22:59:11.160363] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.091 [2024-07-24 22:59:11.160405] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.091 [2024-07-24 22:59:11.160448] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:39.091 [2024-07-24 22:59:11.160496] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.091 [2024-07-24 22:59:11.160545] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.091 [2024-07-24 22:59:11.160594] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.091 [2024-07-24 22:59:11.160649] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.091 [2024-07-24 22:59:11.160700] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.091 [2024-07-24 22:59:11.160755] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.091 [2024-07-24 22:59:11.161097] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.091 [2024-07-24 22:59:11.161150] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.091 [2024-07-24 22:59:11.161200] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.091 [2024-07-24 22:59:11.161246] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.091 [2024-07-24 22:59:11.161297] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.091 [2024-07-24 22:59:11.161348] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.091 [2024-07-24 22:59:11.161400] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.091 [2024-07-24 22:59:11.161462] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.091 [2024-07-24 
22:59:11.161507] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.091 [2024-07-24 22:59:11.161559] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.091 [2024-07-24 22:59:11.161607] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.091 [2024-07-24 22:59:11.161658] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.091 [2024-07-24 22:59:11.161706] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.091 [2024-07-24 22:59:11.161757] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.091 [2024-07-24 22:59:11.161801] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.091 [2024-07-24 22:59:11.161833] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.091 [2024-07-24 22:59:11.161875] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.091 [2024-07-24 22:59:11.161921] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.091 [2024-07-24 22:59:11.161973] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.091 [2024-07-24 22:59:11.162015] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.091 [2024-07-24 22:59:11.162061] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.091 [2024-07-24 22:59:11.162104] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.091 [2024-07-24 22:59:11.162147] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.091 [2024-07-24 22:59:11.162193] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.091 [... same error line repeated with advancing timestamps, 22:59:11.162228 through 22:59:11.164712 ...] Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:13:39.092 [... same error line repeated with advancing timestamps, 22:59:11.164759 through 22:59:11.173845 ...] 00:13:39.094 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:39.094 22:59:11 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:39.094 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:39.094 [... suppression message repeated 3 more times ...] [... same error line repeated with advancing timestamps, 22:59:11.370880 through 22:59:11.373902 ...] 00:13:39.095 [2024-07-24 22:59:11.373931] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:39.095 [2024-07-24 22:59:11.373971] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.095 [2024-07-24 22:59:11.374014] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.095 [2024-07-24 22:59:11.374053] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.095 [2024-07-24 22:59:11.374083] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.095 [2024-07-24 22:59:11.374114] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.095 [2024-07-24 22:59:11.374142] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.095 [2024-07-24 22:59:11.374172] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.095 [2024-07-24 22:59:11.374201] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.095 [2024-07-24 22:59:11.374231] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.095 [2024-07-24 22:59:11.374259] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.095 [2024-07-24 22:59:11.374290] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.095 [2024-07-24 22:59:11.374328] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.095 [2024-07-24 22:59:11.374368] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.095 [2024-07-24 22:59:11.374408] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.095 [2024-07-24 
22:59:11.374445] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.095 [2024-07-24 22:59:11.374481] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.095 [2024-07-24 22:59:11.374510] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.095 [2024-07-24 22:59:11.374540] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.095 [2024-07-24 22:59:11.374568] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.095 [2024-07-24 22:59:11.374598] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.095 [2024-07-24 22:59:11.374627] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.095 [2024-07-24 22:59:11.374656] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.095 [2024-07-24 22:59:11.374695] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.095 [2024-07-24 22:59:11.374736] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.095 [2024-07-24 22:59:11.374784] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.095 [2024-07-24 22:59:11.374827] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.095 [2024-07-24 22:59:11.374871] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.095 [2024-07-24 22:59:11.374920] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.095 [2024-07-24 22:59:11.374966] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.095 [2024-07-24 22:59:11.375015] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.095 [2024-07-24 22:59:11.375079] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.095 [2024-07-24 22:59:11.375127] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.095 [2024-07-24 22:59:11.375174] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.095 [2024-07-24 22:59:11.375223] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.095 [2024-07-24 22:59:11.375274] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.095 [2024-07-24 22:59:11.375320] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.095 [2024-07-24 22:59:11.375366] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.095 [2024-07-24 22:59:11.375419] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.095 [2024-07-24 22:59:11.375475] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.095 [2024-07-24 22:59:11.375519] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.095 [2024-07-24 22:59:11.375565] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.095 [2024-07-24 22:59:11.375619] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.095 [2024-07-24 22:59:11.375666] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.095 
[2024-07-24 22:59:11.375712] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.095 [2024-07-24 22:59:11.375759] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.095 [2024-07-24 22:59:11.375800] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.095 [2024-07-24 22:59:11.375843] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.095 [2024-07-24 22:59:11.375889] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.095 [2024-07-24 22:59:11.375930] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.095 [2024-07-24 22:59:11.375973] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.095 [2024-07-24 22:59:11.376019] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.095 [2024-07-24 22:59:11.376063] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.095 [2024-07-24 22:59:11.376108] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.095 [2024-07-24 22:59:11.376140] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.095 [2024-07-24 22:59:11.376181] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.095 [2024-07-24 22:59:11.376225] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.095 [2024-07-24 22:59:11.376271] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.095 [2024-07-24 22:59:11.376322] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.095 [2024-07-24 22:59:11.376372] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.095 [2024-07-24 22:59:11.376416] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.095 [2024-07-24 22:59:11.376763] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.095 [2024-07-24 22:59:11.376817] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.095 [2024-07-24 22:59:11.376863] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.095 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:13:39.095 [2024-07-24 22:59:11.376909] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.095 [2024-07-24 22:59:11.376963] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.095 [2024-07-24 22:59:11.377019] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.095 [2024-07-24 22:59:11.377068] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.095 [2024-07-24 22:59:11.377114] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.095 [2024-07-24 22:59:11.377164] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.095 [2024-07-24 22:59:11.377211] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.095 [2024-07-24 22:59:11.377252] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.095 [2024-07-24 
22:59:11.377300] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.095 [2024-07-24 22:59:11.377349] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.095 [2024-07-24 22:59:11.377394] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.095 [2024-07-24 22:59:11.377444] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.095 [2024-07-24 22:59:11.377489] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.095 [2024-07-24 22:59:11.377533] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.095 [2024-07-24 22:59:11.377590] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.096 [2024-07-24 22:59:11.377636] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.096 [2024-07-24 22:59:11.377683] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.096 [2024-07-24 22:59:11.377731] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.096 [2024-07-24 22:59:11.377778] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.096 [2024-07-24 22:59:11.377823] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.096 [2024-07-24 22:59:11.377873] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.096 [2024-07-24 22:59:11.377922] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.096 [2024-07-24 22:59:11.377967] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.096 [2024-07-24 22:59:11.378017] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.096 [2024-07-24 22:59:11.378069] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.096 [2024-07-24 22:59:11.378118] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.096 [2024-07-24 22:59:11.378166] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.096 [2024-07-24 22:59:11.378216] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.096 [2024-07-24 22:59:11.378276] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.096 [2024-07-24 22:59:11.378324] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.096 [2024-07-24 22:59:11.378370] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.096 [2024-07-24 22:59:11.378421] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.096 [2024-07-24 22:59:11.378472] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.096 [2024-07-24 22:59:11.378515] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.096 [2024-07-24 22:59:11.378566] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.096 [2024-07-24 22:59:11.378620] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.096 [2024-07-24 22:59:11.378666] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.096 
[2024-07-24 22:59:11.378712] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.096 [2024-07-24 22:59:11.378760] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.096 [2024-07-24 22:59:11.378803] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.096 [2024-07-24 22:59:11.378848] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.096 [2024-07-24 22:59:11.378887] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.096 [2024-07-24 22:59:11.378928] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.096 [2024-07-24 22:59:11.378971] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.096 [2024-07-24 22:59:11.379012] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.096 [2024-07-24 22:59:11.379056] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.096 [2024-07-24 22:59:11.379101] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.096 [2024-07-24 22:59:11.379141] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.096 [2024-07-24 22:59:11.379172] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.096 [2024-07-24 22:59:11.379207] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.096 [2024-07-24 22:59:11.379254] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.096 [2024-07-24 22:59:11.379294] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.096 [2024-07-24 22:59:11.379332] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.096 [2024-07-24 22:59:11.379380] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.096 [2024-07-24 22:59:11.379424] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.096 [2024-07-24 22:59:11.379470] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.096 [2024-07-24 22:59:11.379517] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.096 [2024-07-24 22:59:11.379557] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.096 [2024-07-24 22:59:11.379594] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.096 [2024-07-24 22:59:11.379632] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.096 [2024-07-24 22:59:11.379956] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.096 [2024-07-24 22:59:11.379989] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.096 [2024-07-24 22:59:11.380020] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.096 [2024-07-24 22:59:11.380051] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.096 [2024-07-24 22:59:11.380084] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.096 [2024-07-24 22:59:11.380115] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:39.096 [2024-07-24 22:59:11.380144] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.096 [2024-07-24 22:59:11.380176] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.096 [2024-07-24 22:59:11.380207] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.096 [2024-07-24 22:59:11.380239] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.096 [2024-07-24 22:59:11.380269] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.096 [2024-07-24 22:59:11.380299] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.096 [2024-07-24 22:59:11.380328] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.096 [2024-07-24 22:59:11.380357] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.096 [2024-07-24 22:59:11.380386] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.096 [2024-07-24 22:59:11.380416] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.096 [2024-07-24 22:59:11.380446] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.096 [2024-07-24 22:59:11.380475] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.096 [2024-07-24 22:59:11.380504] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.096 [2024-07-24 22:59:11.380532] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.096 [2024-07-24 22:59:11.380561] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.096 [2024-07-24 22:59:11.380592] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.096 [2024-07-24 22:59:11.380622] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.096 [2024-07-24 22:59:11.380651] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.096 [2024-07-24 22:59:11.380681] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.096 [2024-07-24 22:59:11.380711] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.096 [2024-07-24 22:59:11.380744] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.096 [2024-07-24 22:59:11.380774] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.096 [2024-07-24 22:59:11.380804] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.096 [2024-07-24 22:59:11.380833] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.096 [2024-07-24 22:59:11.380863] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.096 [2024-07-24 22:59:11.380899] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.096 [2024-07-24 22:59:11.380939] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.096 [2024-07-24 22:59:11.380978] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.096 [2024-07-24 22:59:11.381012] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:39.096 [2024-07-24 22:59:11.381041] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.096 [2024-07-24 22:59:11.381072] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.096 [2024-07-24 22:59:11.381101] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.096 [2024-07-24 22:59:11.381130] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.096 [2024-07-24 22:59:11.381159] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.096 [2024-07-24 22:59:11.381200] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.096 [2024-07-24 22:59:11.381241] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.096 [2024-07-24 22:59:11.381282] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.096 [2024-07-24 22:59:11.381325] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.096 [2024-07-24 22:59:11.381365] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.096 [2024-07-24 22:59:11.381406] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.096 [2024-07-24 22:59:11.381451] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.096 [2024-07-24 22:59:11.381497] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.096 [2024-07-24 22:59:11.381545] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.096 [2024-07-24 
22:59:11.381593] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.096 [2024-07-24 22:59:11.381647] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.096 [2024-07-24 22:59:11.381694] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.096 [2024-07-24 22:59:11.381743] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.097 [2024-07-24 22:59:11.381794] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.097 [2024-07-24 22:59:11.381842] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.097 [2024-07-24 22:59:11.381884] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.097 [2024-07-24 22:59:11.381935] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.097 [2024-07-24 22:59:11.381983] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.097 [2024-07-24 22:59:11.382029] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.097 [2024-07-24 22:59:11.382082] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.097 [2024-07-24 22:59:11.382138] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.097 [2024-07-24 22:59:11.382184] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.097 [2024-07-24 22:59:11.382232] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.097 [2024-07-24 22:59:11.382285] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.097 [2024-07-24 22:59:11.397458] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd:
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.100 [2024-07-24 22:59:11.397513] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.100 [2024-07-24 22:59:11.397568] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.100 [2024-07-24 22:59:11.397616] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.100 [2024-07-24 22:59:11.397664] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.100 [2024-07-24 22:59:11.397712] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.100 [2024-07-24 22:59:11.397762] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.100 [2024-07-24 22:59:11.397812] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.100 [2024-07-24 22:59:11.397866] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.100 [2024-07-24 22:59:11.397922] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.100 [2024-07-24 22:59:11.397969] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.100 [2024-07-24 22:59:11.398017] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.100 [2024-07-24 22:59:11.398065] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.100 [2024-07-24 22:59:11.398139] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.100 [2024-07-24 22:59:11.398185] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.100 
[2024-07-24 22:59:11.398513] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.100 [2024-07-24 22:59:11.398566] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.100 [2024-07-24 22:59:11.398609] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.100 [2024-07-24 22:59:11.398651] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.100 [2024-07-24 22:59:11.398683] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.100 [2024-07-24 22:59:11.398730] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.100 [2024-07-24 22:59:11.398772] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.100 [2024-07-24 22:59:11.398811] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.100 [2024-07-24 22:59:11.398861] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.100 [2024-07-24 22:59:11.398907] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.100 [2024-07-24 22:59:11.398954] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.100 [2024-07-24 22:59:11.398999] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.100 [2024-07-24 22:59:11.399050] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.100 [2024-07-24 22:59:11.399084] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.100 [2024-07-24 22:59:11.399122] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.100 [2024-07-24 22:59:11.399165] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.100 [2024-07-24 22:59:11.399198] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.100 [2024-07-24 22:59:11.399231] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.100 [2024-07-24 22:59:11.399272] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.100 [2024-07-24 22:59:11.399319] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.100 [2024-07-24 22:59:11.399362] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.100 [2024-07-24 22:59:11.399404] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.100 [2024-07-24 22:59:11.399435] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.100 [2024-07-24 22:59:11.399466] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.100 [2024-07-24 22:59:11.399498] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.100 [2024-07-24 22:59:11.399530] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.100 [2024-07-24 22:59:11.399576] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.100 [2024-07-24 22:59:11.399625] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.100 [2024-07-24 22:59:11.399674] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:39.100 [2024-07-24 22:59:11.399729] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.100 [2024-07-24 22:59:11.399780] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.100 [2024-07-24 22:59:11.399829] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.100 [2024-07-24 22:59:11.399878] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.100 [2024-07-24 22:59:11.399928] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.100 [2024-07-24 22:59:11.399977] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.100 [2024-07-24 22:59:11.400033] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.100 [2024-07-24 22:59:11.400083] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.100 [2024-07-24 22:59:11.400134] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.100 [2024-07-24 22:59:11.400187] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.100 [2024-07-24 22:59:11.400234] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.101 [2024-07-24 22:59:11.400284] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.101 [2024-07-24 22:59:11.400339] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.101 [2024-07-24 22:59:11.400389] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.101 [2024-07-24 22:59:11.400444] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.101 [2024-07-24 22:59:11.400496] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.101 22:59:11 -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:13:39.101 [2024-07-24 22:59:11.400549] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.101 [2024-07-24 22:59:11.400597] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.101 [2024-07-24 22:59:11.400653] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.101 [2024-07-24 22:59:11.400703] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.101 [2024-07-24 22:59:11.400758] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.101 [2024-07-24 22:59:11.400809] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.101 [2024-07-24 22:59:11.400858] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.101 22:59:11 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:13:39.101 [2024-07-24 22:59:11.400911] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.101 [2024-07-24 22:59:11.400964] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.101 [2024-07-24 22:59:11.401008] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.101 [2024-07-24 22:59:11.401049] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.101 
[2024-07-24 22:59:11.401103] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.101 [2024-07-24 22:59:11.401146] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.101 [2024-07-24 22:59:11.401193] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.101 [2024-07-24 22:59:11.401235] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.101 [2024-07-24 22:59:11.401280] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.101 [2024-07-24 22:59:11.401326] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.101 [2024-07-24 22:59:11.401360] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.101 [2024-07-24 22:59:11.401400] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.101 [2024-07-24 22:59:11.401760] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.101 [2024-07-24 22:59:11.401807] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.101 [2024-07-24 22:59:11.401849] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.101 [2024-07-24 22:59:11.401896] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.101 [2024-07-24 22:59:11.401950] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.101 [2024-07-24 22:59:11.402007] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.101 [2024-07-24 22:59:11.402055] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.101 [2024-07-24 22:59:11.402112] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.101 [2024-07-24 22:59:11.402163] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.101 [2024-07-24 22:59:11.402215] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.101 [2024-07-24 22:59:11.402261] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.101 [2024-07-24 22:59:11.402317] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.101 [2024-07-24 22:59:11.402366] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.101 [2024-07-24 22:59:11.402414] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.101 [2024-07-24 22:59:11.402468] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.101 [2024-07-24 22:59:11.402519] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.101 [2024-07-24 22:59:11.402570] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.101 [2024-07-24 22:59:11.402625] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.101 [2024-07-24 22:59:11.402679] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.101 [2024-07-24 22:59:11.402739] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.101 [2024-07-24 22:59:11.402790] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:39.101 [2024-07-24 22:59:11.402841] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.101 [2024-07-24 22:59:11.402892] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.101 [2024-07-24 22:59:11.402941] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.101 [2024-07-24 22:59:11.402994] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.101 [2024-07-24 22:59:11.403051] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.101 [2024-07-24 22:59:11.403101] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.101 [2024-07-24 22:59:11.403153] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.101 [2024-07-24 22:59:11.403199] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.101 [2024-07-24 22:59:11.403248] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.101 [2024-07-24 22:59:11.403297] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.101 [2024-07-24 22:59:11.403351] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.101 [2024-07-24 22:59:11.403404] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.101 [2024-07-24 22:59:11.403455] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.101 [2024-07-24 22:59:11.403503] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.101 [2024-07-24 22:59:11.403554] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.101 [2024-07-24 22:59:11.403603] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.101 [2024-07-24 22:59:11.403647] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.101 [2024-07-24 22:59:11.403689] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.101 [2024-07-24 22:59:11.403735] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.101 [2024-07-24 22:59:11.403778] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.101 [2024-07-24 22:59:11.403827] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.101 [2024-07-24 22:59:11.403875] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.101 [2024-07-24 22:59:11.403926] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.101 [2024-07-24 22:59:11.403959] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.101 [2024-07-24 22:59:11.404000] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.101 [2024-07-24 22:59:11.404051] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.101 [2024-07-24 22:59:11.404097] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.101 [2024-07-24 22:59:11.404140] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.101 [2024-07-24 22:59:11.404192] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:39.101 [2024-07-24 22:59:11.404232] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.101 [2024-07-24 22:59:11.404282] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.101 [2024-07-24 22:59:11.404324] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.101 [2024-07-24 22:59:11.404358] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.101 [2024-07-24 22:59:11.404397] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.101 [2024-07-24 22:59:11.404437] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.101 [2024-07-24 22:59:11.404478] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.101 [2024-07-24 22:59:11.404513] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.101 [2024-07-24 22:59:11.404547] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.101 [2024-07-24 22:59:11.404591] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.101 [2024-07-24 22:59:11.404624] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.101 [2024-07-24 22:59:11.404665] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.101 [2024-07-24 22:59:11.404719] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.101 [2024-07-24 22:59:11.405052] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.101 [2024-07-24 
22:59:11.405105] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.101 [2024-07-24 22:59:11.405148] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.101 [2024-07-24 22:59:11.405201] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.101 [2024-07-24 22:59:11.405233] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.101 [2024-07-24 22:59:11.405264] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.101 [2024-07-24 22:59:11.405298] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.101 [2024-07-24 22:59:11.405329] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.101 [2024-07-24 22:59:11.405360] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.102 [2024-07-24 22:59:11.405390] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.102 [2024-07-24 22:59:11.405421] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.102 [2024-07-24 22:59:11.405456] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.102 [2024-07-24 22:59:11.405499] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.102 [2024-07-24 22:59:11.405541] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.102 [2024-07-24 22:59:11.405582] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.102 [2024-07-24 22:59:11.405625] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.102 [2024-07-24 22:59:11.405657] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.102 [2024-07-24 22:59:11.405687] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.102 [2024-07-24 22:59:11.405722] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.102 [2024-07-24 22:59:11.405758] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.102 [2024-07-24 22:59:11.405791] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.102 [2024-07-24 22:59:11.405824] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.102 [2024-07-24 22:59:11.405876] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.102 [2024-07-24 22:59:11.405921] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.102 [2024-07-24 22:59:11.405970] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.102 [2024-07-24 22:59:11.406027] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.102 [2024-07-24 22:59:11.406081] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.102 [2024-07-24 22:59:11.406128] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.102 [2024-07-24 22:59:11.406176] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.102 [2024-07-24 22:59:11.406223] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.102 
[2024-07-24 22:59:11.406277] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.102 [2024-07-24 22:59:11.406332] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.102 [2024-07-24 22:59:11.406372] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.102 [2024-07-24 22:59:11.406416] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.102 [2024-07-24 22:59:11.406460] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.102 [2024-07-24 22:59:11.406502] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.102 [2024-07-24 22:59:11.406539] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.102 [2024-07-24 22:59:11.406585] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.102 [2024-07-24 22:59:11.406630] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.102 [2024-07-24 22:59:11.406672] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.102 [2024-07-24 22:59:11.406720] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.102 [2024-07-24 22:59:11.406771] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.102 [2024-07-24 22:59:11.406819] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.102 [2024-07-24 22:59:11.406871] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.102 [2024-07-24 22:59:11.406919] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.102 [2024-07-24 22:59:11.406965] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.105 [2024-07-24 22:59:11.422512] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.105 [2024-07-24 22:59:11.422559] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.105 [2024-07-24 22:59:11.422619] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.105 [2024-07-24 22:59:11.422677] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.105 [2024-07-24 22:59:11.422725] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.105 [2024-07-24 22:59:11.422777] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.105 [2024-07-24 22:59:11.422828] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.105 [2024-07-24 22:59:11.422877] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.105 [2024-07-24 22:59:11.422925] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.105 [2024-07-24 22:59:11.422975] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.105 [2024-07-24 22:59:11.423028] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.105 [2024-07-24 22:59:11.423079] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.105 [2024-07-24 22:59:11.423127] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.105 [2024-07-24 22:59:11.423175] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:39.105 [2024-07-24 22:59:11.423226] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.105 [2024-07-24 22:59:11.423275] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.105 [2024-07-24 22:59:11.423321] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.105 [2024-07-24 22:59:11.423364] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.105 [2024-07-24 22:59:11.423416] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.105 [2024-07-24 22:59:11.423461] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.105 [2024-07-24 22:59:11.423512] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.105 [2024-07-24 22:59:11.423555] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.105 [2024-07-24 22:59:11.423587] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.105 [2024-07-24 22:59:11.423624] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.105 [2024-07-24 22:59:11.423668] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.105 [2024-07-24 22:59:11.423722] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.105 [2024-07-24 22:59:11.423769] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.105 [2024-07-24 22:59:11.423821] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.105 [2024-07-24 22:59:11.423870] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.106 [2024-07-24 22:59:11.423911] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.106 [2024-07-24 22:59:11.423958] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.106 [2024-07-24 22:59:11.423997] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.106 [2024-07-24 22:59:11.424031] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.106 [2024-07-24 22:59:11.424343] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.106 [2024-07-24 22:59:11.424400] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.106 [2024-07-24 22:59:11.424451] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.106 [2024-07-24 22:59:11.424497] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.106 [2024-07-24 22:59:11.424552] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.106 [2024-07-24 22:59:11.424599] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.106 [2024-07-24 22:59:11.424649] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.106 [2024-07-24 22:59:11.424706] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.106 [2024-07-24 22:59:11.424762] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.106 [2024-07-24 22:59:11.424807] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:39.106 [2024-07-24 22:59:11.424853] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.106 [2024-07-24 22:59:11.424897] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.106 [2024-07-24 22:59:11.424940] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.106 [2024-07-24 22:59:11.424981] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.106 [2024-07-24 22:59:11.425014] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.106 [2024-07-24 22:59:11.425045] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.106 [2024-07-24 22:59:11.425077] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.106 [2024-07-24 22:59:11.425119] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.106 [2024-07-24 22:59:11.425164] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.106 [2024-07-24 22:59:11.425206] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.106 [2024-07-24 22:59:11.425246] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.106 [2024-07-24 22:59:11.425278] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.106 [2024-07-24 22:59:11.425308] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.106 [2024-07-24 22:59:11.425339] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.106 [2024-07-24 
22:59:11.425370] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.106 [2024-07-24 22:59:11.425401] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.106 [2024-07-24 22:59:11.425431] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.106 [2024-07-24 22:59:11.425462] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.106 [2024-07-24 22:59:11.425493] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.106 [2024-07-24 22:59:11.425533] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.106 [2024-07-24 22:59:11.425574] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.106 [2024-07-24 22:59:11.425607] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.106 [2024-07-24 22:59:11.425640] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.106 [2024-07-24 22:59:11.425687] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.106 [2024-07-24 22:59:11.425743] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.106 [2024-07-24 22:59:11.425796] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.106 [2024-07-24 22:59:11.425848] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.106 [2024-07-24 22:59:11.425897] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.106 [2024-07-24 22:59:11.425947] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.106 [2024-07-24 22:59:11.425998] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.106 [2024-07-24 22:59:11.426048] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.106 [2024-07-24 22:59:11.426101] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.106 [2024-07-24 22:59:11.426150] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.106 [2024-07-24 22:59:11.426199] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.106 [2024-07-24 22:59:11.426248] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.106 [2024-07-24 22:59:11.426300] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.106 [2024-07-24 22:59:11.426348] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.106 [2024-07-24 22:59:11.426397] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.106 [2024-07-24 22:59:11.426448] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.106 [2024-07-24 22:59:11.426500] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.106 [2024-07-24 22:59:11.426549] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.106 [2024-07-24 22:59:11.426600] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.106 [2024-07-24 22:59:11.426652] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.106 
[2024-07-24 22:59:11.426700] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.106 [2024-07-24 22:59:11.426753] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.106 [2024-07-24 22:59:11.426808] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.106 [2024-07-24 22:59:11.426858] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.106 [2024-07-24 22:59:11.426908] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.106 [2024-07-24 22:59:11.426958] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.106 [2024-07-24 22:59:11.427006] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.106 [2024-07-24 22:59:11.427054] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.106 [2024-07-24 22:59:11.427099] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.106 [2024-07-24 22:59:11.427145] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.106 [2024-07-24 22:59:11.427187] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.106 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:13:39.106 [2024-07-24 22:59:11.427505] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.106 [2024-07-24 22:59:11.427552] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.106 [2024-07-24 22:59:11.427598] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:13:39.106 [2024-07-24 22:59:11.427639] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.106 [2024-07-24 22:59:11.427674] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.106 [2024-07-24 22:59:11.427720] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.106 [2024-07-24 22:59:11.427765] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.106 [2024-07-24 22:59:11.427808] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.106 [2024-07-24 22:59:11.427845] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.106 [2024-07-24 22:59:11.427889] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.106 [2024-07-24 22:59:11.427935] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.106 [2024-07-24 22:59:11.427988] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.106 [2024-07-24 22:59:11.428032] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.106 [2024-07-24 22:59:11.428079] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.106 [2024-07-24 22:59:11.428128] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.106 [2024-07-24 22:59:11.428175] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.106 [2024-07-24 22:59:11.428230] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.106 [2024-07-24 22:59:11.428278] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.106 [2024-07-24 22:59:11.428324] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.106 [2024-07-24 22:59:11.428371] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.106 [2024-07-24 22:59:11.428427] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.106 [2024-07-24 22:59:11.428479] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.106 [2024-07-24 22:59:11.428529] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.107 [2024-07-24 22:59:11.428578] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.107 [2024-07-24 22:59:11.428628] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.107 [2024-07-24 22:59:11.428682] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.107 [2024-07-24 22:59:11.428738] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.107 [2024-07-24 22:59:11.428790] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.107 [2024-07-24 22:59:11.428839] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.107 [2024-07-24 22:59:11.428894] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.107 [2024-07-24 22:59:11.428946] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.107 [2024-07-24 22:59:11.428998] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:39.107 [2024-07-24 22:59:11.429052] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.107 [2024-07-24 22:59:11.429104] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.107 [2024-07-24 22:59:11.429155] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.107 [2024-07-24 22:59:11.429207] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.107 [2024-07-24 22:59:11.429256] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.107 [2024-07-24 22:59:11.429304] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.107 [2024-07-24 22:59:11.429359] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.107 [2024-07-24 22:59:11.429414] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.107 [2024-07-24 22:59:11.429463] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.107 [2024-07-24 22:59:11.429513] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.107 [2024-07-24 22:59:11.429566] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.107 [2024-07-24 22:59:11.429614] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.107 [2024-07-24 22:59:11.429667] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.107 [2024-07-24 22:59:11.429721] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.107 [2024-07-24 22:59:11.429774] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.107 [2024-07-24 22:59:11.429823] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.107 [2024-07-24 22:59:11.429875] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.107 [2024-07-24 22:59:11.429924] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.107 [2024-07-24 22:59:11.429975] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.107 [2024-07-24 22:59:11.430019] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.107 [2024-07-24 22:59:11.430064] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.107 [2024-07-24 22:59:11.430114] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.107 [2024-07-24 22:59:11.430160] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.107 [2024-07-24 22:59:11.430205] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.107 [2024-07-24 22:59:11.430254] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.107 [2024-07-24 22:59:11.430296] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.107 [2024-07-24 22:59:11.430344] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.107 [2024-07-24 22:59:11.430377] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.107 [2024-07-24 22:59:11.430418] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:39.107 [2024-07-24 22:59:11.430468] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.107 [2024-07-24 22:59:11.430517] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.107 [2024-07-24 22:59:11.430824] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.107 [2024-07-24 22:59:11.430867] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.107 [2024-07-24 22:59:11.430908] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.107 [2024-07-24 22:59:11.430942] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.107 [2024-07-24 22:59:11.430975] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.107 [2024-07-24 22:59:11.431011] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.107 [2024-07-24 22:59:11.431045] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.107 [2024-07-24 22:59:11.431077] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.107 [2024-07-24 22:59:11.431111] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.107 [2024-07-24 22:59:11.431143] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.107 [2024-07-24 22:59:11.431174] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.107 [2024-07-24 22:59:11.431206] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.107 [2024-07-24 
22:59:11.431236] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.107 [2024-07-24 22:59:11.431266] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.107 [2024-07-24 22:59:11.431298] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.107 [2024-07-24 22:59:11.431330] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.107 [2024-07-24 22:59:11.431362] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.107 [2024-07-24 22:59:11.431393] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.107 [2024-07-24 22:59:11.431424] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.107 [2024-07-24 22:59:11.431455] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.107 [2024-07-24 22:59:11.431486] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.107 [2024-07-24 22:59:11.431516] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.107 [2024-07-24 22:59:11.431547] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.107 [2024-07-24 22:59:11.431578] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.107 [2024-07-24 22:59:11.431611] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.107 [2024-07-24 22:59:11.431641] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.107 [2024-07-24 22:59:11.431673] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.107 [2024-07-24 22:59:11.431703] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.110 [2024-07-24 22:59:11.446658] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.110 [2024-07-24 22:59:11.446705] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.110 [2024-07-24 22:59:11.446761] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.110 [2024-07-24 22:59:11.446814] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.111 [2024-07-24 22:59:11.446864] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.111 [2024-07-24 22:59:11.446911] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.111 [2024-07-24 22:59:11.446964] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.111 [2024-07-24 22:59:11.447018] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.111 [2024-07-24 22:59:11.447070] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.111 [2024-07-24 22:59:11.447121] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.111 [2024-07-24 22:59:11.447169] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.111 [2024-07-24 22:59:11.447215] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.111 [2024-07-24 22:59:11.447271] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.111 [2024-07-24 22:59:11.447320] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.111 [2024-07-24 22:59:11.447365] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.111 
[2024-07-24 22:59:11.447417] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.111 [2024-07-24 22:59:11.447469] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.111 [2024-07-24 22:59:11.447529] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.111 [2024-07-24 22:59:11.447580] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.111 [2024-07-24 22:59:11.447630] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.111 [2024-07-24 22:59:11.447679] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.111 [2024-07-24 22:59:11.447733] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.111 [2024-07-24 22:59:11.447779] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.111 [2024-07-24 22:59:11.447829] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.111 [2024-07-24 22:59:11.447881] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.111 [2024-07-24 22:59:11.447936] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.111 [2024-07-24 22:59:11.447986] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.111 [2024-07-24 22:59:11.448034] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.111 [2024-07-24 22:59:11.448088] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.111 [2024-07-24 22:59:11.448136] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.111 [2024-07-24 22:59:11.448191] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.111 [2024-07-24 22:59:11.448238] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.111 [2024-07-24 22:59:11.448289] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.111 [2024-07-24 22:59:11.448338] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.111 [2024-07-24 22:59:11.448388] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.111 [2024-07-24 22:59:11.448439] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.111 [2024-07-24 22:59:11.448491] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.111 [2024-07-24 22:59:11.448540] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.111 [2024-07-24 22:59:11.448589] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.111 [2024-07-24 22:59:11.448641] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.111 [2024-07-24 22:59:11.448686] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.111 [2024-07-24 22:59:11.448736] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.111 [2024-07-24 22:59:11.448788] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.111 [2024-07-24 22:59:11.448836] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:39.111 [2024-07-24 22:59:11.448886] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.111 [2024-07-24 22:59:11.448931] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.111 [2024-07-24 22:59:11.448973] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.111 [2024-07-24 22:59:11.449017] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.111 [2024-07-24 22:59:11.449056] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.111 [2024-07-24 22:59:11.449098] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.111 [2024-07-24 22:59:11.449145] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.111 [2024-07-24 22:59:11.449194] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.111 [2024-07-24 22:59:11.449530] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.111 [2024-07-24 22:59:11.449577] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.111 [2024-07-24 22:59:11.449622] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.111 [2024-07-24 22:59:11.449656] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.111 [2024-07-24 22:59:11.449696] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.111 [2024-07-24 22:59:11.449747] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.111 [2024-07-24 22:59:11.449794] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.111 [2024-07-24 22:59:11.449842] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.111 [2024-07-24 22:59:11.449892] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.111 [2024-07-24 22:59:11.449934] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.111 [2024-07-24 22:59:11.449976] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.111 [2024-07-24 22:59:11.450023] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.111 [2024-07-24 22:59:11.450061] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.111 [2024-07-24 22:59:11.450094] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.111 [2024-07-24 22:59:11.450136] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.111 [2024-07-24 22:59:11.450177] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.111 [2024-07-24 22:59:11.450223] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.111 [2024-07-24 22:59:11.450270] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.111 [2024-07-24 22:59:11.450303] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.111 [2024-07-24 22:59:11.450337] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.111 [2024-07-24 22:59:11.450372] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:39.111 [2024-07-24 22:59:11.450403] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.111 [2024-07-24 22:59:11.450436] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.111 [2024-07-24 22:59:11.450468] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.111 [2024-07-24 22:59:11.450499] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.111 [2024-07-24 22:59:11.450529] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.111 [2024-07-24 22:59:11.450561] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.111 [2024-07-24 22:59:11.450591] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.111 [2024-07-24 22:59:11.450622] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.111 [2024-07-24 22:59:11.450657] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.111 [2024-07-24 22:59:11.450689] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.111 [2024-07-24 22:59:11.450724] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.111 [2024-07-24 22:59:11.450756] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.111 [2024-07-24 22:59:11.450787] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.111 [2024-07-24 22:59:11.450818] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.111 [2024-07-24 
22:59:11.450851] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.111 [2024-07-24 22:59:11.450884] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.111 [2024-07-24 22:59:11.450915] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.111 [2024-07-24 22:59:11.450945] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.111 [2024-07-24 22:59:11.450975] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.111 [2024-07-24 22:59:11.451006] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.111 [2024-07-24 22:59:11.451037] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.111 [2024-07-24 22:59:11.451083] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.111 [2024-07-24 22:59:11.451127] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.111 [2024-07-24 22:59:11.451162] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.111 [2024-07-24 22:59:11.451195] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.112 [2024-07-24 22:59:11.451236] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.112 [2024-07-24 22:59:11.451277] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.112 [2024-07-24 22:59:11.451320] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.112 [2024-07-24 22:59:11.451359] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.112 [2024-07-24 22:59:11.451405] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.112 [2024-07-24 22:59:11.451459] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.112 [2024-07-24 22:59:11.451512] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.112 [2024-07-24 22:59:11.451560] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.112 [2024-07-24 22:59:11.451611] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.112 [2024-07-24 22:59:11.451661] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.112 [2024-07-24 22:59:11.451713] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.112 [2024-07-24 22:59:11.451769] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.112 [2024-07-24 22:59:11.451825] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.112 [2024-07-24 22:59:11.451868] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.112 [2024-07-24 22:59:11.451918] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.112 [2024-07-24 22:59:11.451981] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.112 [2024-07-24 22:59:11.452032] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.112 [2024-07-24 22:59:11.452079] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.112 
[2024-07-24 22:59:11.452419] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.112 [2024-07-24 22:59:11.452470] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.112 [2024-07-24 22:59:11.452516] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.112 [2024-07-24 22:59:11.452566] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.112 [2024-07-24 22:59:11.452618] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.112 [2024-07-24 22:59:11.452668] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.112 [2024-07-24 22:59:11.452727] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.112 [2024-07-24 22:59:11.452773] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.112 [2024-07-24 22:59:11.452825] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.112 [2024-07-24 22:59:11.452872] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.112 [2024-07-24 22:59:11.452922] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.112 [2024-07-24 22:59:11.452975] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.112 [2024-07-24 22:59:11.453021] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.112 [2024-07-24 22:59:11.453073] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.112 [2024-07-24 22:59:11.453126] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.112 [2024-07-24 22:59:11.453184] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.112 [2024-07-24 22:59:11.453234] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.112 [2024-07-24 22:59:11.453289] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.112 [2024-07-24 22:59:11.453333] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.112 [2024-07-24 22:59:11.453386] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.112 [2024-07-24 22:59:11.453431] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.112 [2024-07-24 22:59:11.453475] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.112 [2024-07-24 22:59:11.453518] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.112 [2024-07-24 22:59:11.453564] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.112 [2024-07-24 22:59:11.453603] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.112 [2024-07-24 22:59:11.453636] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.112 [2024-07-24 22:59:11.453678] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.112 [2024-07-24 22:59:11.453728] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.112 [2024-07-24 22:59:11.453772] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:39.112 [2024-07-24 22:59:11.453823] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.112 [2024-07-24 22:59:11.453867] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.112 [2024-07-24 22:59:11.453917] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.112 [2024-07-24 22:59:11.453968] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.112 [2024-07-24 22:59:11.454016] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.112 [2024-07-24 22:59:11.454049] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.112 [2024-07-24 22:59:11.454086] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.112 [2024-07-24 22:59:11.454120] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.112 [2024-07-24 22:59:11.454164] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.112 [2024-07-24 22:59:11.454205] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.112 [2024-07-24 22:59:11.454253] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.112 [2024-07-24 22:59:11.454297] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.113 [2024-07-24 22:59:11.454346] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.113 [2024-07-24 22:59:11.454394] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.113 [2024-07-24 22:59:11.454446] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.113 [2024-07-24 22:59:11.454494] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.113 [2024-07-24 22:59:11.454544] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.113 [2024-07-24 22:59:11.454596] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.113 [2024-07-24 22:59:11.454652] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.113 [2024-07-24 22:59:11.454702] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.113 [2024-07-24 22:59:11.454756] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.113 [2024-07-24 22:59:11.454809] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.113 [2024-07-24 22:59:11.454864] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.113 [2024-07-24 22:59:11.454911] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.113 [2024-07-24 22:59:11.454964] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.113 [2024-07-24 22:59:11.455019] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.113 [2024-07-24 22:59:11.455068] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.113 [2024-07-24 22:59:11.455117] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.113 [2024-07-24 22:59:11.455166] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:39.113 [2024-07-24 22:59:11.455215] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.113 [2024-07-24 22:59:11.455268] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.113 [2024-07-24 22:59:11.455324] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.113 [2024-07-24 22:59:11.455372] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.113 [2024-07-24 22:59:11.455420] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.113 [2024-07-24 22:59:11.455749] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.113 [2024-07-24 22:59:11.455806] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.113 [2024-07-24 22:59:11.455858] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.113 [2024-07-24 22:59:11.455910] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.113 [2024-07-24 22:59:11.455956] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.113 [2024-07-24 22:59:11.455996] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.113 [2024-07-24 22:59:11.456040] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.113 [2024-07-24 22:59:11.456085] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.113 [2024-07-24 22:59:11.456128] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.113 [2024-07-24 
22:59:11.456174] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.113 [... previous message repeated verbatim through 22:59:11.470796; only the timestamp and elapsed counter (00:13:39.113 - 00:13:39.116) advance ...] [2024-07-24 
22:59:11.470837] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.116 [2024-07-24 22:59:11.470882] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.116 [2024-07-24 22:59:11.470926] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.116 [2024-07-24 22:59:11.470972] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.116 [2024-07-24 22:59:11.471014] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.116 [2024-07-24 22:59:11.471056] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.116 [2024-07-24 22:59:11.471427] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.116 [2024-07-24 22:59:11.471471] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.116 [2024-07-24 22:59:11.471518] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.116 [2024-07-24 22:59:11.471567] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.116 [2024-07-24 22:59:11.471621] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.116 [2024-07-24 22:59:11.471670] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.116 [2024-07-24 22:59:11.471726] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.116 [2024-07-24 22:59:11.471775] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.116 [2024-07-24 22:59:11.471821] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.116 [2024-07-24 22:59:11.471871] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.116 [2024-07-24 22:59:11.471924] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.116 [2024-07-24 22:59:11.471979] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.117 [2024-07-24 22:59:11.472033] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.117 [2024-07-24 22:59:11.472085] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.117 [2024-07-24 22:59:11.472135] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.117 [2024-07-24 22:59:11.472190] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.117 [2024-07-24 22:59:11.472239] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.117 [2024-07-24 22:59:11.472290] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.117 [2024-07-24 22:59:11.472348] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.117 [2024-07-24 22:59:11.472403] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.117 [2024-07-24 22:59:11.472452] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.117 [2024-07-24 22:59:11.472496] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.117 [2024-07-24 22:59:11.472536] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.117 
[2024-07-24 22:59:11.472582] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.117 [2024-07-24 22:59:11.472624] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.117 [2024-07-24 22:59:11.472666] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.117 [2024-07-24 22:59:11.472717] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.117 [2024-07-24 22:59:11.472751] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.117 [2024-07-24 22:59:11.472791] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.117 [2024-07-24 22:59:11.472844] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.117 [2024-07-24 22:59:11.472892] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.117 [2024-07-24 22:59:11.472930] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.117 [2024-07-24 22:59:11.472968] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.117 [2024-07-24 22:59:11.473000] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.117 [2024-07-24 22:59:11.473044] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.117 [2024-07-24 22:59:11.473095] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.117 [2024-07-24 22:59:11.473151] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.117 [2024-07-24 22:59:11.473206] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.117 [2024-07-24 22:59:11.473255] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.117 [2024-07-24 22:59:11.473307] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.117 [2024-07-24 22:59:11.473358] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.117 [2024-07-24 22:59:11.473410] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.117 [2024-07-24 22:59:11.473460] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.117 [2024-07-24 22:59:11.473509] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.117 [2024-07-24 22:59:11.473564] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.117 [2024-07-24 22:59:11.473613] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.117 [2024-07-24 22:59:11.473663] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.117 [2024-07-24 22:59:11.473712] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.117 [2024-07-24 22:59:11.473764] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.117 [2024-07-24 22:59:11.473814] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.117 [2024-07-24 22:59:11.473872] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.117 [2024-07-24 22:59:11.473924] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:39.117 [2024-07-24 22:59:11.473977] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.117 [2024-07-24 22:59:11.474027] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.117 [2024-07-24 22:59:11.474081] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.117 [2024-07-24 22:59:11.474130] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.117 [2024-07-24 22:59:11.474178] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.117 [2024-07-24 22:59:11.474219] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.117 [2024-07-24 22:59:11.474263] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.117 [2024-07-24 22:59:11.474303] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.117 [2024-07-24 22:59:11.474336] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.117 [2024-07-24 22:59:11.474376] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.117 [2024-07-24 22:59:11.474421] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.117 [2024-07-24 22:59:11.474741] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.117 [2024-07-24 22:59:11.474779] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.117 [2024-07-24 22:59:11.474819] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.117 [2024-07-24 22:59:11.474865] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.117 [2024-07-24 22:59:11.474898] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.117 [2024-07-24 22:59:11.474949] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.117 [2024-07-24 22:59:11.474997] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.117 [2024-07-24 22:59:11.475047] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.117 [2024-07-24 22:59:11.475097] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.117 [2024-07-24 22:59:11.475151] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.117 [2024-07-24 22:59:11.475205] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.117 [2024-07-24 22:59:11.475253] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.117 [2024-07-24 22:59:11.475300] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.117 [2024-07-24 22:59:11.475351] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.117 [2024-07-24 22:59:11.475398] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.117 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:13:39.117 [2024-07-24 22:59:11.475447] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.117 [2024-07-24 22:59:11.475502] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.117 [2024-07-24 
22:59:11.475550] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.117 [2024-07-24 22:59:11.475600] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.117 [2024-07-24 22:59:11.475656] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.118 [2024-07-24 22:59:11.475713] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.118 [2024-07-24 22:59:11.475767] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.118 [2024-07-24 22:59:11.475815] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.118 [2024-07-24 22:59:11.475862] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.118 [2024-07-24 22:59:11.475911] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.118 [2024-07-24 22:59:11.475964] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.118 [2024-07-24 22:59:11.476011] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.118 [2024-07-24 22:59:11.476069] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.118 [2024-07-24 22:59:11.476120] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.118 [2024-07-24 22:59:11.476173] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.118 [2024-07-24 22:59:11.476223] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.118 [2024-07-24 22:59:11.476272] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.118 [2024-07-24 22:59:11.476326] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.118 [2024-07-24 22:59:11.476380] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.118 [2024-07-24 22:59:11.476428] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.118 [2024-07-24 22:59:11.476472] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.118 [2024-07-24 22:59:11.476506] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.118 [2024-07-24 22:59:11.476537] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.118 [2024-07-24 22:59:11.476581] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.118 [2024-07-24 22:59:11.476630] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.118 [2024-07-24 22:59:11.476671] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.118 [2024-07-24 22:59:11.476713] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.118 [2024-07-24 22:59:11.476766] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.118 [2024-07-24 22:59:11.476813] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.118 [2024-07-24 22:59:11.476854] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.118 [2024-07-24 22:59:11.476897] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.118 
[2024-07-24 22:59:11.476931] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.118 [2024-07-24 22:59:11.476979] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.118 [2024-07-24 22:59:11.477021] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.118 [2024-07-24 22:59:11.477065] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.118 [2024-07-24 22:59:11.477106] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.118 [2024-07-24 22:59:11.477139] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.118 [2024-07-24 22:59:11.477172] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.118 [2024-07-24 22:59:11.477214] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.118 [2024-07-24 22:59:11.477256] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.118 [2024-07-24 22:59:11.477297] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.118 [2024-07-24 22:59:11.477329] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.118 [2024-07-24 22:59:11.477362] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.118 [2024-07-24 22:59:11.477414] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.118 [2024-07-24 22:59:11.477461] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.118 [2024-07-24 22:59:11.477506] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.118 [2024-07-24 22:59:11.477561] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.118 [2024-07-24 22:59:11.477611] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.118 [2024-07-24 22:59:11.477662] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.118 [2024-07-24 22:59:11.477997] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.118 [2024-07-24 22:59:11.478051] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.118 [2024-07-24 22:59:11.478102] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.118 [2024-07-24 22:59:11.478157] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.118 [2024-07-24 22:59:11.478216] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.118 [2024-07-24 22:59:11.478269] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.118 [2024-07-24 22:59:11.478316] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.118 [2024-07-24 22:59:11.478369] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.118 [2024-07-24 22:59:11.478419] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.118 [2024-07-24 22:59:11.478467] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.118 [2024-07-24 22:59:11.478523] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:39.118 [2024-07-24 22:59:11.478576] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.118 [2024-07-24 22:59:11.478625] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.118 [2024-07-24 22:59:11.478673] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.118 [2024-07-24 22:59:11.478718] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.118 [2024-07-24 22:59:11.478753] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.118 [2024-07-24 22:59:11.478790] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.118 [2024-07-24 22:59:11.478832] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.118 [2024-07-24 22:59:11.478880] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.118 [2024-07-24 22:59:11.478925] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.118 [2024-07-24 22:59:11.478971] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.118 [2024-07-24 22:59:11.479015] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.118 [2024-07-24 22:59:11.479060] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.118 [2024-07-24 22:59:11.479103] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.118 [2024-07-24 22:59:11.479145] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.118 [2024-07-24 22:59:11.479178] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.118 [2024-07-24 22:59:11.479215] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.118 [2024-07-24 22:59:11.479258] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.118 [2024-07-24 22:59:11.479302] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.118 [2024-07-24 22:59:11.479345] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.118 [2024-07-24 22:59:11.479378] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.118 [2024-07-24 22:59:11.479412] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.118 [2024-07-24 22:59:11.479462] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.118 [2024-07-24 22:59:11.479505] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.118 [2024-07-24 22:59:11.479551] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.118 [2024-07-24 22:59:11.479593] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.118 [2024-07-24 22:59:11.479638] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.118 [2024-07-24 22:59:11.479690] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.118 [2024-07-24 22:59:11.479751] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.118 [2024-07-24 22:59:11.479799] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:39.118 [2024-07-24 22:59:11.479849] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.118 [2024-07-24 22:59:11.479902] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.119 [2024-07-24 22:59:11.479954] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.119 [2024-07-24 22:59:11.480002] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.119 [2024-07-24 22:59:11.480049] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.119 [2024-07-24 22:59:11.480101] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.119 [2024-07-24 22:59:11.480154] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.119 [2024-07-24 22:59:11.480203] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.119 [2024-07-24 22:59:11.480256] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.119 [2024-07-24 22:59:11.480306] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.119 [2024-07-24 22:59:11.480355] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.119 [2024-07-24 22:59:11.480412] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.119 [2024-07-24 22:59:11.480463] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.119 [2024-07-24 22:59:11.480508] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.119 [2024-07-24 
22:59:11.480558] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.119 [... last message repeated for each subsequent read through 22:59:11.495914, elapsed 00:13:39.119-00:13:39.122 ...] [2024-07-24 
22:59:11.495966] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.122 [2024-07-24 22:59:11.496016] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.122 [2024-07-24 22:59:11.496058] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.122 [2024-07-24 22:59:11.496103] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.122 [2024-07-24 22:59:11.496146] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.122 [2024-07-24 22:59:11.496192] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.122 [2024-07-24 22:59:11.496244] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.122 [2024-07-24 22:59:11.496296] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.122 [2024-07-24 22:59:11.496336] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.122 [2024-07-24 22:59:11.496369] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.122 [2024-07-24 22:59:11.496413] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.122 [2024-07-24 22:59:11.496448] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.122 [2024-07-24 22:59:11.496486] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.123 [2024-07-24 22:59:11.496528] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.123 [2024-07-24 22:59:11.496571] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.123 [2024-07-24 22:59:11.496613] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.123 [2024-07-24 22:59:11.496662] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.123 [2024-07-24 22:59:11.496722] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.123 [2024-07-24 22:59:11.496774] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.123 [2024-07-24 22:59:11.496826] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.123 [2024-07-24 22:59:11.496878] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.123 [2024-07-24 22:59:11.496934] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.123 [2024-07-24 22:59:11.496984] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.123 [2024-07-24 22:59:11.497040] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.123 [2024-07-24 22:59:11.497407] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.123 [2024-07-24 22:59:11.497459] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.123 [2024-07-24 22:59:11.497507] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.123 [2024-07-24 22:59:11.497558] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.123 [2024-07-24 22:59:11.497607] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.123 
[2024-07-24 22:59:11.497662] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.123 [2024-07-24 22:59:11.497718] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.123 [2024-07-24 22:59:11.497769] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.123 [2024-07-24 22:59:11.497817] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.123 [2024-07-24 22:59:11.497866] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.123 [2024-07-24 22:59:11.497913] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.123 [2024-07-24 22:59:11.497967] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.123 [2024-07-24 22:59:11.498020] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.123 [2024-07-24 22:59:11.498066] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.123 [2024-07-24 22:59:11.498120] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.123 [2024-07-24 22:59:11.498169] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.123 [2024-07-24 22:59:11.498215] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.123 [2024-07-24 22:59:11.498267] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.123 [2024-07-24 22:59:11.498315] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.123 [2024-07-24 22:59:11.498365] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.123 [2024-07-24 22:59:11.498412] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.413 [2024-07-24 22:59:11.498456] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.413 [2024-07-24 22:59:11.498500] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.413 [2024-07-24 22:59:11.498541] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.413 [2024-07-24 22:59:11.498575] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.413 [2024-07-24 22:59:11.498611] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.413 [2024-07-24 22:59:11.498655] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.413 [2024-07-24 22:59:11.498696] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.413 [2024-07-24 22:59:11.498745] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.413 [2024-07-24 22:59:11.498794] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.413 [2024-07-24 22:59:11.498839] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.413 [2024-07-24 22:59:11.498882] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.413 [2024-07-24 22:59:11.498926] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.413 [2024-07-24 22:59:11.498959] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:39.413 [2024-07-24 22:59:11.498995] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.413 [2024-07-24 22:59:11.499047] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.413 [2024-07-24 22:59:11.499089] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.413 [2024-07-24 22:59:11.499130] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.413 [2024-07-24 22:59:11.499164] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.413 [2024-07-24 22:59:11.499199] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.413 [2024-07-24 22:59:11.499233] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.413 [2024-07-24 22:59:11.499283] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.413 [2024-07-24 22:59:11.499320] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.413 [2024-07-24 22:59:11.499363] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.413 [2024-07-24 22:59:11.499406] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.413 [2024-07-24 22:59:11.499440] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.413 [2024-07-24 22:59:11.499487] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.413 [2024-07-24 22:59:11.499540] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.413 [2024-07-24 22:59:11.499590] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.413 [2024-07-24 22:59:11.499638] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.413 [2024-07-24 22:59:11.499690] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.413 [2024-07-24 22:59:11.499748] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.413 [2024-07-24 22:59:11.499797] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.413 [2024-07-24 22:59:11.499844] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.413 [2024-07-24 22:59:11.499892] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.413 [2024-07-24 22:59:11.499943] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.413 [2024-07-24 22:59:11.500000] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.413 [2024-07-24 22:59:11.500051] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.413 [2024-07-24 22:59:11.500103] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.413 [2024-07-24 22:59:11.500154] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.413 [2024-07-24 22:59:11.500202] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.413 [2024-07-24 22:59:11.500251] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.413 [2024-07-24 22:59:11.500305] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:39.413 [2024-07-24 22:59:11.500619] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.413 [2024-07-24 22:59:11.500656] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.413 [2024-07-24 22:59:11.500691] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.413 [2024-07-24 22:59:11.500729] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.413 [2024-07-24 22:59:11.500772] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.413 [2024-07-24 22:59:11.500812] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.413 [2024-07-24 22:59:11.500862] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.413 [2024-07-24 22:59:11.500909] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.413 [2024-07-24 22:59:11.500963] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.413 [2024-07-24 22:59:11.501016] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.413 [2024-07-24 22:59:11.501068] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.413 [2024-07-24 22:59:11.501113] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.413 [2024-07-24 22:59:11.501167] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.413 [2024-07-24 22:59:11.501216] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.413 [2024-07-24 
22:59:11.501271] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.413 [2024-07-24 22:59:11.501325] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.413 [2024-07-24 22:59:11.501373] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.413 [2024-07-24 22:59:11.501424] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.413 [2024-07-24 22:59:11.501471] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.413 [2024-07-24 22:59:11.501517] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.413 [2024-07-24 22:59:11.501569] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.413 [2024-07-24 22:59:11.501621] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.413 [2024-07-24 22:59:11.501677] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.413 [2024-07-24 22:59:11.501735] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.413 [2024-07-24 22:59:11.501785] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.413 [2024-07-24 22:59:11.501836] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.413 [2024-07-24 22:59:11.501877] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.413 [2024-07-24 22:59:11.501912] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.413 [2024-07-24 22:59:11.501943] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.414 [2024-07-24 22:59:11.501986] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.414 [2024-07-24 22:59:11.502028] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.414 [2024-07-24 22:59:11.502073] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.414 [2024-07-24 22:59:11.502119] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.414 [2024-07-24 22:59:11.502161] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.414 [2024-07-24 22:59:11.502211] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.414 [2024-07-24 22:59:11.502253] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.414 [2024-07-24 22:59:11.502285] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.414 [2024-07-24 22:59:11.502326] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.414 [2024-07-24 22:59:11.502363] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.414 [2024-07-24 22:59:11.502403] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.414 [2024-07-24 22:59:11.502447] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.414 [2024-07-24 22:59:11.502488] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.414 [2024-07-24 22:59:11.502526] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.414 
[2024-07-24 22:59:11.502577] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.414 [2024-07-24 22:59:11.502626] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.414 [2024-07-24 22:59:11.502676] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.414 [2024-07-24 22:59:11.502728] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.414 [2024-07-24 22:59:11.502778] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.414 [2024-07-24 22:59:11.502837] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.414 [2024-07-24 22:59:11.502894] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.414 [2024-07-24 22:59:11.502951] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.414 [2024-07-24 22:59:11.502999] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.414 [2024-07-24 22:59:11.503050] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.414 [2024-07-24 22:59:11.503097] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.414 [2024-07-24 22:59:11.503146] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.414 [2024-07-24 22:59:11.503198] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.414 [2024-07-24 22:59:11.503250] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.414 [2024-07-24 22:59:11.503298] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.414 [2024-07-24 22:59:11.503351] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.414 [2024-07-24 22:59:11.503403] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.414 [2024-07-24 22:59:11.503453] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.414 [2024-07-24 22:59:11.503500] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.414 [2024-07-24 22:59:11.503549] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.414 [2024-07-24 22:59:11.503605] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.414 [2024-07-24 22:59:11.503943] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.414 [2024-07-24 22:59:11.503992] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.414 [2024-07-24 22:59:11.504046] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.414 [2024-07-24 22:59:11.504080] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.414 [2024-07-24 22:59:11.504122] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.414 [2024-07-24 22:59:11.504166] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.414 [2024-07-24 22:59:11.504208] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.414 [2024-07-24 22:59:11.504253] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:39.414 [2024-07-24 22:59:11.504294] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.414 [2024-07-24 22:59:11.504338] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.414 [2024-07-24 22:59:11.504388] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.414 [2024-07-24 22:59:11.504435] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.414 [2024-07-24 22:59:11.504475] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.414 [2024-07-24 22:59:11.504509] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.414 [2024-07-24 22:59:11.504552] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.414 [2024-07-24 22:59:11.504601] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.414 [2024-07-24 22:59:11.504635] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.414 [2024-07-24 22:59:11.504682] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.414 [2024-07-24 22:59:11.504738] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.414 [2024-07-24 22:59:11.504789] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.414 [2024-07-24 22:59:11.504842] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.414 [2024-07-24 22:59:11.504894] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.414 [2024-07-24 22:59:11.504951] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.414 [2024-07-24 22:59:11.505007] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.414 [2024-07-24 22:59:11.505056] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.414 [2024-07-24 22:59:11.505097] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.414 [2024-07-24 22:59:11.505142] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.414 [2024-07-24 22:59:11.505176] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.414 [2024-07-24 22:59:11.505212] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.414 [2024-07-24 22:59:11.505254] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.414 [2024-07-24 22:59:11.505299] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.414 [2024-07-24 22:59:11.505332] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.414 [2024-07-24 22:59:11.505366] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.414 [2024-07-24 22:59:11.505414] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.414 [2024-07-24 22:59:11.505465] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.414 [2024-07-24 22:59:11.505514] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.414 [2024-07-24 22:59:11.505561] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:39.414 [2024-07-24 22:59:11.505616] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
block size 512 > SGL length 1 00:13:39.418 [2024-07-24 22:59:11.521419] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.418 [2024-07-24 22:59:11.521471] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.418 [2024-07-24 22:59:11.521521] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.418 [2024-07-24 22:59:11.521571] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.418 [2024-07-24 22:59:11.521621] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.418 [2024-07-24 22:59:11.521673] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.418 [2024-07-24 22:59:11.521727] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.418 [2024-07-24 22:59:11.521776] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.418 [2024-07-24 22:59:11.521828] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.418 [2024-07-24 22:59:11.521880] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.418 [2024-07-24 22:59:11.521933] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.418 [2024-07-24 22:59:11.521993] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.418 [2024-07-24 22:59:11.522043] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.418 [2024-07-24 22:59:11.522090] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.418 [2024-07-24 
22:59:11.522136] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.418 [2024-07-24 22:59:11.522186] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.418 [2024-07-24 22:59:11.522231] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.418 [2024-07-24 22:59:11.522277] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.418 [2024-07-24 22:59:11.522320] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.418 [2024-07-24 22:59:11.522363] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.418 [2024-07-24 22:59:11.522404] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.418 [2024-07-24 22:59:11.522450] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.418 [2024-07-24 22:59:11.522497] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.418 [2024-07-24 22:59:11.522540] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.418 [2024-07-24 22:59:11.522575] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.418 [2024-07-24 22:59:11.522605] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.418 [2024-07-24 22:59:11.522638] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.418 [2024-07-24 22:59:11.522672] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.418 [2024-07-24 22:59:11.522708] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.418 [2024-07-24 22:59:11.522756] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.418 [2024-07-24 22:59:11.522797] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.418 [2024-07-24 22:59:11.522846] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.418 [2024-07-24 22:59:11.522896] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.418 [2024-07-24 22:59:11.522951] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.418 [2024-07-24 22:59:11.522997] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.418 [2024-07-24 22:59:11.523043] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.418 [2024-07-24 22:59:11.523096] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.418 [2024-07-24 22:59:11.523146] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.418 [2024-07-24 22:59:11.523197] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.418 [2024-07-24 22:59:11.523250] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.418 [2024-07-24 22:59:11.523301] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.418 [2024-07-24 22:59:11.523635] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.418 [2024-07-24 22:59:11.523684] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.418 
[2024-07-24 22:59:11.523738] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.418 [2024-07-24 22:59:11.523790] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.418 [2024-07-24 22:59:11.523835] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.418 [2024-07-24 22:59:11.523870] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.418 [2024-07-24 22:59:11.523909] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.418 [2024-07-24 22:59:11.523955] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.418 [2024-07-24 22:59:11.523998] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.418 [2024-07-24 22:59:11.524046] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.418 [2024-07-24 22:59:11.524088] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.418 [2024-07-24 22:59:11.524134] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.418 [2024-07-24 22:59:11.524180] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.418 [2024-07-24 22:59:11.524224] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.418 [2024-07-24 22:59:11.524268] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.418 [2024-07-24 22:59:11.524302] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.418 [2024-07-24 22:59:11.524346] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.418 [2024-07-24 22:59:11.524379] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.418 [2024-07-24 22:59:11.524412] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.418 [2024-07-24 22:59:11.524454] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.419 [2024-07-24 22:59:11.524494] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.419 [2024-07-24 22:59:11.524538] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.419 [2024-07-24 22:59:11.524580] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.419 [2024-07-24 22:59:11.524626] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.419 [2024-07-24 22:59:11.524678] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.419 [2024-07-24 22:59:11.524731] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.419 [2024-07-24 22:59:11.524784] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.419 [2024-07-24 22:59:11.524838] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.419 [2024-07-24 22:59:11.524888] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.419 [2024-07-24 22:59:11.524939] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.419 [2024-07-24 22:59:11.524992] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:39.419 [2024-07-24 22:59:11.525042] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.419 [2024-07-24 22:59:11.525093] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.419 [2024-07-24 22:59:11.525136] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.419 [2024-07-24 22:59:11.525189] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.419 [2024-07-24 22:59:11.525233] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.419 [2024-07-24 22:59:11.525268] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.419 [2024-07-24 22:59:11.525303] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.419 [2024-07-24 22:59:11.525338] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.419 [2024-07-24 22:59:11.525373] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.419 [2024-07-24 22:59:11.525420] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.419 [2024-07-24 22:59:11.525474] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.419 [2024-07-24 22:59:11.525522] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.419 [2024-07-24 22:59:11.525575] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.419 [2024-07-24 22:59:11.525628] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.419 [2024-07-24 22:59:11.525680] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.419 [2024-07-24 22:59:11.525731] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.419 [2024-07-24 22:59:11.525782] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.419 [2024-07-24 22:59:11.525835] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.419 [2024-07-24 22:59:11.525889] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.419 [2024-07-24 22:59:11.525940] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.419 [2024-07-24 22:59:11.525989] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.419 [2024-07-24 22:59:11.526044] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.419 [2024-07-24 22:59:11.526094] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.419 [2024-07-24 22:59:11.526147] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.419 [2024-07-24 22:59:11.526194] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.419 [2024-07-24 22:59:11.526249] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.419 [2024-07-24 22:59:11.526300] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.419 [2024-07-24 22:59:11.526352] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.419 [2024-07-24 22:59:11.526402] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:39.419 [2024-07-24 22:59:11.526457] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.419 [2024-07-24 22:59:11.526509] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.419 [2024-07-24 22:59:11.526561] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.419 [2024-07-24 22:59:11.526878] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.419 [2024-07-24 22:59:11.526926] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.419 [2024-07-24 22:59:11.526970] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.419 [2024-07-24 22:59:11.527013] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.419 [2024-07-24 22:59:11.527051] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.419 [2024-07-24 22:59:11.527085] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.419 [2024-07-24 22:59:11.527132] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.419 [2024-07-24 22:59:11.527173] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.419 [2024-07-24 22:59:11.527209] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.419 [2024-07-24 22:59:11.527255] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.419 [2024-07-24 22:59:11.527298] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.419 [2024-07-24 
22:59:11.527335] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.419 [2024-07-24 22:59:11.527378] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.419 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:13:39.419 [2024-07-24 22:59:11.527434] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.419 [2024-07-24 22:59:11.527487] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.419 [2024-07-24 22:59:11.527535] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.419 [2024-07-24 22:59:11.527588] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.419 [2024-07-24 22:59:11.527638] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.419 [2024-07-24 22:59:11.527688] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.419 [2024-07-24 22:59:11.527740] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.419 [2024-07-24 22:59:11.527793] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.419 [2024-07-24 22:59:11.527848] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.419 [2024-07-24 22:59:11.527899] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.419 [2024-07-24 22:59:11.527955] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.419 [2024-07-24 22:59:11.528009] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
00:13:39.419 [2024-07-24 22:59:11.528059] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.419 [2024-07-24 22:59:11.528113] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.419 [2024-07-24 22:59:11.528164] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.419 [2024-07-24 22:59:11.528217] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.419 [2024-07-24 22:59:11.528275] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.419 [2024-07-24 22:59:11.528322] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.419 [2024-07-24 22:59:11.528365] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.419 [2024-07-24 22:59:11.528422] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.419 [2024-07-24 22:59:11.528466] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.419 [2024-07-24 22:59:11.528509] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.419 [2024-07-24 22:59:11.528542] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.419 [2024-07-24 22:59:11.528585] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.419 [2024-07-24 22:59:11.528628] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.419 [2024-07-24 22:59:11.528672] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.419 [2024-07-24 22:59:11.528718] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.419 [2024-07-24 22:59:11.528762] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.419 [2024-07-24 22:59:11.528795] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.419 [2024-07-24 22:59:11.528827] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.419 [2024-07-24 22:59:11.528874] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.420 [2024-07-24 22:59:11.528923] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.420 [2024-07-24 22:59:11.528969] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.420 [2024-07-24 22:59:11.529016] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.420 [2024-07-24 22:59:11.529070] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.420 [2024-07-24 22:59:11.529127] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.420 [2024-07-24 22:59:11.529177] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.420 [2024-07-24 22:59:11.529225] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.420 [2024-07-24 22:59:11.529275] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.420 [2024-07-24 22:59:11.529329] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.420 [2024-07-24 22:59:11.529385] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:39.420 [2024-07-24 22:59:11.529438] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.420 [2024-07-24 22:59:11.529489] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.420 [2024-07-24 22:59:11.529537] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.420 [2024-07-24 22:59:11.529588] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.420 [2024-07-24 22:59:11.529642] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.420 [2024-07-24 22:59:11.529695] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.420 [2024-07-24 22:59:11.529751] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.420 [2024-07-24 22:59:11.529804] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.420 [2024-07-24 22:59:11.529850] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.420 [2024-07-24 22:59:11.529893] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.420 [2024-07-24 22:59:11.530195] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.420 [2024-07-24 22:59:11.530242] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.420 [2024-07-24 22:59:11.530283] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.420 [2024-07-24 22:59:11.530323] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.420 [2024-07-24 22:59:11.530370] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.420 [2024-07-24 22:59:11.530406] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.420 [2024-07-24 22:59:11.530439] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.420 [2024-07-24 22:59:11.530484] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.420 [2024-07-24 22:59:11.530534] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.420 [2024-07-24 22:59:11.530585] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.420 [2024-07-24 22:59:11.530631] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.420 [2024-07-24 22:59:11.530680] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.420 [2024-07-24 22:59:11.530735] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.420 [2024-07-24 22:59:11.530786] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.420 [2024-07-24 22:59:11.530839] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.420 [2024-07-24 22:59:11.530892] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.420 [2024-07-24 22:59:11.530940] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.420 [2024-07-24 22:59:11.530986] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.420 [2024-07-24 22:59:11.531033] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:39.420 [2024-07-24 22:59:11.531085] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... the preceding error line repeats several hundred times, timestamps 22:59:11.531138 through 22:59:11.546628 ...]
00:13:39.424 [2024-07-24 22:59:11.546673] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:39.424 [2024-07-24 22:59:11.546708] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.424 [2024-07-24 22:59:11.546744] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.424 [2024-07-24 22:59:11.546778] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.424 [2024-07-24 22:59:11.546822] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.424 [2024-07-24 22:59:11.546864] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.424 [2024-07-24 22:59:11.546907] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.424 [2024-07-24 22:59:11.546952] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.424 [2024-07-24 22:59:11.546985] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.424 [2024-07-24 22:59:11.547016] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.424 [2024-07-24 22:59:11.547046] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.424 [2024-07-24 22:59:11.547081] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.424 [2024-07-24 22:59:11.547113] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.424 [2024-07-24 22:59:11.547145] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.424 [2024-07-24 22:59:11.547175] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.424 [2024-07-24 
22:59:11.547205] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.424 [2024-07-24 22:59:11.547235] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.424 [2024-07-24 22:59:11.547267] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.424 [2024-07-24 22:59:11.547298] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.424 [2024-07-24 22:59:11.547329] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.424 [2024-07-24 22:59:11.547372] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.424 [2024-07-24 22:59:11.547413] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.424 [2024-07-24 22:59:11.547455] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.424 [2024-07-24 22:59:11.547501] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.424 [2024-07-24 22:59:11.547553] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.424 [2024-07-24 22:59:11.547603] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.425 [2024-07-24 22:59:11.547658] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.425 [2024-07-24 22:59:11.547705] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.425 [2024-07-24 22:59:11.547763] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.425 [2024-07-24 22:59:11.547817] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.425 [2024-07-24 22:59:11.547866] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.425 [2024-07-24 22:59:11.547915] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.425 [2024-07-24 22:59:11.547966] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.425 [2024-07-24 22:59:11.548020] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.425 [2024-07-24 22:59:11.548072] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.425 [2024-07-24 22:59:11.548125] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.425 [2024-07-24 22:59:11.548182] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.425 [2024-07-24 22:59:11.548235] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.425 [2024-07-24 22:59:11.548285] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.425 [2024-07-24 22:59:11.548335] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.425 [2024-07-24 22:59:11.548390] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.425 [2024-07-24 22:59:11.548438] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.425 [2024-07-24 22:59:11.548487] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.425 [2024-07-24 22:59:11.548531] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.425 
[2024-07-24 22:59:11.548572] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.425 [2024-07-24 22:59:11.548613] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.425 [2024-07-24 22:59:11.548663] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.425 [2024-07-24 22:59:11.548706] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.425 [2024-07-24 22:59:11.548757] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.425 [2024-07-24 22:59:11.548797] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.425 [2024-07-24 22:59:11.548830] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.425 [2024-07-24 22:59:11.548868] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.425 [2024-07-24 22:59:11.548909] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.425 [2024-07-24 22:59:11.548948] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.425 [2024-07-24 22:59:11.549005] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.425 [2024-07-24 22:59:11.549058] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.425 [2024-07-24 22:59:11.549110] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.425 [2024-07-24 22:59:11.549161] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.425 [2024-07-24 22:59:11.549211] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.425 [2024-07-24 22:59:11.549260] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.425 [2024-07-24 22:59:11.549602] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.425 [2024-07-24 22:59:11.549657] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.425 [2024-07-24 22:59:11.549703] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.425 [2024-07-24 22:59:11.549764] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.425 [2024-07-24 22:59:11.549816] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.425 [2024-07-24 22:59:11.549864] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.425 [2024-07-24 22:59:11.549913] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.425 [2024-07-24 22:59:11.549966] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.425 [2024-07-24 22:59:11.550016] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.425 [2024-07-24 22:59:11.550073] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.425 [2024-07-24 22:59:11.550126] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.425 [2024-07-24 22:59:11.550178] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.425 [2024-07-24 22:59:11.550227] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:39.425 [2024-07-24 22:59:11.550276] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.425 [2024-07-24 22:59:11.550326] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.425 [2024-07-24 22:59:11.550375] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.425 [2024-07-24 22:59:11.550435] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.425 [2024-07-24 22:59:11.550486] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.425 [2024-07-24 22:59:11.550539] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.425 [2024-07-24 22:59:11.550591] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.425 [2024-07-24 22:59:11.550641] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.425 [2024-07-24 22:59:11.550692] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.425 [2024-07-24 22:59:11.550748] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.425 [2024-07-24 22:59:11.550799] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.425 [2024-07-24 22:59:11.550847] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.425 [2024-07-24 22:59:11.550896] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.425 [2024-07-24 22:59:11.550950] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.425 [2024-07-24 22:59:11.550999] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.425 [2024-07-24 22:59:11.551055] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.425 [2024-07-24 22:59:11.551112] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.425 [2024-07-24 22:59:11.551162] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.425 [2024-07-24 22:59:11.551220] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.425 [2024-07-24 22:59:11.551273] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.425 [2024-07-24 22:59:11.551324] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.425 [2024-07-24 22:59:11.551376] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.425 [2024-07-24 22:59:11.551429] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.425 [2024-07-24 22:59:11.551478] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.425 [2024-07-24 22:59:11.551525] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.425 [2024-07-24 22:59:11.551566] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.425 [2024-07-24 22:59:11.551611] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.425 [2024-07-24 22:59:11.551654] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.425 [2024-07-24 22:59:11.551702] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:39.425 [2024-07-24 22:59:11.551756] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.426 [2024-07-24 22:59:11.551798] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.426 [2024-07-24 22:59:11.551840] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.426 [2024-07-24 22:59:11.551874] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.426 [2024-07-24 22:59:11.551911] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.426 [2024-07-24 22:59:11.551957] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.426 [2024-07-24 22:59:11.552003] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.426 [2024-07-24 22:59:11.552047] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.426 [2024-07-24 22:59:11.552094] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.426 [2024-07-24 22:59:11.552137] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.426 [2024-07-24 22:59:11.552183] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.426 [2024-07-24 22:59:11.552224] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.426 [2024-07-24 22:59:11.552262] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.426 [2024-07-24 22:59:11.552295] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.426 [2024-07-24 
22:59:11.552339] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.426 [2024-07-24 22:59:11.552384] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.426 [2024-07-24 22:59:11.552427] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.426 [2024-07-24 22:59:11.552465] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.426 [2024-07-24 22:59:11.552518] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.426 [2024-07-24 22:59:11.552564] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.426 [2024-07-24 22:59:11.552607] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.426 [2024-07-24 22:59:11.552899] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.426 [2024-07-24 22:59:11.552935] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.426 [2024-07-24 22:59:11.552967] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.426 [2024-07-24 22:59:11.552997] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.426 [2024-07-24 22:59:11.553027] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.426 [2024-07-24 22:59:11.553058] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.426 [2024-07-24 22:59:11.553089] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.426 [2024-07-24 22:59:11.553121] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.426 [2024-07-24 22:59:11.553151] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.426 [2024-07-24 22:59:11.553183] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.426 [2024-07-24 22:59:11.553212] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.426 [2024-07-24 22:59:11.553242] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.426 [2024-07-24 22:59:11.553274] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.426 [2024-07-24 22:59:11.553306] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.426 [2024-07-24 22:59:11.553336] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.426 [2024-07-24 22:59:11.553366] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.426 [2024-07-24 22:59:11.553399] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.426 [2024-07-24 22:59:11.553431] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.426 [2024-07-24 22:59:11.553463] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.426 [2024-07-24 22:59:11.553492] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.426 [2024-07-24 22:59:11.553522] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.426 [2024-07-24 22:59:11.553553] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.426 
[2024-07-24 22:59:11.553594] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.426 [2024-07-24 22:59:11.553636] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.426 [2024-07-24 22:59:11.553678] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.426 [2024-07-24 22:59:11.553724] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.426 [2024-07-24 22:59:11.553756] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.426 [2024-07-24 22:59:11.553785] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.426 [2024-07-24 22:59:11.553816] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.426 [2024-07-24 22:59:11.553848] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.426 [2024-07-24 22:59:11.553878] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.426 [2024-07-24 22:59:11.553921] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.426 [2024-07-24 22:59:11.553963] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.426 [2024-07-24 22:59:11.554007] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.426 [2024-07-24 22:59:11.554055] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.426 [2024-07-24 22:59:11.554092] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.426 [2024-07-24 22:59:11.554123] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.426 [2024-07-24 22:59:11.554154] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.426 [2024-07-24 22:59:11.554195] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.426 [2024-07-24 22:59:11.554236] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.426 [2024-07-24 22:59:11.554287] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.426 [2024-07-24 22:59:11.554339] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.426 [2024-07-24 22:59:11.554395] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.426 [2024-07-24 22:59:11.554444] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.426 [2024-07-24 22:59:11.554495] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.426 [2024-07-24 22:59:11.554542] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.426 [2024-07-24 22:59:11.554590] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.426 [2024-07-24 22:59:11.554642] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.426 [2024-07-24 22:59:11.554702] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.426 [2024-07-24 22:59:11.554760] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.426 [2024-07-24 22:59:11.554811] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:39.426 [2024-07-24 22:59:11.554858] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.426 [2024-07-24 22:59:11.554908] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.426 [2024-07-24 22:59:11.554955] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.426 [2024-07-24 22:59:11.555008] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.426 [2024-07-24 22:59:11.555056] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.426 [2024-07-24 22:59:11.555114] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.426 [2024-07-24 22:59:11.555171] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.426 [2024-07-24 22:59:11.555217] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.426 [2024-07-24 22:59:11.555271] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.426 [2024-07-24 22:59:11.555319] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.426 [2024-07-24 22:59:11.555370] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.426 [2024-07-24 22:59:11.555423] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.426 [2024-07-24 22:59:11.555474] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.426 [2024-07-24 22:59:11.555862] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.426 [2024-07-24 22:59:11.555910] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.426 [2024-07-24 22:59:11.555955] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[identical error line repeated for each test iteration between 22:59:11.555955 and 22:59:11.570014; duplicate log lines omitted]
true 00:13:39.428
> SGL length 1 00:13:39.430 [2024-07-24 22:59:11.570063] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.430 [2024-07-24 22:59:11.570113] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.430 [2024-07-24 22:59:11.570168] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.430 [2024-07-24 22:59:11.570217] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.430 [2024-07-24 22:59:11.570269] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.430 [2024-07-24 22:59:11.570317] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.430 [2024-07-24 22:59:11.570370] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.430 [2024-07-24 22:59:11.570435] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.430 [2024-07-24 22:59:11.570481] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.430 [2024-07-24 22:59:11.570528] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.430 [2024-07-24 22:59:11.570580] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.430 [2024-07-24 22:59:11.570633] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.430 [2024-07-24 22:59:11.570686] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.430 [2024-07-24 22:59:11.570748] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.430 [2024-07-24 22:59:11.570798] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.430 [2024-07-24 22:59:11.570848] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.430 [2024-07-24 22:59:11.570900] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.430 [2024-07-24 22:59:11.570945] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.430 [2024-07-24 22:59:11.570987] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.430 [2024-07-24 22:59:11.571030] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.430 [2024-07-24 22:59:11.571071] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.430 [2024-07-24 22:59:11.571116] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.430 [2024-07-24 22:59:11.571161] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.430 [2024-07-24 22:59:11.571196] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.430 [2024-07-24 22:59:11.571237] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.430 [2024-07-24 22:59:11.571281] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.430 [2024-07-24 22:59:11.571327] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.430 [2024-07-24 22:59:11.571632] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.430 [2024-07-24 22:59:11.571675] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:39.430 [2024-07-24 22:59:11.571723] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.430 [2024-07-24 22:59:11.571769] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.430 [2024-07-24 22:59:11.571820] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.430 [2024-07-24 22:59:11.571862] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.430 [2024-07-24 22:59:11.571907] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.430 [2024-07-24 22:59:11.571950] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.430 [2024-07-24 22:59:11.571984] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.430 [2024-07-24 22:59:11.572024] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.430 [2024-07-24 22:59:11.572067] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.430 [2024-07-24 22:59:11.572110] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.430 [2024-07-24 22:59:11.572152] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.430 [2024-07-24 22:59:11.572199] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.430 [2024-07-24 22:59:11.572241] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.430 [2024-07-24 22:59:11.572274] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.430 [2024-07-24 
22:59:11.572318] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.430 [2024-07-24 22:59:11.572351] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.430 [2024-07-24 22:59:11.572386] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.430 [2024-07-24 22:59:11.572420] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.430 [2024-07-24 22:59:11.572469] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.430 [2024-07-24 22:59:11.572526] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.431 [2024-07-24 22:59:11.572576] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.431 [2024-07-24 22:59:11.572626] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.431 [2024-07-24 22:59:11.572676] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.431 [2024-07-24 22:59:11.572727] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.431 [2024-07-24 22:59:11.572778] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.431 [2024-07-24 22:59:11.572826] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.431 [2024-07-24 22:59:11.572879] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.431 [2024-07-24 22:59:11.572927] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.431 [2024-07-24 22:59:11.572975] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.431 [2024-07-24 22:59:11.573009] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.431 [2024-07-24 22:59:11.573044] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.431 [2024-07-24 22:59:11.573090] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.431 [2024-07-24 22:59:11.573141] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.431 [2024-07-24 22:59:11.573191] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.431 [2024-07-24 22:59:11.573224] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.431 [2024-07-24 22:59:11.573257] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.431 [2024-07-24 22:59:11.573290] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.431 [2024-07-24 22:59:11.573322] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.431 [2024-07-24 22:59:11.573353] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.431 [2024-07-24 22:59:11.573394] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.431 [2024-07-24 22:59:11.573435] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.431 [2024-07-24 22:59:11.573483] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.431 [2024-07-24 22:59:11.573527] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.431 
[2024-07-24 22:59:11.573568] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.431 [2024-07-24 22:59:11.573608] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.431 [2024-07-24 22:59:11.573641] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.431 [2024-07-24 22:59:11.573693] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.431 [2024-07-24 22:59:11.573747] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.431 [2024-07-24 22:59:11.573797] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.431 [2024-07-24 22:59:11.573849] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.431 [2024-07-24 22:59:11.573900] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.431 [2024-07-24 22:59:11.573953] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.431 [2024-07-24 22:59:11.574002] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.431 [2024-07-24 22:59:11.574049] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.431 [2024-07-24 22:59:11.574094] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.431 [2024-07-24 22:59:11.574141] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.431 [2024-07-24 22:59:11.574189] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.431 [2024-07-24 22:59:11.574240] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.431 [2024-07-24 22:59:11.574293] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.431 [2024-07-24 22:59:11.574341] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.431 [2024-07-24 22:59:11.574395] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.431 [2024-07-24 22:59:11.574443] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.431 [2024-07-24 22:59:11.574785] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.431 [2024-07-24 22:59:11.574838] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.431 [2024-07-24 22:59:11.574884] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.431 [2024-07-24 22:59:11.574937] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.431 [2024-07-24 22:59:11.574986] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.431 [2024-07-24 22:59:11.575034] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.431 [2024-07-24 22:59:11.575085] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.431 [2024-07-24 22:59:11.575137] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.431 [2024-07-24 22:59:11.575193] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.431 [2024-07-24 22:59:11.575238] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:39.431 [2024-07-24 22:59:11.575276] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.431 [2024-07-24 22:59:11.575319] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.431 [2024-07-24 22:59:11.575363] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.431 [2024-07-24 22:59:11.575411] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.431 [2024-07-24 22:59:11.575444] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.431 [2024-07-24 22:59:11.575479] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.431 [2024-07-24 22:59:11.575528] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.431 [2024-07-24 22:59:11.575573] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.431 [2024-07-24 22:59:11.575611] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.431 [2024-07-24 22:59:11.575660] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.431 [2024-07-24 22:59:11.575703] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.431 [2024-07-24 22:59:11.575758] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.431 [2024-07-24 22:59:11.575801] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.431 [2024-07-24 22:59:11.575841] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.431 [2024-07-24 22:59:11.575872] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.431 [2024-07-24 22:59:11.575913] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.431 [2024-07-24 22:59:11.575957] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.431 [2024-07-24 22:59:11.575999] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.431 [2024-07-24 22:59:11.576042] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.431 [2024-07-24 22:59:11.576089] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.431 [2024-07-24 22:59:11.576141] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.431 [2024-07-24 22:59:11.576190] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.431 [2024-07-24 22:59:11.576246] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.431 [2024-07-24 22:59:11.576292] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.431 [2024-07-24 22:59:11.576338] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.431 [2024-07-24 22:59:11.576387] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.431 [2024-07-24 22:59:11.576441] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.431 [2024-07-24 22:59:11.576493] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.431 [2024-07-24 22:59:11.576540] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:39.431 [2024-07-24 22:59:11.576589] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.431 [2024-07-24 22:59:11.576642] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.431 [2024-07-24 22:59:11.576692] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.431 [2024-07-24 22:59:11.576748] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.431 [2024-07-24 22:59:11.576797] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.431 [2024-07-24 22:59:11.576851] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.431 [2024-07-24 22:59:11.576896] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.431 [2024-07-24 22:59:11.576945] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.431 [2024-07-24 22:59:11.576995] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.431 [2024-07-24 22:59:11.577046] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.431 [2024-07-24 22:59:11.577097] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.431 [2024-07-24 22:59:11.577147] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.432 [2024-07-24 22:59:11.577196] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.432 [2024-07-24 22:59:11.577244] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.432 [2024-07-24 
22:59:11.577297] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.432 [2024-07-24 22:59:11.577345] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.432 [2024-07-24 22:59:11.577393] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.432 [2024-07-24 22:59:11.577439] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.432 [2024-07-24 22:59:11.577489] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.432 [2024-07-24 22:59:11.577539] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.432 [2024-07-24 22:59:11.577590] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.432 [2024-07-24 22:59:11.577637] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.432 [2024-07-24 22:59:11.577683] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.432 [2024-07-24 22:59:11.577731] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.432 [2024-07-24 22:59:11.578044] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.432 [2024-07-24 22:59:11.578091] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.432 [2024-07-24 22:59:11.578132] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.432 [2024-07-24 22:59:11.578178] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.432 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 
00:13:39.432 [2024-07-24 22:59:11.578224] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.432 [2024-07-24 22:59:11.578257] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.432 [2024-07-24 22:59:11.578297] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.432 [2024-07-24 22:59:11.578333] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.432 [2024-07-24 22:59:11.578366] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.432 [2024-07-24 22:59:11.578400] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.432 [2024-07-24 22:59:11.578447] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.432 [2024-07-24 22:59:11.578498] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.432 [2024-07-24 22:59:11.578551] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.432 [2024-07-24 22:59:11.578603] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.432 [2024-07-24 22:59:11.578649] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.432 [2024-07-24 22:59:11.578695] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.432 [2024-07-24 22:59:11.578744] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.432 [2024-07-24 22:59:11.578793] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.432 [2024-07-24 22:59:11.578853] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.432 [2024-07-24 22:59:11.578896] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.432 [2024-07-24 22:59:11.578930] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.432 [2024-07-24 22:59:11.578960] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.432 [2024-07-24 22:59:11.579003] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.432 [2024-07-24 22:59:11.579044] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.432 [2024-07-24 22:59:11.579087] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.432 [2024-07-24 22:59:11.579121] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.432 [2024-07-24 22:59:11.579156] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.432 [2024-07-24 22:59:11.579189] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.432 [2024-07-24 22:59:11.579230] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.432 [2024-07-24 22:59:11.579279] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.432 [2024-07-24 22:59:11.579321] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.432 [2024-07-24 22:59:11.579356] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.432 [2024-07-24 22:59:11.579399] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:39.432 [2024-07-24 22:59:11.579442] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.432 [2024-07-24 22:59:11.579484] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.432 [2024-07-24 22:59:11.579528] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.432 [2024-07-24 22:59:11.579562] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.432 [2024-07-24 22:59:11.579611] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.432 [2024-07-24 22:59:11.579667] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.432 [2024-07-24 22:59:11.579717] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.432 [2024-07-24 22:59:11.579764] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.432 [2024-07-24 22:59:11.579811] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.432 [2024-07-24 22:59:11.579863] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.432 [2024-07-24 22:59:11.579924] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.432 [2024-07-24 22:59:11.579975] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.432 [2024-07-24 22:59:11.580022] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.432 [2024-07-24 22:59:11.580077] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.432 [2024-07-24 22:59:11.580125] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.432
00:13:39.434 22:59:11 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3139355
22:59:11 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.436 [2024-07-24 22:59:11.595720] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.436 [2024-07-24 22:59:11.595767] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.436 [2024-07-24 22:59:11.595801] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.436 [2024-07-24 22:59:11.595836] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.436 [2024-07-24 22:59:11.595875] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.436 [2024-07-24 22:59:11.595918] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.436 [2024-07-24 22:59:11.595953] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.436 [2024-07-24 22:59:11.595988] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.436 [2024-07-24 22:59:11.596033] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.436 [2024-07-24 22:59:11.596065] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.436 [2024-07-24 22:59:11.596115] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.436 [2024-07-24 22:59:11.596163] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.436 [2024-07-24 22:59:11.596219] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.436 [2024-07-24 22:59:11.596271] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:39.436 [2024-07-24 22:59:11.596327] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.436 [2024-07-24 22:59:11.596376] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.436 [2024-07-24 22:59:11.596429] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.436 [2024-07-24 22:59:11.596481] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.436 [2024-07-24 22:59:11.596532] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.436 [2024-07-24 22:59:11.596581] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.436 [2024-07-24 22:59:11.596630] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.436 [2024-07-24 22:59:11.596693] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.436 [2024-07-24 22:59:11.596744] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.436 [2024-07-24 22:59:11.596793] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.436 [2024-07-24 22:59:11.596846] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.436 [2024-07-24 22:59:11.596896] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.436 [2024-07-24 22:59:11.596945] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.436 [2024-07-24 22:59:11.596999] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.436 [2024-07-24 22:59:11.597045] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.436 [2024-07-24 22:59:11.597094] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.436 [2024-07-24 22:59:11.597147] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.436 [2024-07-24 22:59:11.597195] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.436 [2024-07-24 22:59:11.597552] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.436 [2024-07-24 22:59:11.597601] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.436 [2024-07-24 22:59:11.597644] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.436 [2024-07-24 22:59:11.597679] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.436 [2024-07-24 22:59:11.597717] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.436 [2024-07-24 22:59:11.597752] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.436 [2024-07-24 22:59:11.597785] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.436 [2024-07-24 22:59:11.597828] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.436 [2024-07-24 22:59:11.597869] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.436 [2024-07-24 22:59:11.597913] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.436 [2024-07-24 22:59:11.597960] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:39.436 [2024-07-24 22:59:11.598007] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.436 [2024-07-24 22:59:11.598059] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.436 [2024-07-24 22:59:11.598111] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.436 [2024-07-24 22:59:11.598158] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.436 [2024-07-24 22:59:11.598209] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.436 [2024-07-24 22:59:11.598263] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.436 [2024-07-24 22:59:11.598313] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.436 [2024-07-24 22:59:11.598369] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.436 [2024-07-24 22:59:11.598415] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.436 [2024-07-24 22:59:11.598461] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.436 [2024-07-24 22:59:11.598512] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.436 [2024-07-24 22:59:11.598551] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.436 [2024-07-24 22:59:11.598602] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.436 [2024-07-24 22:59:11.598635] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.436 [2024-07-24 
22:59:11.598671] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.436 [2024-07-24 22:59:11.598712] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.436 [2024-07-24 22:59:11.598751] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.436 [2024-07-24 22:59:11.598783] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.436 [2024-07-24 22:59:11.598833] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.436 [2024-07-24 22:59:11.598884] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.436 [2024-07-24 22:59:11.598939] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.436 [2024-07-24 22:59:11.598991] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.436 [2024-07-24 22:59:11.599048] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.436 [2024-07-24 22:59:11.599097] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.436 [2024-07-24 22:59:11.599145] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.436 [2024-07-24 22:59:11.599194] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.436 [2024-07-24 22:59:11.599245] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.436 [2024-07-24 22:59:11.599297] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.436 [2024-07-24 22:59:11.599353] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.436 [2024-07-24 22:59:11.599404] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.436 [2024-07-24 22:59:11.599456] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.436 [2024-07-24 22:59:11.599510] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.436 [2024-07-24 22:59:11.599562] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.436 [2024-07-24 22:59:11.599613] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.436 [2024-07-24 22:59:11.599667] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.436 [2024-07-24 22:59:11.599722] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.436 [2024-07-24 22:59:11.599781] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.436 [2024-07-24 22:59:11.599827] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.436 [2024-07-24 22:59:11.599878] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.436 [2024-07-24 22:59:11.599921] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.436 [2024-07-24 22:59:11.599966] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.436 [2024-07-24 22:59:11.600010] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.436 [2024-07-24 22:59:11.600045] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.436 
[2024-07-24 22:59:11.600077] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.436 [2024-07-24 22:59:11.600125] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.436 [2024-07-24 22:59:11.600168] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.436 [2024-07-24 22:59:11.600216] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.437 [2024-07-24 22:59:11.600264] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.437 [2024-07-24 22:59:11.600308] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.437 [2024-07-24 22:59:11.600347] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.437 [2024-07-24 22:59:11.600383] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.437 [2024-07-24 22:59:11.600428] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.437 [2024-07-24 22:59:11.600471] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.437 [2024-07-24 22:59:11.600819] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.437 [2024-07-24 22:59:11.600874] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.437 [2024-07-24 22:59:11.600922] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.437 [2024-07-24 22:59:11.600981] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.437 [2024-07-24 22:59:11.601034] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.437 [2024-07-24 22:59:11.601088] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.437 [2024-07-24 22:59:11.601140] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.437 [2024-07-24 22:59:11.601189] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.437 [2024-07-24 22:59:11.601237] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.437 [2024-07-24 22:59:11.601290] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.437 [2024-07-24 22:59:11.601346] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.437 [2024-07-24 22:59:11.601394] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.437 [2024-07-24 22:59:11.601449] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.437 [2024-07-24 22:59:11.601497] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.437 [2024-07-24 22:59:11.601548] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.437 [2024-07-24 22:59:11.601595] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.437 [2024-07-24 22:59:11.601643] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.437 [2024-07-24 22:59:11.601686] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.437 [2024-07-24 22:59:11.601729] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:39.437 [2024-07-24 22:59:11.601775] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.437 [2024-07-24 22:59:11.601821] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.437 [2024-07-24 22:59:11.601864] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.437 [2024-07-24 22:59:11.601902] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.437 [2024-07-24 22:59:11.601934] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.437 [2024-07-24 22:59:11.601978] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.437 [2024-07-24 22:59:11.602023] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.437 [2024-07-24 22:59:11.602066] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.437 [2024-07-24 22:59:11.602110] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.437 [2024-07-24 22:59:11.602145] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.437 [2024-07-24 22:59:11.602186] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.437 [2024-07-24 22:59:11.602229] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.437 [2024-07-24 22:59:11.602262] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.437 [2024-07-24 22:59:11.602296] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.437 [2024-07-24 22:59:11.602339] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.437 [2024-07-24 22:59:11.602386] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.437 [2024-07-24 22:59:11.602432] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.437 [2024-07-24 22:59:11.602483] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.437 [2024-07-24 22:59:11.602535] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.437 [2024-07-24 22:59:11.602581] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.437 [2024-07-24 22:59:11.602628] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.437 [2024-07-24 22:59:11.602678] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.437 [2024-07-24 22:59:11.602733] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.437 [2024-07-24 22:59:11.602783] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.437 [2024-07-24 22:59:11.602841] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.437 [2024-07-24 22:59:11.602899] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.437 [2024-07-24 22:59:11.602952] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.437 [2024-07-24 22:59:11.603001] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.437 [2024-07-24 22:59:11.603053] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:39.437 [2024-07-24 22:59:11.603102] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.437 [2024-07-24 22:59:11.603157] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.437 [2024-07-24 22:59:11.603208] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.437 [2024-07-24 22:59:11.603257] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.437 [2024-07-24 22:59:11.603311] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.437 [2024-07-24 22:59:11.603362] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.437 [2024-07-24 22:59:11.603409] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.437 [2024-07-24 22:59:11.603442] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.437 [2024-07-24 22:59:11.603482] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.437 [2024-07-24 22:59:11.603524] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.437 [2024-07-24 22:59:11.603565] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.437 [2024-07-24 22:59:11.603617] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.437 [2024-07-24 22:59:11.603666] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.437 [2024-07-24 22:59:11.603701] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.437 [2024-07-24 
22:59:11.603748] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.437 [2024-07-24 22:59:11.604075] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.437 [2024-07-24 22:59:11.604128] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.437 [2024-07-24 22:59:11.604179] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.437 [2024-07-24 22:59:11.604229] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.437 [2024-07-24 22:59:11.604281] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.437 [2024-07-24 22:59:11.604337] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.437 [2024-07-24 22:59:11.604383] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.437 [2024-07-24 22:59:11.604427] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.437 [2024-07-24 22:59:11.604470] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.437 [2024-07-24 22:59:11.604503] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.437 [2024-07-24 22:59:11.604538] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.437 [2024-07-24 22:59:11.604582] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.437 [2024-07-24 22:59:11.604619] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.437 [2024-07-24 22:59:11.604662] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.437 [2024-07-24 22:59:11.604702] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.437 [2024-07-24 22:59:11.604739] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.437 [2024-07-24 22:59:11.604775] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.437 [2024-07-24 22:59:11.604831] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.437 [2024-07-24 22:59:11.604880] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.437 [2024-07-24 22:59:11.604930] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.437 [2024-07-24 22:59:11.604985] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.437 [2024-07-24 22:59:11.605032] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.437 [2024-07-24 22:59:11.605085] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.437 [2024-07-24 22:59:11.605136] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.437 [2024-07-24 22:59:11.605184] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.437 [2024-07-24 22:59:11.605239] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.437 [2024-07-24 22:59:11.605291] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.437 [2024-07-24 22:59:11.605341] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.437 
[2024-07-24 22:59:11.605388] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.438 [2024-07-24 22:59:11.605444] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.438 [2024-07-24 22:59:11.605499] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.438 [2024-07-24 22:59:11.605544] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.438 [2024-07-24 22:59:11.605596] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.438 [2024-07-24 22:59:11.605647] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.438 [2024-07-24 22:59:11.605696] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.438 [2024-07-24 22:59:11.605753] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.438 [2024-07-24 22:59:11.605806] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.438 [2024-07-24 22:59:11.605853] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.438 [2024-07-24 22:59:11.605898] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.438 [2024-07-24 22:59:11.605939] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.438 [2024-07-24 22:59:11.605982] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.438 [2024-07-24 22:59:11.606023] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.438 [2024-07-24 22:59:11.606054] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:13:39.438 [2024-07-24 22:59:11.606096] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... same ctrlr_bdev.c:298 *ERROR* line repeated verbatim, timestamps 22:59:11.606136 through 22:59:11.621044 (log lines 00:13:39.438-00:13:39.441) ...]
00:13:39.441
[2024-07-24 22:59:11.621087] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.441 [2024-07-24 22:59:11.621127] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.441 [2024-07-24 22:59:11.621168] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.441 [2024-07-24 22:59:11.621201] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.441 [2024-07-24 22:59:11.621238] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.441 [2024-07-24 22:59:11.621281] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.441 [2024-07-24 22:59:11.621329] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.441 [2024-07-24 22:59:11.621371] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.441 [2024-07-24 22:59:11.621417] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.441 [2024-07-24 22:59:11.621460] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.441 [2024-07-24 22:59:11.621504] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.441 [2024-07-24 22:59:11.621554] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.441 [2024-07-24 22:59:11.621595] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.441 [2024-07-24 22:59:11.621627] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.441 [2024-07-24 22:59:11.621670] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.441 [2024-07-24 22:59:11.621719] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.441 [2024-07-24 22:59:11.621754] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.441 [2024-07-24 22:59:11.621787] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.441 [2024-07-24 22:59:11.621834] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.441 [2024-07-24 22:59:11.621874] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.441 [2024-07-24 22:59:11.621908] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.441 [2024-07-24 22:59:11.621950] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.441 [2024-07-24 22:59:11.621994] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.441 [2024-07-24 22:59:11.622030] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.441 [2024-07-24 22:59:11.622075] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.441 [2024-07-24 22:59:11.622124] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.441 [2024-07-24 22:59:11.622173] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.441 [2024-07-24 22:59:11.622222] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.441 [2024-07-24 22:59:11.622270] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:39.441 [2024-07-24 22:59:11.622319] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.441 [2024-07-24 22:59:11.622371] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.441 [2024-07-24 22:59:11.622422] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.441 [2024-07-24 22:59:11.622471] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.441 [2024-07-24 22:59:11.622522] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.441 [2024-07-24 22:59:11.622572] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.441 [2024-07-24 22:59:11.622619] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.441 [2024-07-24 22:59:11.622669] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.441 [2024-07-24 22:59:11.622722] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.441 [2024-07-24 22:59:11.622774] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.441 [2024-07-24 22:59:11.622825] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.441 [2024-07-24 22:59:11.622874] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.441 [2024-07-24 22:59:11.622923] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.441 [2024-07-24 22:59:11.622975] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.441 [2024-07-24 22:59:11.623023] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.441 [2024-07-24 22:59:11.623073] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.441 [2024-07-24 22:59:11.623119] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.441 [2024-07-24 22:59:11.623164] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.441 [2024-07-24 22:59:11.623206] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.441 [2024-07-24 22:59:11.623250] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.441 [2024-07-24 22:59:11.623294] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.441 [2024-07-24 22:59:11.623594] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.441 [2024-07-24 22:59:11.623632] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.441 [2024-07-24 22:59:11.623675] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.441 [2024-07-24 22:59:11.623718] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.441 [2024-07-24 22:59:11.623762] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.441 [2024-07-24 22:59:11.623810] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.441 [2024-07-24 22:59:11.623861] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.441 [2024-07-24 22:59:11.623911] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:39.441 [2024-07-24 22:59:11.623963] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.441 [2024-07-24 22:59:11.624013] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.441 [2024-07-24 22:59:11.624066] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.441 [2024-07-24 22:59:11.624111] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.441 [2024-07-24 22:59:11.624160] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.441 [2024-07-24 22:59:11.624214] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.441 [2024-07-24 22:59:11.624259] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.441 [2024-07-24 22:59:11.624308] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.441 [2024-07-24 22:59:11.624363] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.441 [2024-07-24 22:59:11.624413] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.441 [2024-07-24 22:59:11.624463] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.441 [2024-07-24 22:59:11.624516] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.441 [2024-07-24 22:59:11.624568] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.441 [2024-07-24 22:59:11.624618] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.442 [2024-07-24 
22:59:11.624670] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.442 [2024-07-24 22:59:11.624725] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.442 [2024-07-24 22:59:11.624769] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.442 [2024-07-24 22:59:11.624815] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.442 [2024-07-24 22:59:11.624857] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.442 [2024-07-24 22:59:11.624899] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.442 [2024-07-24 22:59:11.624931] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.442 [2024-07-24 22:59:11.624968] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.442 [2024-07-24 22:59:11.625019] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.442 [2024-07-24 22:59:11.625066] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.442 [2024-07-24 22:59:11.625107] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.442 [2024-07-24 22:59:11.625149] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.442 [2024-07-24 22:59:11.625192] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.442 [2024-07-24 22:59:11.625233] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.442 [2024-07-24 22:59:11.625266] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.442 [2024-07-24 22:59:11.625306] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.442 [2024-07-24 22:59:11.625338] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.442 [2024-07-24 22:59:11.625374] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.442 [2024-07-24 22:59:11.625417] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.442 [2024-07-24 22:59:11.625460] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.442 [2024-07-24 22:59:11.625494] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.442 [2024-07-24 22:59:11.625541] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.442 [2024-07-24 22:59:11.625584] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.442 [2024-07-24 22:59:11.625635] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.442 [2024-07-24 22:59:11.625689] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.442 [2024-07-24 22:59:11.625745] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.442 [2024-07-24 22:59:11.625795] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.442 [2024-07-24 22:59:11.625844] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.442 [2024-07-24 22:59:11.625891] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.442 
[2024-07-24 22:59:11.625940] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.442 [2024-07-24 22:59:11.625989] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.442 [2024-07-24 22:59:11.626039] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.442 [2024-07-24 22:59:11.626085] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.442 [2024-07-24 22:59:11.626133] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.442 [2024-07-24 22:59:11.626187] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.442 [2024-07-24 22:59:11.626240] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.442 [2024-07-24 22:59:11.626292] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.442 [2024-07-24 22:59:11.626338] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.442 [2024-07-24 22:59:11.626388] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.442 [2024-07-24 22:59:11.626447] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.442 [2024-07-24 22:59:11.626492] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.442 [2024-07-24 22:59:11.626544] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.442 [2024-07-24 22:59:11.626887] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.442 [2024-07-24 22:59:11.626942] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.442 [2024-07-24 22:59:11.626991] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.442 [2024-07-24 22:59:11.627035] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.442 [2024-07-24 22:59:11.627068] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.442 [2024-07-24 22:59:11.627107] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.442 [2024-07-24 22:59:11.627152] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.442 [2024-07-24 22:59:11.627199] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.442 [2024-07-24 22:59:11.627241] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.442 [2024-07-24 22:59:11.627284] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.442 [2024-07-24 22:59:11.627335] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.442 [2024-07-24 22:59:11.627383] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.442 [2024-07-24 22:59:11.627426] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.442 [2024-07-24 22:59:11.627464] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.442 [2024-07-24 22:59:11.627498] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.442 [2024-07-24 22:59:11.627537] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:39.442 [2024-07-24 22:59:11.627581] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.442 [2024-07-24 22:59:11.627626] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.442 [2024-07-24 22:59:11.627661] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.442 [2024-07-24 22:59:11.627698] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.442 [2024-07-24 22:59:11.627746] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.442 [2024-07-24 22:59:11.627791] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.442 [2024-07-24 22:59:11.627832] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.442 [2024-07-24 22:59:11.627872] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.442 [2024-07-24 22:59:11.627905] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.442 [2024-07-24 22:59:11.627935] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.442 [2024-07-24 22:59:11.627966] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.442 [2024-07-24 22:59:11.628017] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.442 [2024-07-24 22:59:11.628067] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.442 [2024-07-24 22:59:11.628116] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.442 [2024-07-24 22:59:11.628165] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.442 [2024-07-24 22:59:11.628214] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.442 [2024-07-24 22:59:11.628267] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.442 [2024-07-24 22:59:11.628319] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.442 [2024-07-24 22:59:11.628376] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.442 [2024-07-24 22:59:11.628421] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.442 [2024-07-24 22:59:11.628467] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.442 [2024-07-24 22:59:11.628520] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.442 [2024-07-24 22:59:11.628571] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.442 [2024-07-24 22:59:11.628620] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.442 [2024-07-24 22:59:11.628672] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.442 [2024-07-24 22:59:11.628721] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.442 [2024-07-24 22:59:11.628771] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.442 [2024-07-24 22:59:11.628824] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.442 [2024-07-24 22:59:11.628873] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:39.442 [2024-07-24 22:59:11.628933] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.442 [2024-07-24 22:59:11.628984] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.442 [2024-07-24 22:59:11.629036] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.442 [2024-07-24 22:59:11.629084] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.442 [2024-07-24 22:59:11.629137] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.442 [2024-07-24 22:59:11.629187] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.442 [2024-07-24 22:59:11.629233] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.442 [2024-07-24 22:59:11.629286] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.442 [2024-07-24 22:59:11.629332] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.442 [2024-07-24 22:59:11.629378] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.442 [2024-07-24 22:59:11.629431] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.443 [2024-07-24 22:59:11.629474] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.443 [2024-07-24 22:59:11.629515] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.443 [2024-07-24 22:59:11.629548] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.443 [2024-07-24 
22:59:11.629586] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.443 [2024-07-24 22:59:11.629629] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.443 [2024-07-24 22:59:11.629672] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.443 [2024-07-24 22:59:11.629724] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.443 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:13:39.443 [2024-07-24 22:59:11.630032] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.443 [2024-07-24 22:59:11.630076] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.443 [2024-07-24 22:59:11.630117] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.443 [2024-07-24 22:59:11.630157] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.443 [2024-07-24 22:59:11.630198] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.443 [2024-07-24 22:59:11.630230] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.443 [2024-07-24 22:59:11.630284] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.443 [2024-07-24 22:59:11.630332] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.443 [2024-07-24 22:59:11.630385] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.443 [2024-07-24 22:59:11.630432] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
00:13:39.443 [2024-07-24 22:59:11.630484] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.443 [2024-07-24 22:59:11.630534] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.443 [2024-07-24 22:59:11.630590] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.443 [2024-07-24 22:59:11.630644] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.443 [2024-07-24 22:59:11.630693] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.443 [2024-07-24 22:59:11.630746] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.443 [2024-07-24 22:59:11.630794] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.443 [2024-07-24 22:59:11.630843] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.443 [2024-07-24 22:59:11.630899] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.443 [2024-07-24 22:59:11.630961] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.443 [2024-07-24 22:59:11.631009] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.443 [2024-07-24 22:59:11.631064] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.443 [2024-07-24 22:59:11.631113] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.443 [2024-07-24 22:59:11.631164] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.443 [2024-07-24 22:59:11.631215] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.443 [2024-07-24 22:59:11.631267] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[identical ctrlr_bdev.c:298 nvmf_bdev_ctrlr_read_cmd *ERROR* lines repeated from 22:59:11.631317 through 22:59:11.645801; duplicates omitted]
[2024-07-24 22:59:11.645850] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.446 [2024-07-24 22:59:11.645902] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.446 [2024-07-24 22:59:11.645955] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.446 [2024-07-24 22:59:11.646009] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.446 [2024-07-24 22:59:11.646057] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.446 [2024-07-24 22:59:11.646365] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.446 [2024-07-24 22:59:11.646411] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.446 [2024-07-24 22:59:11.646453] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.446 [2024-07-24 22:59:11.646494] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.446 [2024-07-24 22:59:11.646538] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.446 [2024-07-24 22:59:11.646573] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.446 [2024-07-24 22:59:11.646606] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.446 [2024-07-24 22:59:11.646641] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.446 [2024-07-24 22:59:11.646678] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.446 [2024-07-24 22:59:11.646724] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.446 [2024-07-24 22:59:11.646769] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.446 [2024-07-24 22:59:11.646811] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.446 [2024-07-24 22:59:11.646844] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.446 [2024-07-24 22:59:11.646879] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.446 [2024-07-24 22:59:11.646910] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.446 [2024-07-24 22:59:11.646941] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.446 [2024-07-24 22:59:11.646973] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.446 [2024-07-24 22:59:11.647005] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.446 [2024-07-24 22:59:11.647037] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.446 [2024-07-24 22:59:11.647068] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.446 [2024-07-24 22:59:11.647099] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.446 [2024-07-24 22:59:11.647131] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.446 [2024-07-24 22:59:11.647164] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.446 [2024-07-24 22:59:11.647211] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:39.446 [2024-07-24 22:59:11.647261] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.446 [2024-07-24 22:59:11.647309] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.446 [2024-07-24 22:59:11.647355] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.446 [2024-07-24 22:59:11.647408] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.446 [2024-07-24 22:59:11.647457] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.446 [2024-07-24 22:59:11.647505] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.446 [2024-07-24 22:59:11.647560] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.446 [2024-07-24 22:59:11.647612] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.446 [2024-07-24 22:59:11.647672] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.446 [2024-07-24 22:59:11.647724] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.446 [2024-07-24 22:59:11.647774] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.446 [2024-07-24 22:59:11.647826] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.446 [2024-07-24 22:59:11.647881] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.446 [2024-07-24 22:59:11.647926] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.446 [2024-07-24 22:59:11.647979] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.446 [2024-07-24 22:59:11.648034] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.446 [2024-07-24 22:59:11.648084] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.446 [2024-07-24 22:59:11.648132] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.446 [2024-07-24 22:59:11.648184] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.446 [2024-07-24 22:59:11.648226] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.446 [2024-07-24 22:59:11.648267] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.446 [2024-07-24 22:59:11.648317] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.446 [2024-07-24 22:59:11.648361] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.446 [2024-07-24 22:59:11.648402] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.447 [2024-07-24 22:59:11.648445] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.447 [2024-07-24 22:59:11.648488] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.447 [2024-07-24 22:59:11.648522] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.447 [2024-07-24 22:59:11.648562] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.447 [2024-07-24 22:59:11.648606] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:39.447 [2024-07-24 22:59:11.648645] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.447 [2024-07-24 22:59:11.648688] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.447 [2024-07-24 22:59:11.648735] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.447 [2024-07-24 22:59:11.648774] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.447 [2024-07-24 22:59:11.648818] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.447 [2024-07-24 22:59:11.648865] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.447 [2024-07-24 22:59:11.648918] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.447 [2024-07-24 22:59:11.648971] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.447 [2024-07-24 22:59:11.649021] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.447 [2024-07-24 22:59:11.649075] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.447 [2024-07-24 22:59:11.649400] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.447 [2024-07-24 22:59:11.649452] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.447 [2024-07-24 22:59:11.649498] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.447 [2024-07-24 22:59:11.649548] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.447 [2024-07-24 
22:59:11.649600] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.447 [2024-07-24 22:59:11.649655] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.447 [2024-07-24 22:59:11.649700] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.447 [2024-07-24 22:59:11.649755] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.447 [2024-07-24 22:59:11.649807] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.447 [2024-07-24 22:59:11.649861] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.447 [2024-07-24 22:59:11.649911] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.447 [2024-07-24 22:59:11.649958] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.447 [2024-07-24 22:59:11.650009] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.447 [2024-07-24 22:59:11.650062] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.447 [2024-07-24 22:59:11.650107] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.447 [2024-07-24 22:59:11.650157] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.447 [2024-07-24 22:59:11.650209] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.447 [2024-07-24 22:59:11.650258] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.447 [2024-07-24 22:59:11.650307] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.447 [2024-07-24 22:59:11.650365] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.447 [2024-07-24 22:59:11.650417] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.447 [2024-07-24 22:59:11.650469] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.447 [2024-07-24 22:59:11.650518] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.447 [2024-07-24 22:59:11.650572] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.447 [2024-07-24 22:59:11.650627] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.447 [2024-07-24 22:59:11.650673] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.447 [2024-07-24 22:59:11.650725] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.447 [2024-07-24 22:59:11.650775] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.447 [2024-07-24 22:59:11.650825] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.447 [2024-07-24 22:59:11.650880] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.447 [2024-07-24 22:59:11.650940] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.447 [2024-07-24 22:59:11.650988] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.447 [2024-07-24 22:59:11.651037] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.447 
[2024-07-24 22:59:11.651084] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.447 [2024-07-24 22:59:11.651134] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.447 [2024-07-24 22:59:11.651184] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.447 [2024-07-24 22:59:11.651236] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.447 [2024-07-24 22:59:11.651281] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.447 [2024-07-24 22:59:11.651338] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.447 [2024-07-24 22:59:11.651383] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.447 [2024-07-24 22:59:11.651425] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.447 [2024-07-24 22:59:11.651471] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.447 [2024-07-24 22:59:11.651518] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.447 [2024-07-24 22:59:11.651566] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.447 [2024-07-24 22:59:11.651613] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.447 [2024-07-24 22:59:11.651664] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.447 [2024-07-24 22:59:11.651709] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.447 [2024-07-24 22:59:11.651745] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.447 [2024-07-24 22:59:11.651792] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.447 [2024-07-24 22:59:11.651839] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.447 [2024-07-24 22:59:11.651883] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.447 [2024-07-24 22:59:11.651928] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.447 [2024-07-24 22:59:11.651974] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.447 [2024-07-24 22:59:11.652014] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.447 [2024-07-24 22:59:11.652059] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.447 [2024-07-24 22:59:11.652104] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.447 [2024-07-24 22:59:11.652137] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.447 [2024-07-24 22:59:11.652176] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.447 [2024-07-24 22:59:11.652220] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.447 [2024-07-24 22:59:11.652270] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.447 [2024-07-24 22:59:11.652324] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.447 [2024-07-24 22:59:11.652365] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:39.447 [2024-07-24 22:59:11.652414] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.447 [2024-07-24 22:59:11.652459] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.447 [2024-07-24 22:59:11.652778] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.447 [2024-07-24 22:59:11.652812] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.447 [2024-07-24 22:59:11.652844] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.447 [2024-07-24 22:59:11.652875] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.448 [2024-07-24 22:59:11.652906] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.448 [2024-07-24 22:59:11.652935] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.448 [2024-07-24 22:59:11.652966] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.448 [2024-07-24 22:59:11.652998] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.448 [2024-07-24 22:59:11.653029] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.448 [2024-07-24 22:59:11.653060] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.448 [2024-07-24 22:59:11.653093] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.448 [2024-07-24 22:59:11.653126] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.448 [2024-07-24 22:59:11.653157] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.448 [2024-07-24 22:59:11.653189] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.448 [2024-07-24 22:59:11.653220] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.448 [2024-07-24 22:59:11.653250] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.448 [2024-07-24 22:59:11.653280] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.448 [2024-07-24 22:59:11.653312] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.448 [2024-07-24 22:59:11.653342] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.448 [2024-07-24 22:59:11.653373] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.448 [2024-07-24 22:59:11.653403] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.448 [2024-07-24 22:59:11.653434] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.448 [2024-07-24 22:59:11.653465] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.448 [2024-07-24 22:59:11.653500] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.448 [2024-07-24 22:59:11.653544] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.448 [2024-07-24 22:59:11.653588] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.448 [2024-07-24 22:59:11.653627] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:39.448 [2024-07-24 22:59:11.653668] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.448 [2024-07-24 22:59:11.653704] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.448 [2024-07-24 22:59:11.653746] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.448 [2024-07-24 22:59:11.653790] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.448 [2024-07-24 22:59:11.653835] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.448 [2024-07-24 22:59:11.653885] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.448 [2024-07-24 22:59:11.653932] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.448 [2024-07-24 22:59:11.653982] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.448 [2024-07-24 22:59:11.654034] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.448 [2024-07-24 22:59:11.654086] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.448 [2024-07-24 22:59:11.654135] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.448 [2024-07-24 22:59:11.654186] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.448 [2024-07-24 22:59:11.654238] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.448 [2024-07-24 22:59:11.654283] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.448 [2024-07-24 
22:59:11.654333] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.448 [2024-07-24 22:59:11.654377] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.448 [2024-07-24 22:59:11.654425] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.448 [2024-07-24 22:59:11.654471] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.448 [2024-07-24 22:59:11.654503] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.448 [2024-07-24 22:59:11.654547] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.448 [2024-07-24 22:59:11.654592] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.448 [2024-07-24 22:59:11.654638] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.448 [2024-07-24 22:59:11.654688] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.448 [2024-07-24 22:59:11.654742] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.448 [2024-07-24 22:59:11.654796] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.448 [2024-07-24 22:59:11.654846] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.448 [2024-07-24 22:59:11.654896] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.448 [2024-07-24 22:59:11.654946] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.448 [2024-07-24 22:59:11.654998] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.448 [2024-07-24 22:59:11.655047] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.448 [2024-07-24 22:59:11.670833] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd:
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.451 [2024-07-24 22:59:11.670886] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.451 [2024-07-24 22:59:11.670934] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.451 [2024-07-24 22:59:11.670986] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.451 [2024-07-24 22:59:11.671035] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.451 [2024-07-24 22:59:11.671083] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.451 [2024-07-24 22:59:11.671129] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.451 [2024-07-24 22:59:11.671182] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.451 [2024-07-24 22:59:11.671228] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.451 [2024-07-24 22:59:11.671271] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.451 [2024-07-24 22:59:11.671319] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.451 [2024-07-24 22:59:11.671359] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.451 [2024-07-24 22:59:11.671408] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.451 [2024-07-24 22:59:11.671453] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.451 [2024-07-24 22:59:11.671491] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.451 
[2024-07-24 22:59:11.671524] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.451 [2024-07-24 22:59:11.671573] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.451 [2024-07-24 22:59:11.671616] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.451 [2024-07-24 22:59:11.671664] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.451 [2024-07-24 22:59:11.671718] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.451 [2024-07-24 22:59:11.671762] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.451 [2024-07-24 22:59:11.671807] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.451 [2024-07-24 22:59:11.671850] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.451 [2024-07-24 22:59:11.672187] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.451 [2024-07-24 22:59:11.672223] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.451 [2024-07-24 22:59:11.672254] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.451 [2024-07-24 22:59:11.672286] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.451 [2024-07-24 22:59:11.672319] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.451 [2024-07-24 22:59:11.672351] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.451 [2024-07-24 22:59:11.672382] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.451 [2024-07-24 22:59:11.672413] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.451 [2024-07-24 22:59:11.672444] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.451 [2024-07-24 22:59:11.672475] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.451 [2024-07-24 22:59:11.672506] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.451 [2024-07-24 22:59:11.672538] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.451 [2024-07-24 22:59:11.672569] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.452 [2024-07-24 22:59:11.672601] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.452 [2024-07-24 22:59:11.672633] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.452 [2024-07-24 22:59:11.672663] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.452 [2024-07-24 22:59:11.672694] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.452 [2024-07-24 22:59:11.672733] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.452 [2024-07-24 22:59:11.672767] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.452 [2024-07-24 22:59:11.672800] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.452 [2024-07-24 22:59:11.672831] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:39.452 [2024-07-24 22:59:11.672862] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.452 [2024-07-24 22:59:11.672896] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.452 [2024-07-24 22:59:11.672927] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.452 [2024-07-24 22:59:11.672958] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.452 [2024-07-24 22:59:11.672989] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.452 [2024-07-24 22:59:11.673021] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.452 [2024-07-24 22:59:11.673051] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.452 [2024-07-24 22:59:11.673090] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.452 [2024-07-24 22:59:11.673131] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.452 [2024-07-24 22:59:11.673172] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.452 [2024-07-24 22:59:11.673211] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.452 [2024-07-24 22:59:11.673249] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.452 [2024-07-24 22:59:11.673292] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.452 [2024-07-24 22:59:11.673329] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.452 [2024-07-24 22:59:11.673375] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.452 [2024-07-24 22:59:11.673422] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.452 [2024-07-24 22:59:11.673477] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.452 [2024-07-24 22:59:11.673526] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.452 [2024-07-24 22:59:11.673574] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.452 [2024-07-24 22:59:11.673629] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.452 [2024-07-24 22:59:11.673684] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.452 [2024-07-24 22:59:11.673738] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.452 [2024-07-24 22:59:11.673788] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.452 [2024-07-24 22:59:11.673834] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.452 [2024-07-24 22:59:11.673883] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.452 [2024-07-24 22:59:11.673928] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.452 [2024-07-24 22:59:11.673977] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.452 [2024-07-24 22:59:11.674021] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.452 [2024-07-24 22:59:11.674059] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:39.452 [2024-07-24 22:59:11.674101] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.452 [2024-07-24 22:59:11.674138] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.452 [2024-07-24 22:59:11.674176] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.452 [2024-07-24 22:59:11.674219] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.452 [2024-07-24 22:59:11.674267] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.452 [2024-07-24 22:59:11.674317] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.452 [2024-07-24 22:59:11.674370] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.452 [2024-07-24 22:59:11.674422] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.452 [2024-07-24 22:59:11.674470] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.452 [2024-07-24 22:59:11.674518] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.452 [2024-07-24 22:59:11.674566] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.452 [2024-07-24 22:59:11.674614] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.452 [2024-07-24 22:59:11.674663] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.452 [2024-07-24 22:59:11.675003] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.452 [2024-07-24 
22:59:11.675057] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.452 [2024-07-24 22:59:11.675109] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.452 [2024-07-24 22:59:11.675162] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.452 [2024-07-24 22:59:11.675217] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.452 [2024-07-24 22:59:11.675268] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.452 [2024-07-24 22:59:11.675319] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.452 [2024-07-24 22:59:11.675367] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.452 [2024-07-24 22:59:11.675414] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.452 [2024-07-24 22:59:11.675467] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.452 [2024-07-24 22:59:11.675531] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.452 [2024-07-24 22:59:11.675581] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.452 [2024-07-24 22:59:11.675631] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.452 [2024-07-24 22:59:11.675680] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.452 [2024-07-24 22:59:11.675737] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.452 [2024-07-24 22:59:11.675788] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.452 [2024-07-24 22:59:11.675843] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.452 [2024-07-24 22:59:11.675892] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.452 [2024-07-24 22:59:11.675939] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.452 [2024-07-24 22:59:11.675987] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.452 [2024-07-24 22:59:11.676040] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.452 [2024-07-24 22:59:11.676089] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.452 [2024-07-24 22:59:11.676136] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.452 [2024-07-24 22:59:11.676184] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.452 [2024-07-24 22:59:11.676237] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.452 [2024-07-24 22:59:11.676291] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.452 [2024-07-24 22:59:11.676341] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.452 [2024-07-24 22:59:11.676390] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.452 [2024-07-24 22:59:11.676440] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.452 [2024-07-24 22:59:11.676491] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.452 
[2024-07-24 22:59:11.676542] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.452 [2024-07-24 22:59:11.676589] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.453 [2024-07-24 22:59:11.676631] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.453 [2024-07-24 22:59:11.676682] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.453 [2024-07-24 22:59:11.676737] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.453 [2024-07-24 22:59:11.676783] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.453 [2024-07-24 22:59:11.676829] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.453 [2024-07-24 22:59:11.676881] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.453 [2024-07-24 22:59:11.676926] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.453 [2024-07-24 22:59:11.676959] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.453 [2024-07-24 22:59:11.677002] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.453 [2024-07-24 22:59:11.677049] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.453 [2024-07-24 22:59:11.677093] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.453 [2024-07-24 22:59:11.677136] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.453 [2024-07-24 22:59:11.677179] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.453 [2024-07-24 22:59:11.677223] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.453 [2024-07-24 22:59:11.677266] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.453 [2024-07-24 22:59:11.677314] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.453 [2024-07-24 22:59:11.677346] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.453 [2024-07-24 22:59:11.677383] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.453 [2024-07-24 22:59:11.677426] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.453 [2024-07-24 22:59:11.677477] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.453 [2024-07-24 22:59:11.677525] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.453 [2024-07-24 22:59:11.677567] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.453 [2024-07-24 22:59:11.677608] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.453 [2024-07-24 22:59:11.677642] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.453 [2024-07-24 22:59:11.677675] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.453 [2024-07-24 22:59:11.677710] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.453 [2024-07-24 22:59:11.677753] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:39.453 [2024-07-24 22:59:11.677802] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.453 [2024-07-24 22:59:11.677855] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.453 [2024-07-24 22:59:11.677910] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.453 [2024-07-24 22:59:11.677959] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.453 [2024-07-24 22:59:11.678008] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.453 [2024-07-24 22:59:11.678361] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.453 [2024-07-24 22:59:11.678408] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.453 [2024-07-24 22:59:11.678453] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.453 [2024-07-24 22:59:11.678493] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.453 [2024-07-24 22:59:11.678527] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.453 [2024-07-24 22:59:11.678569] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.453 [2024-07-24 22:59:11.678616] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.453 [2024-07-24 22:59:11.678659] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.453 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:13:39.453 [2024-07-24 22:59:11.678701] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 
* block size 512 > SGL length 1 00:13:39.453 [2024-07-24 22:59:11.678744] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.453 [2024-07-24 22:59:11.678777] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.453 [2024-07-24 22:59:11.678811] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.453 [2024-07-24 22:59:11.678847] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.453 [2024-07-24 22:59:11.678880] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.453 [2024-07-24 22:59:11.678911] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.453 [2024-07-24 22:59:11.678954] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.453 [2024-07-24 22:59:11.678995] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.453 [2024-07-24 22:59:11.679039] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.453 [2024-07-24 22:59:11.679075] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.453 [2024-07-24 22:59:11.679109] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.453 [2024-07-24 22:59:11.679139] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.453 [2024-07-24 22:59:11.679170] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.453 [2024-07-24 22:59:11.679201] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.453 [2024-07-24 
22:59:11.679233] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.453 [2024-07-24 22:59:11.679264] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.453 [2024-07-24 22:59:11.679297] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.453 [2024-07-24 22:59:11.679332] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.453 [2024-07-24 22:59:11.679381] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.453 [2024-07-24 22:59:11.679431] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.453 [2024-07-24 22:59:11.679481] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.453 [2024-07-24 22:59:11.679531] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.453 [2024-07-24 22:59:11.679584] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.453 [2024-07-24 22:59:11.679637] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.453 [2024-07-24 22:59:11.679685] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.453 [2024-07-24 22:59:11.679739] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.453 [2024-07-24 22:59:11.679786] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.453 [2024-07-24 22:59:11.679838] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.453 [2024-07-24 22:59:11.679883] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.453 [2024-07-24 22:59:11.679932] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.453
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.457 [2024-07-24 22:59:11.695796] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.457 [2024-07-24 22:59:11.695832] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.457 [2024-07-24 22:59:11.695877] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.457 [2024-07-24 22:59:11.695919] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.457 [2024-07-24 22:59:11.695963] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.457 [2024-07-24 22:59:11.696007] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.457 [2024-07-24 22:59:11.696050] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.457 [2024-07-24 22:59:11.696100] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.457 [2024-07-24 22:59:11.696133] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.457 [2024-07-24 22:59:11.696173] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.457 [2024-07-24 22:59:11.696212] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.457 [2024-07-24 22:59:11.696248] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.457 [2024-07-24 22:59:11.696290] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.457 [2024-07-24 22:59:11.696333] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.457 
[2024-07-24 22:59:11.696367] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.457 [2024-07-24 22:59:11.696400] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.457 [2024-07-24 22:59:11.696449] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.457 [2024-07-24 22:59:11.696500] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.457 [2024-07-24 22:59:11.696551] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.457 [2024-07-24 22:59:11.696606] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.457 [2024-07-24 22:59:11.696660] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.457 [2024-07-24 22:59:11.696710] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.457 [2024-07-24 22:59:11.696771] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.457 [2024-07-24 22:59:11.696818] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.457 [2024-07-24 22:59:11.696870] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.457 [2024-07-24 22:59:11.696917] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.457 [2024-07-24 22:59:11.696968] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.457 [2024-07-24 22:59:11.697019] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.457 [2024-07-24 22:59:11.697074] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.457 [2024-07-24 22:59:11.697125] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.457 [2024-07-24 22:59:11.697175] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.457 [2024-07-24 22:59:11.697221] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.457 [2024-07-24 22:59:11.697271] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.457 [2024-07-24 22:59:11.697324] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.457 [2024-07-24 22:59:11.697372] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.457 [2024-07-24 22:59:11.697404] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.457 [2024-07-24 22:59:11.697444] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.457 [2024-07-24 22:59:11.697492] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.457 [2024-07-24 22:59:11.697536] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.457 [2024-07-24 22:59:11.697579] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.457 [2024-07-24 22:59:11.697888] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.457 [2024-07-24 22:59:11.697935] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.457 [2024-07-24 22:59:11.697977] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:39.457 [2024-07-24 22:59:11.698019] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.457 [2024-07-24 22:59:11.698054] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.457 [2024-07-24 22:59:11.698110] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.457 [2024-07-24 22:59:11.698155] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.457 [2024-07-24 22:59:11.698208] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.457 [2024-07-24 22:59:11.698256] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.457 [2024-07-24 22:59:11.698314] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.457 [2024-07-24 22:59:11.698367] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.457 [2024-07-24 22:59:11.698419] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.457 [2024-07-24 22:59:11.698470] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.457 [2024-07-24 22:59:11.698524] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.457 [2024-07-24 22:59:11.698574] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.457 [2024-07-24 22:59:11.698628] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.457 [2024-07-24 22:59:11.698676] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.457 [2024-07-24 22:59:11.698734] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.457 [2024-07-24 22:59:11.698781] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.457 [2024-07-24 22:59:11.698834] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.457 [2024-07-24 22:59:11.698882] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.457 [2024-07-24 22:59:11.698937] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.457 [2024-07-24 22:59:11.698988] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.457 [2024-07-24 22:59:11.699034] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.457 [2024-07-24 22:59:11.699076] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.457 [2024-07-24 22:59:11.699122] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.457 [2024-07-24 22:59:11.699156] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.457 [2024-07-24 22:59:11.699187] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.457 [2024-07-24 22:59:11.699228] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.457 [2024-07-24 22:59:11.699277] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.457 [2024-07-24 22:59:11.699325] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.457 [2024-07-24 22:59:11.699369] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:39.457 [2024-07-24 22:59:11.699414] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.457 [2024-07-24 22:59:11.699452] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.457 [2024-07-24 22:59:11.699486] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.457 [2024-07-24 22:59:11.699519] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.457 [2024-07-24 22:59:11.699574] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.457 [2024-07-24 22:59:11.699631] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.457 [2024-07-24 22:59:11.699681] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.457 [2024-07-24 22:59:11.699734] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.457 [2024-07-24 22:59:11.699783] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.457 [2024-07-24 22:59:11.699833] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.457 [2024-07-24 22:59:11.699882] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.457 [2024-07-24 22:59:11.699949] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.457 [2024-07-24 22:59:11.699999] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.457 [2024-07-24 22:59:11.700052] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.457 [2024-07-24 
22:59:11.700100] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.457 [2024-07-24 22:59:11.700150] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.457 [2024-07-24 22:59:11.700202] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.457 [2024-07-24 22:59:11.700256] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.457 [2024-07-24 22:59:11.700303] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.457 [2024-07-24 22:59:11.700349] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.457 [2024-07-24 22:59:11.700396] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.457 [2024-07-24 22:59:11.700440] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.457 [2024-07-24 22:59:11.700473] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.457 [2024-07-24 22:59:11.700509] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.458 [2024-07-24 22:59:11.700556] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.458 [2024-07-24 22:59:11.700599] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.458 [2024-07-24 22:59:11.700639] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.458 [2024-07-24 22:59:11.700680] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.458 [2024-07-24 22:59:11.700722] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.458 [2024-07-24 22:59:11.700755] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.458 [2024-07-24 22:59:11.700790] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.458 [2024-07-24 22:59:11.701121] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.458 [2024-07-24 22:59:11.701175] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.458 [2024-07-24 22:59:11.701230] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.458 [2024-07-24 22:59:11.701280] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.458 [2024-07-24 22:59:11.701336] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.458 [2024-07-24 22:59:11.701386] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.458 [2024-07-24 22:59:11.701438] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.458 [2024-07-24 22:59:11.701488] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.458 [2024-07-24 22:59:11.701543] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.458 [2024-07-24 22:59:11.701607] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.458 [2024-07-24 22:59:11.701654] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.458 [2024-07-24 22:59:11.701709] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.458 
[2024-07-24 22:59:11.701769] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.458 [2024-07-24 22:59:11.701818] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.458 [2024-07-24 22:59:11.701863] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.458 [2024-07-24 22:59:11.701909] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.458 [2024-07-24 22:59:11.701952] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.458 [2024-07-24 22:59:11.701997] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.458 [2024-07-24 22:59:11.702030] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.458 [2024-07-24 22:59:11.702061] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.458 [2024-07-24 22:59:11.702105] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.458 [2024-07-24 22:59:11.702152] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.458 [2024-07-24 22:59:11.702202] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.458 [2024-07-24 22:59:11.702247] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.458 [2024-07-24 22:59:11.702291] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.458 [2024-07-24 22:59:11.702334] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.458 [2024-07-24 22:59:11.702368] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.458 [2024-07-24 22:59:11.702408] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.458 [2024-07-24 22:59:11.702451] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.458 [2024-07-24 22:59:11.702488] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.458 [2024-07-24 22:59:11.702529] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.458 [2024-07-24 22:59:11.702579] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.458 [2024-07-24 22:59:11.702627] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.458 [2024-07-24 22:59:11.702677] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.458 [2024-07-24 22:59:11.702731] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.458 [2024-07-24 22:59:11.702783] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.458 [2024-07-24 22:59:11.702832] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.458 [2024-07-24 22:59:11.702883] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.458 [2024-07-24 22:59:11.702936] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.458 [2024-07-24 22:59:11.702988] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.458 [2024-07-24 22:59:11.703043] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:39.458 [2024-07-24 22:59:11.703091] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.458 [2024-07-24 22:59:11.703140] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.458 [2024-07-24 22:59:11.703190] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.458 [2024-07-24 22:59:11.703241] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.458 [2024-07-24 22:59:11.703293] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.458 [2024-07-24 22:59:11.703344] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.458 [2024-07-24 22:59:11.703391] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.458 [2024-07-24 22:59:11.703440] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.458 [2024-07-24 22:59:11.703484] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.458 [2024-07-24 22:59:11.703530] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.458 [2024-07-24 22:59:11.703575] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.458 [2024-07-24 22:59:11.703608] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.458 [2024-07-24 22:59:11.703648] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.458 [2024-07-24 22:59:11.703692] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.458 [2024-07-24 22:59:11.703737] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.458 [2024-07-24 22:59:11.703783] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.458 [2024-07-24 22:59:11.703825] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.458 [2024-07-24 22:59:11.703864] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.458 [2024-07-24 22:59:11.703897] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.458 [2024-07-24 22:59:11.703938] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.458 [2024-07-24 22:59:11.703986] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.458 [2024-07-24 22:59:11.704044] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.458 [2024-07-24 22:59:11.704094] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.458 [2024-07-24 22:59:11.704443] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.458 [2024-07-24 22:59:11.704499] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.458 [2024-07-24 22:59:11.704547] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.458 [2024-07-24 22:59:11.704599] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.458 [2024-07-24 22:59:11.704647] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.458 [2024-07-24 22:59:11.704698] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:39.458 [2024-07-24 22:59:11.704752] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.458 [2024-07-24 22:59:11.704805] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.458 [2024-07-24 22:59:11.704850] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.458 [2024-07-24 22:59:11.704901] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.458 [2024-07-24 22:59:11.704950] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.458 [2024-07-24 22:59:11.704983] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.458 [2024-07-24 22:59:11.705021] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.458 [2024-07-24 22:59:11.705063] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.458 [2024-07-24 22:59:11.705110] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.458 [2024-07-24 22:59:11.705153] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.458 [2024-07-24 22:59:11.705196] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.458 [2024-07-24 22:59:11.705231] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.458 [2024-07-24 22:59:11.705269] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.458 [2024-07-24 22:59:11.705312] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.458 [2024-07-24 
22:59:11.705345] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.458 [... the same *ERROR* line from ctrlr_bdev.c:298 repeated continuously for timestamps 22:59:11.705378 through 22:59:11.720982; duplicate entries elided ...] 00:13:39.462 [2024-07-24
22:59:11.721033] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.462 [2024-07-24 22:59:11.721084] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.462 [2024-07-24 22:59:11.721135] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.462 [2024-07-24 22:59:11.721185] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.462 [2024-07-24 22:59:11.721234] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.462 [2024-07-24 22:59:11.721289] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.462 [2024-07-24 22:59:11.721344] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.462 [2024-07-24 22:59:11.721396] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.462 [2024-07-24 22:59:11.721442] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.462 [2024-07-24 22:59:11.721495] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.462 [2024-07-24 22:59:11.721541] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.462 [2024-07-24 22:59:11.721586] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.462 [2024-07-24 22:59:11.721630] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.462 [2024-07-24 22:59:11.721676] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.462 [2024-07-24 22:59:11.721731] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.462 [2024-07-24 22:59:11.721776] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.462 [2024-07-24 22:59:11.721827] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.462 [2024-07-24 22:59:11.721872] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.462 [2024-07-24 22:59:11.721905] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.462 [2024-07-24 22:59:11.721940] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.462 [2024-07-24 22:59:11.721992] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.462 [2024-07-24 22:59:11.722040] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.462 [2024-07-24 22:59:11.722084] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.462 [2024-07-24 22:59:11.722129] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.462 [2024-07-24 22:59:11.722174] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.462 [2024-07-24 22:59:11.722219] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.462 [2024-07-24 22:59:11.722254] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.462 [2024-07-24 22:59:11.722298] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.462 [2024-07-24 22:59:11.722341] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.462 
[2024-07-24 22:59:11.722385] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.462 [2024-07-24 22:59:11.722418] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.462 [2024-07-24 22:59:11.722451] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.462 [2024-07-24 22:59:11.722483] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.462 [2024-07-24 22:59:11.722531] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.462 [2024-07-24 22:59:11.722580] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.462 [2024-07-24 22:59:11.722632] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.462 [2024-07-24 22:59:11.722680] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.462 [2024-07-24 22:59:11.722736] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.462 [2024-07-24 22:59:11.722786] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.462 [2024-07-24 22:59:11.722837] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.462 [2024-07-24 22:59:11.722892] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.462 [2024-07-24 22:59:11.722945] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.462 [2024-07-24 22:59:11.722999] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.462 [2024-07-24 22:59:11.723046] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.462 [2024-07-24 22:59:11.723094] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.463 [2024-07-24 22:59:11.723139] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.463 [2024-07-24 22:59:11.723191] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.463 [2024-07-24 22:59:11.723243] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.463 [2024-07-24 22:59:11.723293] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.463 [2024-07-24 22:59:11.723347] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.463 [2024-07-24 22:59:11.723405] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.463 [2024-07-24 22:59:11.723461] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.463 [2024-07-24 22:59:11.723511] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.463 [2024-07-24 22:59:11.723560] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.463 [2024-07-24 22:59:11.723609] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.463 [2024-07-24 22:59:11.723652] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.463 [2024-07-24 22:59:11.723685] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.463 [2024-07-24 22:59:11.723726] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:39.463 [2024-07-24 22:59:11.724055] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.463 [2024-07-24 22:59:11.724096] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.463 [2024-07-24 22:59:11.724141] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.463 [2024-07-24 22:59:11.724180] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.463 [2024-07-24 22:59:11.724214] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.463 [2024-07-24 22:59:11.724254] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.463 [2024-07-24 22:59:11.724299] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.463 [2024-07-24 22:59:11.724344] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.463 [2024-07-24 22:59:11.724387] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.463 [2024-07-24 22:59:11.724421] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.463 [2024-07-24 22:59:11.724469] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.463 [2024-07-24 22:59:11.724515] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.463 [2024-07-24 22:59:11.724569] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.463 [2024-07-24 22:59:11.724621] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.463 [2024-07-24 22:59:11.724676] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.463 [2024-07-24 22:59:11.724730] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.463 [2024-07-24 22:59:11.724779] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.463 [2024-07-24 22:59:11.724829] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.463 [2024-07-24 22:59:11.724880] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.463 [2024-07-24 22:59:11.724933] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.463 [2024-07-24 22:59:11.724980] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.463 [2024-07-24 22:59:11.725030] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.463 [2024-07-24 22:59:11.725080] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.463 [2024-07-24 22:59:11.725118] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.463 [2024-07-24 22:59:11.725150] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.463 [2024-07-24 22:59:11.725184] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.463 [2024-07-24 22:59:11.725216] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.463 [2024-07-24 22:59:11.725274] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.463 [2024-07-24 22:59:11.725326] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:39.463 [2024-07-24 22:59:11.725390] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.463 [2024-07-24 22:59:11.725437] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.463 [2024-07-24 22:59:11.725493] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.463 [2024-07-24 22:59:11.725544] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.463 [2024-07-24 22:59:11.725596] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.463 [2024-07-24 22:59:11.725649] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.463 [2024-07-24 22:59:11.725702] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.463 [2024-07-24 22:59:11.725758] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.463 [2024-07-24 22:59:11.725815] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.463 [2024-07-24 22:59:11.725866] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.463 [2024-07-24 22:59:11.725913] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.463 [2024-07-24 22:59:11.725965] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.463 [2024-07-24 22:59:11.726014] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.463 [2024-07-24 22:59:11.726065] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.463 [2024-07-24 
22:59:11.726120] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.463 [2024-07-24 22:59:11.726177] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.463 [2024-07-24 22:59:11.726238] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.463 [2024-07-24 22:59:11.726291] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.463 [2024-07-24 22:59:11.726341] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.463 [2024-07-24 22:59:11.726389] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.463 [2024-07-24 22:59:11.726439] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.463 [2024-07-24 22:59:11.726490] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.463 [2024-07-24 22:59:11.726535] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.463 [2024-07-24 22:59:11.726579] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.463 [2024-07-24 22:59:11.726630] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.463 [2024-07-24 22:59:11.726665] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.463 [2024-07-24 22:59:11.726704] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.463 [2024-07-24 22:59:11.726754] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.463 [2024-07-24 22:59:11.726803] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.463 [2024-07-24 22:59:11.726853] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.463 [2024-07-24 22:59:11.726899] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.463 [2024-07-24 22:59:11.726943] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.463 [2024-07-24 22:59:11.726987] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.463 [2024-07-24 22:59:11.727035] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.463 [2024-07-24 22:59:11.727462] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.463 [2024-07-24 22:59:11.727504] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.463 [2024-07-24 22:59:11.727555] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.463 [2024-07-24 22:59:11.727603] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.463 [2024-07-24 22:59:11.727654] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.463 [2024-07-24 22:59:11.727718] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.463 [2024-07-24 22:59:11.727773] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.463 [2024-07-24 22:59:11.727822] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.463 [2024-07-24 22:59:11.727870] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.463 
[2024-07-24 22:59:11.727921] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.463 [2024-07-24 22:59:11.727974] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.463 [2024-07-24 22:59:11.728025] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.463 [2024-07-24 22:59:11.728076] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.463 [2024-07-24 22:59:11.728129] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.463 [2024-07-24 22:59:11.728187] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.463 [2024-07-24 22:59:11.728234] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.463 [2024-07-24 22:59:11.728281] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.463 [2024-07-24 22:59:11.728330] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.463 [2024-07-24 22:59:11.728381] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.463 [2024-07-24 22:59:11.728435] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.464 [2024-07-24 22:59:11.728485] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.464 [2024-07-24 22:59:11.728536] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.464 [2024-07-24 22:59:11.728588] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.464 [2024-07-24 22:59:11.728637] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.464 [2024-07-24 22:59:11.728691] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.464 [2024-07-24 22:59:11.728743] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.464 [2024-07-24 22:59:11.728797] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.464 [2024-07-24 22:59:11.728851] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.464 [2024-07-24 22:59:11.728904] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.464 [2024-07-24 22:59:11.728953] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.464 [2024-07-24 22:59:11.729004] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.464 [2024-07-24 22:59:11.729055] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.464 [2024-07-24 22:59:11.729110] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.464 [2024-07-24 22:59:11.729157] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.464 [2024-07-24 22:59:11.729210] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.464 [2024-07-24 22:59:11.729262] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.464 [2024-07-24 22:59:11.729309] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.464 [2024-07-24 22:59:11.729358] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:39.464 [2024-07-24 22:59:11.729411] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.464 [2024-07-24 22:59:11.729455] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.464 [2024-07-24 22:59:11.729498] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.464 [2024-07-24 22:59:11.729531] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.464 [2024-07-24 22:59:11.729573] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.464 [2024-07-24 22:59:11.729616] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.464 [2024-07-24 22:59:11.729667] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.464 [2024-07-24 22:59:11.729721] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.464 [2024-07-24 22:59:11.729765] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.464 [2024-07-24 22:59:11.729809] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.464 [2024-07-24 22:59:11.729852] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.464 [2024-07-24 22:59:11.729895] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.464 [2024-07-24 22:59:11.729928] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.464 [2024-07-24 22:59:11.729967] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.464 [2024-07-24 22:59:11.730010] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.464 [2024-07-24 22:59:11.730057] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.464 [2024-07-24 22:59:11.730100] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.464 [2024-07-24 22:59:11.730141] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.464 [2024-07-24 22:59:11.730177] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.464 [2024-07-24 22:59:11.730213] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.464 [2024-07-24 22:59:11.730249] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.464 [2024-07-24 22:59:11.730289] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.464 [2024-07-24 22:59:11.730327] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.464 [2024-07-24 22:59:11.730361] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.464 [2024-07-24 22:59:11.730410] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.464 [2024-07-24 22:59:11.730471] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.464 [2024-07-24 22:59:11.730867] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.464 [2024-07-24 22:59:11.730923] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.464 [2024-07-24 22:59:11.730970] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:39.464 [2024-07-24 22:59:11.731017] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:39.464 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:13:39.465 22:59:11 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:39.724 22:59:11 -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:13:39.724 22:59:11 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:13:39.724 true 00:13:39.724 22:59:12 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3139355 00:13:39.724 22:59:12 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:39.983 22:59:12 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:40.242 22:59:12 -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:13:40.242 22:59:12 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:13:40.242 true 00:13:40.242 22:59:12 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3139355 00:13:40.242 22:59:12 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:40.501 22:59:12 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:40.761
22:59:13 -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:13:40.761 22:59:13 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:13:40.761 true 00:13:40.761 22:59:13 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3139355 00:13:40.761 22:59:13 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:41.020 22:59:13 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:41.277 22:59:13 -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:13:41.278 22:59:13 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:13:41.278 true 00:13:41.278 22:59:13 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3139355 00:13:41.278 22:59:13 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:41.537 22:59:13 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:41.796 22:59:14 -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:13:41.796 22:59:14 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:13:41.796 true 00:13:41.796 22:59:14 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3139355 00:13:41.796 22:59:14 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:42.057 22:59:14 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:42.316 22:59:14 -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:13:42.316 22:59:14 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:13:42.316 true 00:13:42.316 22:59:14 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3139355 00:13:42.316 22:59:14 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:42.576 22:59:14 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:42.835 22:59:15 -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:13:42.835 22:59:15 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:13:42.835 true 00:13:42.835 22:59:15 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3139355 00:13:42.835 22:59:15 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:43.094 22:59:15 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:43.353 22:59:15 -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:13:43.353 22:59:15 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:13:43.353 true 00:13:43.353 22:59:15 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3139355 00:13:43.353 22:59:15 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:43.613 22:59:15 -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:43.872 22:59:16 -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:13:43.872 22:59:16 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:13:43.872 true 00:13:44.132 22:59:16 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3139355 00:13:44.132 22:59:16 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:44.132 22:59:16 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:44.392 22:59:16 -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:13:44.392 22:59:16 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:13:44.392 true 00:13:44.392 22:59:16 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3139355 00:13:44.392 22:59:16 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:44.651 22:59:16 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:44.910 22:59:17 -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:13:44.910 22:59:17 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:13:44.910 true 00:13:44.910 22:59:17 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3139355 00:13:44.910 22:59:17 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:13:45.170 22:59:17 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:45.430 22:59:17 -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:13:45.430 22:59:17 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:13:45.430 true 00:13:45.430 22:59:17 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3139355 00:13:45.430 22:59:17 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:46.809 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:46.809 22:59:18 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:46.809 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:46.809 22:59:19 -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:13:46.809 22:59:19 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:13:47.068 true 00:13:47.068 22:59:19 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3139355 00:13:47.068 22:59:19 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:48.004 22:59:20 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:48.004 22:59:20
-- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:13:48.004 22:59:20 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:13:48.262 true 00:13:48.262 22:59:20 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3139355 00:13:48.262 22:59:20 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:48.521 22:59:20 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:48.521 22:59:20 -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:13:48.521 22:59:20 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:13:48.780 true 00:13:48.780 22:59:21 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3139355 00:13:48.780 22:59:21 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:49.040 22:59:21 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:49.040 22:59:21 -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:13:49.040 22:59:21 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:13:49.299 true 00:13:49.299 22:59:21 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3139355 00:13:49.299 22:59:21 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:49.558 22:59:21 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:49.558 22:59:21 -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:13:49.558 22:59:21 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:13:49.816 true 00:13:49.816 22:59:22 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3139355 00:13:49.816 22:59:22 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:51.192 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:51.192 22:59:23 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:51.192 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:51.192 22:59:23 -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:13:51.192 22:59:23 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:13:51.451 true 00:13:51.451 22:59:23 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3139355 00:13:51.451 22:59:23 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:52.388 22:59:24 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:52.388 22:59:24 -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1024 00:13:52.388 22:59:24 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:13:52.388 true 00:13:52.647 22:59:24 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3139355 00:13:52.647 22:59:24 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:52.647 22:59:25 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:52.906 22:59:25 -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:13:52.906 22:59:25 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:13:53.165 true 00:13:53.165 22:59:25 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3139355 00:13:53.165 22:59:25 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:54.103 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:54.103 22:59:26 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:54.103 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:54.362 22:59:26 -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:13:54.362 22:59:26 -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:13:54.621 true 00:13:54.621 22:59:26 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3139355 00:13:54.621 22:59:26 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:55.559 22:59:27 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:55.559 22:59:27 -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:13:55.559 22:59:27 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:13:55.817 true 00:13:55.817 22:59:28 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3139355 00:13:55.817 22:59:28 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:56.077 22:59:28 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:56.077 22:59:28 -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:13:56.077 22:59:28 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:13:56.336 true 00:13:56.336 22:59:28 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3139355 00:13:56.336 22:59:28 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:57.715 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:57.715 22:59:29 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:13:57.715 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:57.715 22:59:29 -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:13:57.715 22:59:29 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:13:57.715 true 00:13:57.974 22:59:30 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3139355 00:13:57.974 22:59:30 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:58.542 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:58.543 22:59:30 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:58.802 22:59:31 -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:13:58.802 22:59:31 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:13:59.061 true 00:13:59.061 22:59:31 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3139355 00:13:59.061 22:59:31 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:59.319 22:59:31 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:59.319 22:59:31 -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:13:59.319 22:59:31 -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:13:59.579 true 00:13:59.579 22:59:31 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3139355 00:13:59.579 22:59:31 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:59.840 22:59:32 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:59.840 22:59:32 -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:13:59.840 22:59:32 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:14:00.160 true 00:14:00.160 22:59:32 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3139355 00:14:00.160 22:59:32 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:00.160 22:59:32 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:00.419 22:59:32 -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:14:00.419 22:59:32 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:14:00.680 true 00:14:00.680 22:59:32 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3139355 00:14:00.680 22:59:32 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:02.065 22:59:34 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:02.065 Message suppressed 999 times: Read completed with error 
(sct=0, sc=11) 00:14:02.065 22:59:34 -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:14:02.065 22:59:34 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:14:02.065 true 00:14:02.065 22:59:34 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3139355 00:14:02.065 22:59:34 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:03.002 22:59:35 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:03.002 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:03.260 22:59:35 -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:14:03.260 22:59:35 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:14:03.260 true 00:14:03.260 22:59:35 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3139355 00:14:03.261 22:59:35 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:03.519 22:59:35 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:03.778 22:59:35 -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:14:03.778 22:59:35 -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:14:03.778 true 00:14:03.778 22:59:36 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3139355 00:14:03.778 22:59:36 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:05.156 22:59:37 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:05.156 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:05.156 22:59:37 -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:14:05.156 22:59:37 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:14:05.416 true 00:14:05.416 22:59:37 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3139355 00:14:05.416 22:59:37 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:06.354 Initializing NVMe Controllers 00:14:06.354 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:06.354 Controller IO queue size 128, less than required. 00:14:06.354 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:06.354 Controller IO queue size 128, less than required. 00:14:06.354 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:14:06.354 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:14:06.354 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:14:06.354 Initialization complete. Launching workers.
00:14:06.354 ========================================================
00:14:06.354 Latency(us)
00:14:06.354 Device Information : IOPS MiB/s Average min max
00:14:06.354 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2213.33 1.08 32418.07 1245.70 1067159.44
00:14:06.354 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 15229.30 7.44 8405.68 1967.20 284455.29
00:14:06.354 ========================================================
00:14:06.354 Total : 17442.63 8.52 11452.66 1245.70 1067159.44
00:14:06.354
00:14:06.354 22:59:38 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:14:06.354 22:59:38 -- target/ns_hotplug_stress.sh@49 -- # null_size=1038
00:14:06.354 22:59:38 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038
00:14:06.614 true
00:14:06.614 22:59:38 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3139355
00:14:06.614 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3139355) - No such process
00:14:06.614 22:59:38 -- target/ns_hotplug_stress.sh@53 -- # wait 3139355
00:14:06.614 22:59:38 -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:06.614 22:59:39 -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:14:06.873 22:59:39 -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:14:06.873 22:59:39 -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:14:06.873 22:59:39 -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:14:06.873 22:59:39 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:14:06.873 22:59:39 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:14:07.133 null0
00:14:07.133 22:59:39 -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:14:07.133 22:59:39 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:14:07.133 22:59:39 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:14:07.133 null1
00:14:07.133 22:59:39 -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:14:07.133 22:59:39 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:14:07.133 22:59:39 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:14:07.392 null2
00:14:07.392 22:59:39 -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:14:07.392 22:59:39 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:14:07.392 22:59:39 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:14:07.651 null3
00:14:07.651 22:59:39 -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:14:07.651 22:59:39 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:14:07.651 22:59:39 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096
00:14:07.651 null4
00:14:07.651 22:59:40 -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:14:07.651 22:59:40 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:14:07.651 22:59:40 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096
00:14:07.911 null5
00:14:07.911 22:59:40 -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:14:07.911 22:59:40 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:14:07.911 22:59:40 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096
00:14:08.171 null6
00:14:08.171 22:59:40 -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:14:08.171 22:59:40 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:14:08.171 22:59:40 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096
00:14:08.171 null7
00:14:08.171 22:59:40 -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:14:08.171 22:59:40 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:14:08.171 22:59:40 -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 ))
00:14:08.171 22:59:40 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:14:08.171 22:59:40 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:14:08.171 22:59:40 -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0
00:14:08.171 22:59:40 -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:14:08.171 22:59:40 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:14:08.171 22:59:40 -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0
00:14:08.171 22:59:40 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:14:08.171 22:59:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:08.171 22:59:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:14:08.171 22:59:40 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:14:08.171 22:59:40 -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1
00:14:08.171 22:59:40 -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:14:08.171 22:59:40 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:14:08.171 22:59:40 -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1
00:14:08.171 22:59:40 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:14:08.171 22:59:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:08.171 22:59:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:14:08.171 22:59:40 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:14:08.171 22:59:40 -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2
00:14:08.171 22:59:40 -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:14:08.171 22:59:40 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:14:08.171 22:59:40 -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2
00:14:08.171 22:59:40 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:14:08.171 22:59:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:08.171 22:59:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:14:08.171 22:59:40 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:14:08.171 22:59:40 -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3
00:14:08.171 22:59:40 -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:14:08.171 22:59:40 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:14:08.171 22:59:40 -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3
00:14:08.171 22:59:40 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:14:08.171 22:59:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:08.171 22:59:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:14:08.431 22:59:40 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:14:08.431 22:59:40 -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4
00:14:08.431 22:59:40 -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:14:08.431 22:59:40 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:14:08.431 22:59:40 -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4
00:14:08.431 22:59:40 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:14:08.431 22:59:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:08.431 22:59:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:14:08.431 22:59:40 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:14:08.431 22:59:40 -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:14:08.431 22:59:40 -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5
00:14:08.431 22:59:40 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:14:08.431 22:59:40 -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5
00:14:08.431 22:59:40 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:14:08.431 22:59:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:08.431 22:59:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:14:08.431 22:59:40 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:14:08.431 22:59:40 -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:14:08.431 22:59:40 -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6
00:14:08.431 22:59:40 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:14:08.431 22:59:40 -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6
00:14:08.431 22:59:40 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:14:08.431 22:59:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:08.431 22:59:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:14:08.431 22:59:40 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:14:08.431 22:59:40 -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:14:08.431 22:59:40 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:14:08.431 22:59:40 -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7
00:14:08.431 22:59:40 -- target/ns_hotplug_stress.sh@66 -- # wait 3145166 3145168 3145172 3145175 3145178 3145181 3145184 3145187
00:14:08.431 22:59:40 -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7
00:14:08.431 22:59:40 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:14:08.431 22:59:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:08.431 22:59:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:14:08.431 22:59:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:14:08.431 22:59:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:08.431 22:59:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:14:08.431 22:59:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:14:08.431 22:59:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:14:08.431 22:59:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:14:08.431 22:59:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:14:08.431 22:59:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:14:08.691 22:59:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:08.691 22:59:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:08.691 22:59:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:14:08.691 22:59:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:08.691 22:59:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:08.691 22:59:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:14:08.691 22:59:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:08.691 22:59:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:08.691 22:59:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:14:08.691 22:59:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:08.691 22:59:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:08.691 22:59:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:14:08.691 22:59:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:08.691 22:59:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:08.691 22:59:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:14:08.691 22:59:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:08.691 22:59:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:08.691 22:59:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:14:08.691 22:59:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:08.691 22:59:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:08.691 22:59:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:14:08.691 22:59:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:08.691 22:59:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:08.691 22:59:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:14:08.951 22:59:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:14:08.951 22:59:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:14:08.951 22:59:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:14:08.951 22:59:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:08.951 22:59:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:14:08.951 22:59:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:14:08.951 22:59:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:14:08.951 22:59:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:14:08.951 22:59:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:08.951 22:59:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:08.951 22:59:41 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:14:08.951 22:59:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:08.951 22:59:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:08.951 22:59:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:08.951 22:59:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:08.951 22:59:41 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:14:08.951 22:59:41 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:14:08.951 22:59:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:08.951 22:59:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:08.951 22:59:41 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:14:08.951 22:59:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:08.951 22:59:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:08.951 22:59:41 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:14:08.951 22:59:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:08.951 22:59:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:08.951 22:59:41 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:14:08.951 22:59:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:08.951 22:59:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:08.951 22:59:41 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:14:08.951 22:59:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:08.951 22:59:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:08.951 22:59:41 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:14:09.210 22:59:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:14:09.210 22:59:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:14:09.210 22:59:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:14:09.210 22:59:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:14:09.210 22:59:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:14:09.210 22:59:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:14:09.210 22:59:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:14:09.210 22:59:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:09.470 22:59:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:09.470 22:59:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:09.470 22:59:41 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:14:09.470 22:59:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:09.470 22:59:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:09.470 22:59:41 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:14:09.470 22:59:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:09.470 22:59:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:09.470 22:59:41 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:14:09.470 22:59:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:09.470 22:59:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:09.470 22:59:41 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:14:09.470 22:59:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:09.470 22:59:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:09.470 22:59:41 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:14:09.470 22:59:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:09.470 22:59:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:09.470 22:59:41 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:14:09.470 22:59:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:09.470 22:59:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:09.470 22:59:41 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:14:09.470 22:59:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:09.470 22:59:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:09.470 22:59:41 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:14:09.470 22:59:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:14:09.470 22:59:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:14:09.470 22:59:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:14:09.470 22:59:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:14:09.470 22:59:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:14:09.470 22:59:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:14:09.470 22:59:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:09.470 22:59:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:14:09.729 22:59:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:09.729 22:59:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:09.729 22:59:42 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:14:09.729 22:59:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:09.729 22:59:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:09.729 22:59:42 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:14:09.729 22:59:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:09.729 22:59:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:09.729 22:59:42 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:14:09.729 22:59:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:09.729 22:59:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:09.729 22:59:42 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:14:09.729 22:59:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:09.729 22:59:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:09.729 22:59:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:09.729 22:59:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:09.729 22:59:42 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:14:09.729 22:59:42 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:14:09.729 22:59:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:09.729 22:59:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:09.729 22:59:42 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:14:09.729 22:59:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:09.729 22:59:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:09.729 22:59:42 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:14:09.989 22:59:42 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:14:09.989 22:59:42 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:14:09.989 22:59:42 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:14:09.989 22:59:42 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:14:09.989 22:59:42 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:09.989 22:59:42 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:14:09.989 22:59:42 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:14:09.989 22:59:42 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:14:09.989 22:59:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:09.989 22:59:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:09.989 22:59:42 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:14:09.989 22:59:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:09.989 22:59:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:09.989 22:59:42 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:14:09.989 22:59:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:09.989 22:59:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:09.989 22:59:42 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:14:09.989 22:59:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:09.989 22:59:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:09.989 22:59:42 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:14:09.989 22:59:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:09.989 22:59:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:09.989 22:59:42 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:14:09.989 22:59:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:09.989 22:59:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:09.989 22:59:42 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:14:10.248 22:59:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:10.248 22:59:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:10.248 22:59:42 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:14:10.248 22:59:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:10.248 22:59:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:10.248 22:59:42 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:14:10.248 22:59:42 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:10.248 22:59:42 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:14:10.248 22:59:42 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:14:10.248 22:59:42 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:14:10.249 22:59:42 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:14:10.249 22:59:42 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:14:10.249 22:59:42 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:14:10.249 22:59:42 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:14:10.508 22:59:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:10.508 22:59:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:10.508 22:59:42 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:14:10.509 22:59:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:10.509 22:59:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:10.509 22:59:42 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:14:10.509 22:59:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:10.509 22:59:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:10.509 22:59:42 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:14:10.509 22:59:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:10.509 22:59:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:10.509 22:59:42 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:14:10.509 22:59:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:10.509 22:59:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:10.509 22:59:42 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:14:10.509 22:59:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:10.509 22:59:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:10.509 22:59:42 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:14:10.509 22:59:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:10.509 22:59:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:10.509 22:59:42 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:14:10.509 22:59:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:10.509 22:59:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:10.509 22:59:42 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:14:10.509 22:59:42 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:10.768 22:59:42 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:14:10.768 22:59:42 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:14:10.768 22:59:42 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:14:10.768 22:59:42 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:14:10.768 22:59:42 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:14:10.768 22:59:42 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:14:10.768 22:59:42 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:14:10.768 22:59:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:10.768 22:59:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:10.768 22:59:43 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:14:10.768 22:59:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:10.768 22:59:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:10.768 22:59:43 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:14:10.768 22:59:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:10.768 22:59:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:10.768 22:59:43 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:14:10.768 22:59:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:10.768 22:59:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:10.768 22:59:43 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:14:10.768 22:59:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:10.768 22:59:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:10.769 22:59:43 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:14:10.769 22:59:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:10.769 22:59:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:10.769 22:59:43 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:14:10.769 22:59:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:10.769 22:59:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:10.769 22:59:43 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:14:10.769 22:59:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:10.769 22:59:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:10.769 22:59:43 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:14:11.028 22:59:43 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:11.028 22:59:43 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:14:11.028 22:59:43 -- target/ns_hotplug_stress.sh@18
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:11.028 22:59:43 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:11.028 22:59:43 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:11.028 22:59:43 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:11.028 22:59:43 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:11.028 22:59:43 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:11.287 22:59:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:11.287 22:59:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:11.287 22:59:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:11.287 22:59:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:11.287 22:59:43 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:11.287 22:59:43 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:11.287 22:59:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:11.287 22:59:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:11.287 22:59:43 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 
nqn.2016-06.io.spdk:cnode1 null0 00:14:11.287 22:59:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:11.287 22:59:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:11.287 22:59:43 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:11.287 22:59:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:11.287 22:59:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:11.287 22:59:43 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:11.287 22:59:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:11.288 22:59:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:11.288 22:59:43 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:11.288 22:59:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:11.288 22:59:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:11.288 22:59:43 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:11.288 22:59:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:11.288 22:59:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:11.288 22:59:43 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:11.288 22:59:43 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:11.288 22:59:43 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:11.288 22:59:43 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:11.288 22:59:43 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:11.288 22:59:43 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:11.288 22:59:43 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:11.288 22:59:43 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:11.288 22:59:43 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:11.547 22:59:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:11.547 22:59:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:11.548 22:59:43 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:11.548 22:59:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:11.548 22:59:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:11.548 22:59:43 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:11.548 22:59:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:11.548 22:59:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:11.548 22:59:43 -- target/ns_hotplug_stress.sh@17 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:11.548 22:59:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:11.548 22:59:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:11.548 22:59:43 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:11.548 22:59:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:11.548 22:59:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:11.548 22:59:43 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:11.548 22:59:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:11.548 22:59:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:11.548 22:59:43 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:11.548 22:59:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:11.548 22:59:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:11.548 22:59:43 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:11.548 22:59:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:11.548 22:59:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:11.548 22:59:43 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:11.808 22:59:44 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:11.808 22:59:44 -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:11.808 22:59:44 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:11.808 22:59:44 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:11.808 22:59:44 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:11.808 22:59:44 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:11.808 22:59:44 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:11.808 22:59:44 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:11.808 22:59:44 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:11.808 22:59:44 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:11.808 22:59:44 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:11.808 22:59:44 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:11.808 22:59:44 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:11.808 22:59:44 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:11.808 22:59:44 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:11.808 22:59:44 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:11.808 22:59:44 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:11.808 22:59:44 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:11.808 22:59:44 -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:11.808 22:59:44 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:11.808 22:59:44 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:11.808 22:59:44 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:12.072 22:59:44 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:12.072 22:59:44 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:12.072 22:59:44 -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:12.072 22:59:44 -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:14:12.072 22:59:44 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:12.072 22:59:44 -- nvmf/common.sh@116 -- # sync 00:14:12.072 22:59:44 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:12.072 22:59:44 -- nvmf/common.sh@119 -- # set +e 00:14:12.072 22:59:44 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:12.072 22:59:44 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:12.072 rmmod nvme_tcp 00:14:12.072 rmmod nvme_fabrics 00:14:12.072 rmmod nvme_keyring 00:14:12.072 22:59:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:12.072 22:59:44 -- nvmf/common.sh@123 -- # set -e 00:14:12.072 22:59:44 -- nvmf/common.sh@124 -- # return 0 00:14:12.073 22:59:44 -- nvmf/common.sh@477 -- # '[' -n 3138842 ']' 00:14:12.073 22:59:44 -- nvmf/common.sh@478 -- # killprocess 3138842 00:14:12.073 22:59:44 -- common/autotest_common.sh@926 -- # '[' -z 3138842 ']' 00:14:12.073 22:59:44 -- common/autotest_common.sh@930 -- # kill -0 3138842 00:14:12.073 22:59:44 -- common/autotest_common.sh@931 -- # uname 00:14:12.073 22:59:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:12.073 22:59:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3138842 00:14:12.073 22:59:44 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:14:12.073 22:59:44 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:14:12.073 22:59:44 -- common/autotest_common.sh@944 -- # echo 
'killing process with pid 3138842' 00:14:12.073 killing process with pid 3138842 00:14:12.073 22:59:44 -- common/autotest_common.sh@945 -- # kill 3138842 00:14:12.073 22:59:44 -- common/autotest_common.sh@950 -- # wait 3138842 00:14:12.354 22:59:44 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:12.354 22:59:44 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:12.354 22:59:44 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:12.354 22:59:44 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:12.354 22:59:44 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:12.354 22:59:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:12.355 22:59:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:12.355 22:59:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:14.262 22:59:46 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:14:14.262 00:14:14.262 real 0m48.913s 00:14:14.262 user 3m10.012s 00:14:14.262 sys 0m21.202s 00:14:14.262 22:59:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:14.262 22:59:46 -- common/autotest_common.sh@10 -- # set +x 00:14:14.262 ************************************ 00:14:14.262 END TEST nvmf_ns_hotplug_stress 00:14:14.262 ************************************ 00:14:14.262 22:59:46 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:14.262 22:59:46 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:14.262 22:59:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:14.262 22:59:46 -- common/autotest_common.sh@10 -- # set +x 00:14:14.262 ************************************ 00:14:14.262 START TEST nvmf_connect_stress 00:14:14.262 ************************************ 00:14:14.262 22:59:46 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh 
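The interleaved `ns_hotplug_stress.sh@16`/`@17`/`@18` trace lines above come from a loop that repeatedly attaches and detaches eight null-bdev namespaces on `cnode1`. A minimal stand-alone sketch of that pattern follows; `rpc` here is only a stub standing in for `scripts/rpc.py` so the shape of the loop can be seen (and run) without an SPDK target. The shuffled, backgrounded calls mimic the out-of-order add/remove lines in the trace.

```shell
# Stub for scripts/rpc.py: just echoes the call it would have made.
rpc() { echo "rpc.py $*"; }

NQN=nqn.2016-06.io.spdk:cnode1

for ((i = 0; i < 10; i++)); do
    # Attach namespaces 1..8 (backed by null0..null7, i.e. nsid-1),
    # in parallel and in shuffled order, as the trace shows.
    for n in $(shuf -e 1 2 3 4 5 6 7 8); do
        rpc nvmf_subsystem_add_ns -n "$n" "$NQN" "null$((n - 1))" &
    done
    wait
    # Detach them again, also in shuffled order.
    for n in $(shuf -e 1 2 3 4 5 6 7 8); do
        rpc nvmf_subsystem_remove_ns "$NQN" "$n" &
    done
    wait
done
```

With a real target, replacing the stub with `$rootdir/scripts/rpc.py` reproduces the hot-plug churn exercised by this test.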
--transport=tcp 00:14:14.522 * Looking for test storage... 00:14:14.522 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:14.522 22:59:46 -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:14.522 22:59:46 -- nvmf/common.sh@7 -- # uname -s 00:14:14.522 22:59:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:14.522 22:59:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:14.522 22:59:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:14.522 22:59:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:14.522 22:59:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:14.522 22:59:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:14.522 22:59:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:14.522 22:59:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:14.522 22:59:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:14.522 22:59:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:14.522 22:59:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:14:14.522 22:59:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:14:14.522 22:59:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:14.522 22:59:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:14.522 22:59:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:14.522 22:59:46 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:14.522 22:59:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:14.522 22:59:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:14.522 22:59:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:14.522 22:59:46 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.522 22:59:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.523 22:59:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.523 22:59:46 -- paths/export.sh@5 -- # export PATH 00:14:14.523 22:59:46 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.523 22:59:46 -- nvmf/common.sh@46 -- # : 0 00:14:14.523 22:59:46 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:14.523 22:59:46 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:14.523 22:59:46 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:14.523 22:59:46 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:14.523 22:59:46 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:14.523 22:59:46 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:14.523 22:59:46 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:14.523 22:59:46 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:14.523 22:59:46 -- target/connect_stress.sh@12 -- # nvmftestinit 00:14:14.523 22:59:46 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:14.523 22:59:46 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:14.523 22:59:46 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:14.523 22:59:46 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:14.523 22:59:46 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:14.523 22:59:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:14.523 22:59:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:14.523 22:59:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:14.523 22:59:46 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:14.523 22:59:46 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:14.523 22:59:46 -- 
nvmf/common.sh@284 -- # xtrace_disable 00:14:14.523 22:59:46 -- common/autotest_common.sh@10 -- # set +x 00:14:21.095 22:59:53 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:21.095 22:59:53 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:21.095 22:59:53 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:21.095 22:59:53 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:21.095 22:59:53 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:21.095 22:59:53 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:21.095 22:59:53 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:21.095 22:59:53 -- nvmf/common.sh@294 -- # net_devs=() 00:14:21.095 22:59:53 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:21.095 22:59:53 -- nvmf/common.sh@295 -- # e810=() 00:14:21.095 22:59:53 -- nvmf/common.sh@295 -- # local -ga e810 00:14:21.095 22:59:53 -- nvmf/common.sh@296 -- # x722=() 00:14:21.095 22:59:53 -- nvmf/common.sh@296 -- # local -ga x722 00:14:21.095 22:59:53 -- nvmf/common.sh@297 -- # mlx=() 00:14:21.095 22:59:53 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:21.095 22:59:53 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:21.095 22:59:53 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:21.095 22:59:53 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:21.095 22:59:53 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:21.095 22:59:53 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:21.095 22:59:53 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:21.095 22:59:53 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:21.095 22:59:53 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:21.095 22:59:53 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:21.095 22:59:53 -- nvmf/common.sh@316 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:21.095 22:59:53 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:21.095 22:59:53 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:21.095 22:59:53 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:21.095 22:59:53 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:14:21.095 22:59:53 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:14:21.095 22:59:53 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:14:21.095 22:59:53 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:21.095 22:59:53 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:21.095 22:59:53 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:14:21.095 Found 0000:af:00.0 (0x8086 - 0x159b) 00:14:21.095 22:59:53 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:21.095 22:59:53 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:21.095 22:59:53 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:21.095 22:59:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:21.095 22:59:53 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:21.095 22:59:53 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:21.095 22:59:53 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:14:21.095 Found 0000:af:00.1 (0x8086 - 0x159b) 00:14:21.095 22:59:53 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:21.095 22:59:53 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:21.095 22:59:53 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:21.095 22:59:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:21.095 22:59:53 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:21.095 22:59:53 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:21.095 22:59:53 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:14:21.095 22:59:53 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:14:21.095 22:59:53 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 
00:14:21.095 22:59:53 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:21.095 22:59:53 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:21.095 22:59:53 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:21.095 22:59:53 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:14:21.095 Found net devices under 0000:af:00.0: cvl_0_0 00:14:21.095 22:59:53 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:21.095 22:59:53 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:21.095 22:59:53 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:21.095 22:59:53 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:21.095 22:59:53 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:21.095 22:59:53 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:14:21.095 Found net devices under 0000:af:00.1: cvl_0_1 00:14:21.095 22:59:53 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:21.095 22:59:53 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:21.095 22:59:53 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:21.095 22:59:53 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:21.095 22:59:53 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:21.095 22:59:53 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:21.095 22:59:53 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:21.095 22:59:53 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:21.095 22:59:53 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:21.095 22:59:53 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:21.095 22:59:53 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:21.095 22:59:53 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:21.095 22:59:53 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:21.095 22:59:53 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
00:14:21.095 22:59:53 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:21.095 22:59:53 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:21.095 22:59:53 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:21.095 22:59:53 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:21.095 22:59:53 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:21.095 22:59:53 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:21.095 22:59:53 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:21.095 22:59:53 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:21.095 22:59:53 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:21.095 22:59:53 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:21.095 22:59:53 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:21.095 22:59:53 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:21.095 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:21.095 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.237 ms 00:14:21.095 00:14:21.095 --- 10.0.0.2 ping statistics --- 00:14:21.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:21.095 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:14:21.095 22:59:53 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:21.095 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:21.095 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:14:21.095 00:14:21.095 --- 10.0.0.1 ping statistics --- 00:14:21.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:21.095 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:14:21.095 22:59:53 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:21.095 22:59:53 -- nvmf/common.sh@410 -- # return 0 00:14:21.095 22:59:53 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:21.095 22:59:53 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:21.095 22:59:53 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:21.095 22:59:53 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:21.095 22:59:53 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:21.095 22:59:53 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:21.095 22:59:53 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:21.354 22:59:53 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:14:21.354 22:59:53 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:21.354 22:59:53 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:21.354 22:59:53 -- common/autotest_common.sh@10 -- # set +x 00:14:21.354 22:59:53 -- nvmf/common.sh@469 -- # nvmfpid=3149735 00:14:21.354 22:59:53 -- nvmf/common.sh@470 -- # waitforlisten 3149735 00:14:21.354 22:59:53 -- common/autotest_common.sh@819 -- # '[' -z 3149735 ']' 00:14:21.354 22:59:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:21.354 22:59:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:21.354 22:59:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:21.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
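The `nvmf_tcp_init` steps traced above (netns creation, interface moves, the two ping checks) set up a single-host TCP topology: the target-side port `cvl_0_0` is moved into a private network namespace so initiator and target can talk over a real TCP path. A dry-run sketch of that plumbing, using the addresses and names from this log, is below; `run` only echoes each command here, so it can execute without root. Drop the wrapper to apply it for real.

```shell
# Dry-run wrapper: print the command instead of executing it.
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0          # target-side interface, gets 10.0.0.2 inside $NS
INI_IF=cvl_0_1          # initiator-side interface, gets 10.0.0.1

run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run ping -c 1 10.0.0.2                          # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1      # target -> initiator
```

The two successful pings in the log above are exactly these reachability checks passing before the nvmf target app is started inside the namespace.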
00:14:21.354 22:59:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:21.354 22:59:53 -- common/autotest_common.sh@10 -- # set +x 00:14:21.354 22:59:53 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:21.354 [2024-07-24 22:59:53.582476] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:14:21.354 [2024-07-24 22:59:53.582523] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:21.354 EAL: No free 2048 kB hugepages reported on node 1 00:14:21.354 [2024-07-24 22:59:53.657404] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:21.355 [2024-07-24 22:59:53.695306] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:21.355 [2024-07-24 22:59:53.695429] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:21.355 [2024-07-24 22:59:53.695439] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:21.355 [2024-07-24 22:59:53.695448] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:21.355 [2024-07-24 22:59:53.695544] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:21.355 [2024-07-24 22:59:53.695626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:21.355 [2024-07-24 22:59:53.695628] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:22.290 22:59:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:22.290 22:59:54 -- common/autotest_common.sh@852 -- # return 0 00:14:22.290 22:59:54 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:22.290 22:59:54 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:22.290 22:59:54 -- common/autotest_common.sh@10 -- # set +x 00:14:22.290 22:59:54 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:22.290 22:59:54 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:22.290 22:59:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:22.290 22:59:54 -- common/autotest_common.sh@10 -- # set +x 00:14:22.290 [2024-07-24 22:59:54.430151] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:22.290 22:59:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:22.290 22:59:54 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:22.290 22:59:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:22.290 22:59:54 -- common/autotest_common.sh@10 -- # set +x 00:14:22.290 22:59:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:22.290 22:59:54 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:22.290 22:59:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:22.290 22:59:54 -- common/autotest_common.sh@10 -- # set +x 00:14:22.290 [2024-07-24 22:59:54.465828] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:14:22.290 22:59:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:22.290 22:59:54 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:22.290 22:59:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:22.290 22:59:54 -- common/autotest_common.sh@10 -- # set +x 00:14:22.290 NULL1 00:14:22.290 22:59:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:22.290 22:59:54 -- target/connect_stress.sh@21 -- # PERF_PID=3149814 00:14:22.290 22:59:54 -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:14:22.290 22:59:54 -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:22.290 22:59:54 -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:22.290 22:59:54 -- target/connect_stress.sh@27 -- # seq 1 20 00:14:22.290 22:59:54 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:22.290 22:59:54 -- target/connect_stress.sh@28 -- # cat 00:14:22.290 22:59:54 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:22.290 22:59:54 -- target/connect_stress.sh@28 -- # cat 00:14:22.290 22:59:54 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:22.290 22:59:54 -- target/connect_stress.sh@28 -- # cat 00:14:22.290 22:59:54 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:22.290 22:59:54 -- target/connect_stress.sh@28 -- # cat 00:14:22.290 EAL: No free 2048 kB hugepages reported on node 1 00:14:22.290 22:59:54 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:22.290 22:59:54 -- target/connect_stress.sh@28 -- # cat 00:14:22.290 22:59:54 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:22.290 22:59:54 -- target/connect_stress.sh@28 
-- # cat 00:14:22.290 22:59:54 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:22.290 22:59:54 -- target/connect_stress.sh@28 -- # cat 00:14:22.290 22:59:54 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:22.290 22:59:54 -- target/connect_stress.sh@28 -- # cat 00:14:22.290 22:59:54 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:22.290 22:59:54 -- target/connect_stress.sh@28 -- # cat 00:14:22.290 22:59:54 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:22.290 22:59:54 -- target/connect_stress.sh@28 -- # cat 00:14:22.290 22:59:54 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:22.291 22:59:54 -- target/connect_stress.sh@28 -- # cat 00:14:22.291 22:59:54 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:22.291 22:59:54 -- target/connect_stress.sh@28 -- # cat 00:14:22.291 22:59:54 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:22.291 22:59:54 -- target/connect_stress.sh@28 -- # cat 00:14:22.291 22:59:54 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:22.291 22:59:54 -- target/connect_stress.sh@28 -- # cat 00:14:22.291 22:59:54 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:22.291 22:59:54 -- target/connect_stress.sh@28 -- # cat 00:14:22.291 22:59:54 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:22.291 22:59:54 -- target/connect_stress.sh@28 -- # cat 00:14:22.291 22:59:54 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:22.291 22:59:54 -- target/connect_stress.sh@28 -- # cat 00:14:22.291 22:59:54 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:22.291 22:59:54 -- target/connect_stress.sh@28 -- # cat 00:14:22.291 22:59:54 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:22.291 22:59:54 -- target/connect_stress.sh@28 -- # cat 00:14:22.291 22:59:54 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:22.291 22:59:54 -- target/connect_stress.sh@28 -- # cat 00:14:22.291 
22:59:54 -- target/connect_stress.sh@34 -- # kill -0 3149814 00:14:22.291 22:59:54 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:22.291 22:59:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:22.291 22:59:54 -- common/autotest_common.sh@10 -- # set +x 00:14:22.550 22:59:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:22.550 22:59:54 -- target/connect_stress.sh@34 -- # kill -0 3149814 00:14:22.550 22:59:54 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:22.550 22:59:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:22.550 22:59:54 -- common/autotest_common.sh@10 -- # set +x 00:14:22.809 22:59:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:22.809 22:59:55 -- target/connect_stress.sh@34 -- # kill -0 3149814 00:14:22.809 22:59:55 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:22.809 22:59:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:22.809 22:59:55 -- common/autotest_common.sh@10 -- # set +x 00:14:23.378 22:59:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:23.378 22:59:55 -- target/connect_stress.sh@34 -- # kill -0 3149814 00:14:23.378 22:59:55 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:23.378 22:59:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:23.378 22:59:55 -- common/autotest_common.sh@10 -- # set +x 00:14:23.638 22:59:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:23.638 22:59:55 -- target/connect_stress.sh@34 -- # kill -0 3149814 00:14:23.638 22:59:55 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:23.638 22:59:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:23.638 22:59:55 -- common/autotest_common.sh@10 -- # set +x 00:14:23.897 22:59:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:23.897 22:59:56 -- target/connect_stress.sh@34 -- # kill -0 3149814 00:14:23.897 22:59:56 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:23.897 22:59:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:23.897 22:59:56 -- 
common/autotest_common.sh@10 -- # set +x 00:14:24.156 22:59:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:24.156 22:59:56 -- target/connect_stress.sh@34 -- # kill -0 3149814 00:14:24.156 22:59:56 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:24.156 22:59:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:24.156 22:59:56 -- common/autotest_common.sh@10 -- # set +x 00:14:24.723 22:59:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:24.723 22:59:56 -- target/connect_stress.sh@34 -- # kill -0 3149814 00:14:24.723 22:59:56 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:24.723 22:59:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:24.723 22:59:56 -- common/autotest_common.sh@10 -- # set +x 00:14:24.982 22:59:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:24.982 22:59:57 -- target/connect_stress.sh@34 -- # kill -0 3149814 00:14:24.982 22:59:57 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:24.983 22:59:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:24.983 22:59:57 -- common/autotest_common.sh@10 -- # set +x 00:14:25.241 22:59:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:25.241 22:59:57 -- target/connect_stress.sh@34 -- # kill -0 3149814 00:14:25.241 22:59:57 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:25.241 22:59:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:25.241 22:59:57 -- common/autotest_common.sh@10 -- # set +x 00:14:25.500 22:59:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:25.500 22:59:57 -- target/connect_stress.sh@34 -- # kill -0 3149814 00:14:25.500 22:59:57 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:25.500 22:59:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:25.500 22:59:57 -- common/autotest_common.sh@10 -- # set +x 00:14:25.758 22:59:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:25.758 22:59:58 -- target/connect_stress.sh@34 -- # kill -0 3149814 00:14:25.758 22:59:58 -- 
target/connect_stress.sh@35 -- # rpc_cmd 00:14:25.758 22:59:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:25.758 22:59:58 -- common/autotest_common.sh@10 -- # set +x 00:14:26.325 22:59:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:26.325 22:59:58 -- target/connect_stress.sh@34 -- # kill -0 3149814 00:14:26.325 22:59:58 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:26.325 22:59:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:26.325 22:59:58 -- common/autotest_common.sh@10 -- # set +x 00:14:26.584 22:59:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:26.584 22:59:58 -- target/connect_stress.sh@34 -- # kill -0 3149814 00:14:26.584 22:59:58 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:26.584 22:59:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:26.584 22:59:58 -- common/autotest_common.sh@10 -- # set +x 00:14:26.842 22:59:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:26.843 22:59:59 -- target/connect_stress.sh@34 -- # kill -0 3149814 00:14:26.843 22:59:59 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:26.843 22:59:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:26.843 22:59:59 -- common/autotest_common.sh@10 -- # set +x 00:14:27.101 22:59:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:27.101 22:59:59 -- target/connect_stress.sh@34 -- # kill -0 3149814 00:14:27.101 22:59:59 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:27.101 22:59:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:27.101 22:59:59 -- common/autotest_common.sh@10 -- # set +x 00:14:27.359 22:59:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:27.359 22:59:59 -- target/connect_stress.sh@34 -- # kill -0 3149814 00:14:27.359 22:59:59 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:27.359 22:59:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:27.359 22:59:59 -- common/autotest_common.sh@10 -- # set +x 00:14:27.928 23:00:00 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:27.928 23:00:00 -- target/connect_stress.sh@34 -- # kill -0 3149814 00:14:27.928 23:00:00 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:27.928 23:00:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:27.928 23:00:00 -- common/autotest_common.sh@10 -- # set +x 00:14:28.187 23:00:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:28.187 23:00:00 -- target/connect_stress.sh@34 -- # kill -0 3149814 00:14:28.187 23:00:00 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:28.187 23:00:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:28.187 23:00:00 -- common/autotest_common.sh@10 -- # set +x 00:14:28.446 23:00:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:28.446 23:00:00 -- target/connect_stress.sh@34 -- # kill -0 3149814 00:14:28.446 23:00:00 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:28.446 23:00:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:28.446 23:00:00 -- common/autotest_common.sh@10 -- # set +x 00:14:28.705 23:00:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:28.705 23:00:01 -- target/connect_stress.sh@34 -- # kill -0 3149814 00:14:28.705 23:00:01 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:28.705 23:00:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:28.705 23:00:01 -- common/autotest_common.sh@10 -- # set +x 00:14:29.274 23:00:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:29.274 23:00:01 -- target/connect_stress.sh@34 -- # kill -0 3149814 00:14:29.274 23:00:01 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:29.274 23:00:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:29.274 23:00:01 -- common/autotest_common.sh@10 -- # set +x 00:14:29.532 23:00:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:29.532 23:00:01 -- target/connect_stress.sh@34 -- # kill -0 3149814 00:14:29.532 23:00:01 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:29.532 23:00:01 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:14:29.532 23:00:01 -- common/autotest_common.sh@10 -- # set +x 00:14:29.794 23:00:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:29.794 23:00:02 -- target/connect_stress.sh@34 -- # kill -0 3149814 00:14:29.794 23:00:02 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:29.794 23:00:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:29.794 23:00:02 -- common/autotest_common.sh@10 -- # set +x 00:14:30.057 23:00:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:30.057 23:00:02 -- target/connect_stress.sh@34 -- # kill -0 3149814 00:14:30.057 23:00:02 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:30.057 23:00:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:30.057 23:00:02 -- common/autotest_common.sh@10 -- # set +x 00:14:30.316 23:00:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:30.316 23:00:02 -- target/connect_stress.sh@34 -- # kill -0 3149814 00:14:30.316 23:00:02 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:30.316 23:00:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:30.316 23:00:02 -- common/autotest_common.sh@10 -- # set +x 00:14:30.926 23:00:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:30.926 23:00:03 -- target/connect_stress.sh@34 -- # kill -0 3149814 00:14:30.926 23:00:03 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:30.926 23:00:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:30.926 23:00:03 -- common/autotest_common.sh@10 -- # set +x 00:14:31.185 23:00:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:31.185 23:00:03 -- target/connect_stress.sh@34 -- # kill -0 3149814 00:14:31.185 23:00:03 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:31.185 23:00:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:31.185 23:00:03 -- common/autotest_common.sh@10 -- # set +x 00:14:31.443 23:00:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:31.443 23:00:03 -- 
target/connect_stress.sh@34 -- # kill -0 3149814 00:14:31.443 23:00:03 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:31.443 23:00:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:31.443 23:00:03 -- common/autotest_common.sh@10 -- # set +x 00:14:31.701 23:00:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:31.701 23:00:04 -- target/connect_stress.sh@34 -- # kill -0 3149814 00:14:31.701 23:00:04 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:31.701 23:00:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:31.701 23:00:04 -- common/autotest_common.sh@10 -- # set +x 00:14:31.959 23:00:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:31.959 23:00:04 -- target/connect_stress.sh@34 -- # kill -0 3149814 00:14:31.959 23:00:04 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:31.959 23:00:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:31.959 23:00:04 -- common/autotest_common.sh@10 -- # set +x 00:14:32.218 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:32.477 23:00:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:32.477 23:00:04 -- target/connect_stress.sh@34 -- # kill -0 3149814 00:14:32.477 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3149814) - No such process 00:14:32.477 23:00:04 -- target/connect_stress.sh@38 -- # wait 3149814 00:14:32.477 23:00:04 -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:32.477 23:00:04 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:32.477 23:00:04 -- target/connect_stress.sh@43 -- # nvmftestfini 00:14:32.477 23:00:04 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:32.477 23:00:04 -- nvmf/common.sh@116 -- # sync 00:14:32.477 23:00:04 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:32.477 23:00:04 -- nvmf/common.sh@119 -- # set +e 00:14:32.477 23:00:04 -- nvmf/common.sh@120 -- 
# for i in {1..20} 00:14:32.477 23:00:04 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:32.477 rmmod nvme_tcp 00:14:32.477 rmmod nvme_fabrics 00:14:32.477 rmmod nvme_keyring 00:14:32.477 23:00:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:32.477 23:00:04 -- nvmf/common.sh@123 -- # set -e 00:14:32.477 23:00:04 -- nvmf/common.sh@124 -- # return 0 00:14:32.477 23:00:04 -- nvmf/common.sh@477 -- # '[' -n 3149735 ']' 00:14:32.477 23:00:04 -- nvmf/common.sh@478 -- # killprocess 3149735 00:14:32.478 23:00:04 -- common/autotest_common.sh@926 -- # '[' -z 3149735 ']' 00:14:32.478 23:00:04 -- common/autotest_common.sh@930 -- # kill -0 3149735 00:14:32.478 23:00:04 -- common/autotest_common.sh@931 -- # uname 00:14:32.478 23:00:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:32.478 23:00:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3149735 00:14:32.478 23:00:04 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:14:32.478 23:00:04 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:14:32.478 23:00:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3149735' 00:14:32.478 killing process with pid 3149735 00:14:32.478 23:00:04 -- common/autotest_common.sh@945 -- # kill 3149735 00:14:32.478 23:00:04 -- common/autotest_common.sh@950 -- # wait 3149735 00:14:32.736 23:00:04 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:32.736 23:00:04 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:32.736 23:00:04 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:32.736 23:00:04 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:32.736 23:00:04 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:32.736 23:00:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:32.736 23:00:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:32.736 23:00:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:34.640 
23:00:07 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:14:34.640 00:14:34.640 real 0m20.359s 00:14:34.640 user 0m40.585s 00:14:34.640 sys 0m10.040s 00:14:34.640 23:00:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:34.640 23:00:07 -- common/autotest_common.sh@10 -- # set +x 00:14:34.640 ************************************ 00:14:34.640 END TEST nvmf_connect_stress 00:14:34.640 ************************************ 00:14:34.900 23:00:07 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:34.900 23:00:07 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:34.900 23:00:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:34.900 23:00:07 -- common/autotest_common.sh@10 -- # set +x 00:14:34.900 ************************************ 00:14:34.900 START TEST nvmf_fused_ordering 00:14:34.900 ************************************ 00:14:34.900 23:00:07 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:34.900 * Looking for test storage... 
00:14:34.900 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:34.900 23:00:07 -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:34.900 23:00:07 -- nvmf/common.sh@7 -- # uname -s 00:14:34.900 23:00:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:34.900 23:00:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:34.900 23:00:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:34.900 23:00:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:34.900 23:00:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:34.900 23:00:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:34.900 23:00:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:34.900 23:00:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:34.900 23:00:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:34.900 23:00:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:34.900 23:00:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:14:34.900 23:00:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:14:34.900 23:00:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:34.900 23:00:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:34.900 23:00:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:34.900 23:00:07 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:34.900 23:00:07 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:34.900 23:00:07 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:34.900 23:00:07 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:34.900 23:00:07 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.900 23:00:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.900 23:00:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.900 23:00:07 -- paths/export.sh@5 -- # export PATH 00:14:34.900 23:00:07 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.900 23:00:07 -- nvmf/common.sh@46 -- # : 0 00:14:34.900 23:00:07 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:34.900 23:00:07 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:34.900 23:00:07 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:34.900 23:00:07 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:34.900 23:00:07 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:34.900 23:00:07 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:34.900 23:00:07 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:34.900 23:00:07 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:34.900 23:00:07 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:14:34.900 23:00:07 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:34.900 23:00:07 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:34.900 23:00:07 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:34.900 23:00:07 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:34.901 23:00:07 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:34.901 23:00:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:34.901 23:00:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:34.901 23:00:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:34.901 23:00:07 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:34.901 23:00:07 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:34.901 23:00:07 -- 
nvmf/common.sh@284 -- # xtrace_disable 00:14:34.901 23:00:07 -- common/autotest_common.sh@10 -- # set +x 00:14:41.468 23:00:12 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:41.468 23:00:12 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:41.468 23:00:12 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:41.468 23:00:12 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:41.468 23:00:12 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:41.468 23:00:12 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:41.468 23:00:12 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:41.468 23:00:12 -- nvmf/common.sh@294 -- # net_devs=() 00:14:41.468 23:00:12 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:41.468 23:00:12 -- nvmf/common.sh@295 -- # e810=() 00:14:41.468 23:00:12 -- nvmf/common.sh@295 -- # local -ga e810 00:14:41.468 23:00:12 -- nvmf/common.sh@296 -- # x722=() 00:14:41.468 23:00:12 -- nvmf/common.sh@296 -- # local -ga x722 00:14:41.468 23:00:12 -- nvmf/common.sh@297 -- # mlx=() 00:14:41.468 23:00:12 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:41.468 23:00:12 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:41.468 23:00:12 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:41.468 23:00:12 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:41.468 23:00:12 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:41.468 23:00:12 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:41.468 23:00:12 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:41.468 23:00:12 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:41.468 23:00:12 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:41.468 23:00:12 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:41.468 23:00:12 -- nvmf/common.sh@316 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:41.468 23:00:12 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:41.468 23:00:12 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:41.468 23:00:12 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:41.468 23:00:12 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:14:41.468 23:00:12 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:14:41.468 23:00:12 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:14:41.468 23:00:12 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:41.468 23:00:12 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:41.468 23:00:12 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:14:41.468 Found 0000:af:00.0 (0x8086 - 0x159b) 00:14:41.468 23:00:12 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:41.468 23:00:12 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:41.468 23:00:12 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:41.468 23:00:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:41.468 23:00:12 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:41.468 23:00:12 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:41.468 23:00:12 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:14:41.468 Found 0000:af:00.1 (0x8086 - 0x159b) 00:14:41.468 23:00:12 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:41.468 23:00:12 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:41.468 23:00:12 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:41.468 23:00:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:41.468 23:00:12 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:41.468 23:00:12 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:41.468 23:00:12 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:14:41.468 23:00:12 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:14:41.468 23:00:12 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 
00:14:41.468 23:00:12 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:41.468 23:00:12 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:41.468 23:00:12 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:41.468 23:00:12 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:14:41.468 Found net devices under 0000:af:00.0: cvl_0_0 00:14:41.468 23:00:12 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:41.468 23:00:12 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:41.468 23:00:12 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:41.468 23:00:12 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:41.468 23:00:12 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:41.468 23:00:12 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:14:41.468 Found net devices under 0000:af:00.1: cvl_0_1 00:14:41.468 23:00:12 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:41.468 23:00:12 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:41.468 23:00:12 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:41.468 23:00:12 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:41.468 23:00:12 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:41.468 23:00:12 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:41.468 23:00:12 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:41.468 23:00:12 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:41.468 23:00:12 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:41.468 23:00:12 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:41.468 23:00:12 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:41.468 23:00:12 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:41.468 23:00:12 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:41.468 23:00:12 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
00:14:41.468 23:00:12 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:14:41.468 23:00:12 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0
00:14:41.468 23:00:12 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1
00:14:41.468 23:00:12 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk
00:14:41.468 23:00:12 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:14:41.468 23:00:13 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:14:41.468 23:00:13 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:14:41.468 23:00:13 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up
00:14:41.468 23:00:13 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:14:41.468 23:00:13 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:14:41.468 23:00:13 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:14:41.468 23:00:13 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2
00:14:41.468 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:14:41.468 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms
00:14:41.468
00:14:41.468 --- 10.0.0.2 ping statistics ---
00:14:41.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:41.468 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms
00:14:41.468 23:00:13 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:14:41.468 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:14:41.468 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.249 ms
00:14:41.468
00:14:41.468 --- 10.0.0.1 ping statistics ---
00:14:41.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:41.468 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms
00:14:41.468 23:00:13 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:14:41.468 23:00:13 -- nvmf/common.sh@410 -- # return 0
00:14:41.468 23:00:13 -- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:14:41.468 23:00:13 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:14:41.468 23:00:13 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:14:41.468 23:00:13 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:14:41.468 23:00:13 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:14:41.468 23:00:13 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:14:41.468 23:00:13 -- nvmf/common.sh@462 -- # modprobe nvme-tcp
00:14:41.468 23:00:13 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2
00:14:41.468 23:00:13 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:14:41.468 23:00:13 -- common/autotest_common.sh@712 -- # xtrace_disable
00:14:41.468 23:00:13 -- common/autotest_common.sh@10 -- # set +x
00:14:41.468 23:00:13 -- nvmf/common.sh@469 -- # nvmfpid=3155667
00:14:41.468 23:00:13 -- nvmf/common.sh@470 -- # waitforlisten 3155667
00:14:41.468 23:00:13 -- common/autotest_common.sh@819 -- # '[' -z 3155667 ']'
00:14:41.468 23:00:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:14:41.468 23:00:13 -- common/autotest_common.sh@824 -- # local max_retries=100
00:14:41.468 23:00:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:14:41.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
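The `ip`/`iptables` sequence traced above builds a single-host loopback topology: one port of the NIC (cvl_0_0) is moved into a private network namespace and given the target IP, so initiator and target traffic crosses the physical link even though both ends live on one machine, and the NVMe/TCP port 4420 is opened on the initiator side. A sketch of the same steps, using a `run` wrapper that only echoes the commands here, since the real ones need root and the cvl_0_* interfaces of this test bed:

```shell
#!/usr/bin/env bash
set -eu
run() { echo "+ $*"; }   # stand-in: replace the echo with "$@" (as root) on real hardware

NS=cvl_0_0_ns_spdk
run ip netns add "$NS"                       # private namespace for the target side
run ip link set cvl_0_0 netns "$NS"          # move the target port out of the host
run ip addr add 10.0.0.1/24 dev cvl_0_1      # initiator address on the host side
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP port
run ping -c 1 10.0.0.2                       # verify host -> namespace reachability
```

The ping at the end mirrors the log's own sanity check before any NVMe traffic is attempted.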
00:14:41.468 23:00:13 -- common/autotest_common.sh@828 -- # xtrace_disable
00:14:41.468 23:00:13 -- common/autotest_common.sh@10 -- # set +x
00:14:41.468 23:00:13 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:14:41.468 [2024-07-24 23:00:13.254116] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:14:41.468 [2024-07-24 23:00:13.254168] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:14:41.468 EAL: No free 2048 kB hugepages reported on node 1
00:14:41.468 [2024-07-24 23:00:13.330458] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:41.468 [2024-07-24 23:00:13.369168] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:14:41.468 [2024-07-24 23:00:13.369272] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:14:41.468 [2024-07-24 23:00:13.369282] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:14:41.468 [2024-07-24 23:00:13.369289] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
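While `nvmf_tgt` initializes inside the namespace, the harness's `waitforlisten` (with `max_retries=100`, as traced above) polls until the RPC socket `/var/tmp/spdk.sock` exists before any RPCs are issued. The same idea, reduced to a generic self-contained wait helper; the 0.1 s poll interval is an assumption for illustration, not the harness's exact timing:

```shell
#!/usr/bin/env bash
# Poll for a path to appear, up to max_retries polls of 0.1 s each.
# Returns 0 once the path exists, 1 on timeout.
wait_for_path() {
    path=$1
    max_retries=${2:-100}
    i=0
    while [ "$i" -lt "$max_retries" ]; do
        [ -e "$path" ] && return 0
        sleep 0.1
        i=$((i + 1))
    done
    return 1
}

# Demo: create the "socket" shortly after we start waiting.
tmp=$(mktemp -d)
( sleep 0.3; touch "$tmp/spdk.sock" ) &
wait_for_path "$tmp/spdk.sock" && echo "listening"   # prints "listening"
wait
rm -r "$tmp"
```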
00:14:41.468 [2024-07-24 23:00:13.369309] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:14:41.727 23:00:14 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:14:41.727 23:00:14 -- common/autotest_common.sh@852 -- # return 0
00:14:41.727 23:00:14 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:14:41.727 23:00:14 -- common/autotest_common.sh@718 -- # xtrace_disable
00:14:41.727 23:00:14 -- common/autotest_common.sh@10 -- # set +x
00:14:41.727 23:00:14 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:14:41.727 23:00:14 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:14:41.727 23:00:14 -- common/autotest_common.sh@551 -- # xtrace_disable
00:14:41.727 23:00:14 -- common/autotest_common.sh@10 -- # set +x
00:14:41.727 [2024-07-24 23:00:14.063338] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:14:41.727 23:00:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:14:41.727 23:00:14 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:14:41.727 23:00:14 -- common/autotest_common.sh@551 -- # xtrace_disable
00:14:41.727 23:00:14 -- common/autotest_common.sh@10 -- # set +x
00:14:41.727 23:00:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:14:41.727 23:00:14 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:14:41.727 23:00:14 -- common/autotest_common.sh@551 -- # xtrace_disable
00:14:41.727 23:00:14 -- common/autotest_common.sh@10 -- # set +x
00:14:41.727 [2024-07-24 23:00:14.079495] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:14:41.727 23:00:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:14:41.727 23:00:14 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:14:41.727 23:00:14 -- common/autotest_common.sh@551 -- # xtrace_disable
00:14:41.727 23:00:14 -- common/autotest_common.sh@10 -- # set +x
00:14:41.727 NULL1
00:14:41.727 23:00:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:14:41.727 23:00:14 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine
00:14:41.727 23:00:14 -- common/autotest_common.sh@551 -- # xtrace_disable
00:14:41.727 23:00:14 -- common/autotest_common.sh@10 -- # set +x
00:14:41.727 23:00:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:14:41.727 23:00:14 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
00:14:41.727 23:00:14 -- common/autotest_common.sh@551 -- # xtrace_disable
00:14:41.727 23:00:14 -- common/autotest_common.sh@10 -- # set +x
00:14:41.727 23:00:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:14:41.727 23:00:14 -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:14:41.727 [2024-07-24 23:00:14.122245] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
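Steps @15 through @20 of fused_ordering.sh provision the target over the RPC socket: create the TCP transport, a subsystem, a listener, and a 1000 MiB null bdev that becomes namespace 1. The harness's `rpc_cmd` wrapper ultimately drives SPDK's `scripts/rpc.py`; a sketch of the equivalent sequence is below, with all flags copied verbatim from the log. The `rpc` function here is a stub that echoes instead of contacting `/var/tmp/spdk.sock`, so the sequence can be read (and tested) offline:

```shell
#!/usr/bin/env bash
set -eu
# Stand-in for scripts/rpc.py from an SPDK checkout; replace the echo with
# an actual invocation against a running nvmf_tgt to provision for real.
rpc() { echo "rpc.py $*"; }

NQN=nqn.2016-06.io.spdk:cnode1
rpc nvmf_create_transport -t tcp -o -u 8192            # TCP transport, flags as in the log
rpc nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10   # allow any host, serial, max namespaces
rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420 # listen on the target-netns address
rpc bdev_null_create NULL1 1000 512                    # 1000 MiB null bdev, 512-byte blocks
rpc bdev_wait_for_examine
rpc nvmf_subsystem_add_ns "$NQN" NULL1                 # expose the bdev as a namespace
```

This matches the target-side notices in the log ("TCP Transport Init", "Listening on 10.0.0.2 port 4420", "Namespace ID: 1 size: 1GB").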
00:14:41.727 [2024-07-24 23:00:14.122282] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3155909 ]
00:14:41.727 EAL: No free 2048 kB hugepages reported on node 1
00:14:42.294 Attached to nqn.2016-06.io.spdk:cnode1
00:14:42.294 Namespace ID: 1 size: 1GB
00:14:42.294 fused_ordering(0)
[fused_ordering(1) through fused_ordering(835): one checkpoint line per iteration, counters consecutive, timestamps advancing 00:14:42.294 through 00:14:44.268]
00:14:44.268 fused_ordering(836)
00:14:44.268 fused_ordering(837) 00:14:44.268 fused_ordering(838) 00:14:44.268 fused_ordering(839) 00:14:44.268 fused_ordering(840) 00:14:44.268 fused_ordering(841) 00:14:44.268 fused_ordering(842) 00:14:44.268 fused_ordering(843) 00:14:44.268 fused_ordering(844) 00:14:44.268 fused_ordering(845) 00:14:44.268 fused_ordering(846) 00:14:44.268 fused_ordering(847) 00:14:44.268 fused_ordering(848) 00:14:44.268 fused_ordering(849) 00:14:44.268 fused_ordering(850) 00:14:44.268 fused_ordering(851) 00:14:44.268 fused_ordering(852) 00:14:44.268 fused_ordering(853) 00:14:44.268 fused_ordering(854) 00:14:44.268 fused_ordering(855) 00:14:44.268 fused_ordering(856) 00:14:44.268 fused_ordering(857) 00:14:44.268 fused_ordering(858) 00:14:44.268 fused_ordering(859) 00:14:44.268 fused_ordering(860) 00:14:44.268 fused_ordering(861) 00:14:44.268 fused_ordering(862) 00:14:44.268 fused_ordering(863) 00:14:44.268 fused_ordering(864) 00:14:44.268 fused_ordering(865) 00:14:44.268 fused_ordering(866) 00:14:44.268 fused_ordering(867) 00:14:44.268 fused_ordering(868) 00:14:44.268 fused_ordering(869) 00:14:44.268 fused_ordering(870) 00:14:44.268 fused_ordering(871) 00:14:44.268 fused_ordering(872) 00:14:44.268 fused_ordering(873) 00:14:44.268 fused_ordering(874) 00:14:44.268 fused_ordering(875) 00:14:44.268 fused_ordering(876) 00:14:44.268 fused_ordering(877) 00:14:44.268 fused_ordering(878) 00:14:44.268 fused_ordering(879) 00:14:44.268 fused_ordering(880) 00:14:44.268 fused_ordering(881) 00:14:44.268 fused_ordering(882) 00:14:44.268 fused_ordering(883) 00:14:44.268 fused_ordering(884) 00:14:44.268 fused_ordering(885) 00:14:44.268 fused_ordering(886) 00:14:44.268 fused_ordering(887) 00:14:44.268 fused_ordering(888) 00:14:44.268 fused_ordering(889) 00:14:44.268 fused_ordering(890) 00:14:44.268 fused_ordering(891) 00:14:44.268 fused_ordering(892) 00:14:44.268 fused_ordering(893) 00:14:44.268 fused_ordering(894) 00:14:44.268 fused_ordering(895) 00:14:44.268 fused_ordering(896) 00:14:44.268 
fused_ordering(897) 00:14:44.268 fused_ordering(898) 00:14:44.268 fused_ordering(899) 00:14:44.268 fused_ordering(900) 00:14:44.268 fused_ordering(901) 00:14:44.268 fused_ordering(902) 00:14:44.268 fused_ordering(903) 00:14:44.268 fused_ordering(904) 00:14:44.268 fused_ordering(905) 00:14:44.268 fused_ordering(906) 00:14:44.268 fused_ordering(907) 00:14:44.268 fused_ordering(908) 00:14:44.268 fused_ordering(909) 00:14:44.268 fused_ordering(910) 00:14:44.268 fused_ordering(911) 00:14:44.268 fused_ordering(912) 00:14:44.268 fused_ordering(913) 00:14:44.268 fused_ordering(914) 00:14:44.268 fused_ordering(915) 00:14:44.268 fused_ordering(916) 00:14:44.268 fused_ordering(917) 00:14:44.268 fused_ordering(918) 00:14:44.268 fused_ordering(919) 00:14:44.268 fused_ordering(920) 00:14:44.268 fused_ordering(921) 00:14:44.268 fused_ordering(922) 00:14:44.268 fused_ordering(923) 00:14:44.268 fused_ordering(924) 00:14:44.268 fused_ordering(925) 00:14:44.268 fused_ordering(926) 00:14:44.268 fused_ordering(927) 00:14:44.268 fused_ordering(928) 00:14:44.268 fused_ordering(929) 00:14:44.268 fused_ordering(930) 00:14:44.268 fused_ordering(931) 00:14:44.268 fused_ordering(932) 00:14:44.268 fused_ordering(933) 00:14:44.268 fused_ordering(934) 00:14:44.268 fused_ordering(935) 00:14:44.268 fused_ordering(936) 00:14:44.268 fused_ordering(937) 00:14:44.268 fused_ordering(938) 00:14:44.268 fused_ordering(939) 00:14:44.268 fused_ordering(940) 00:14:44.268 fused_ordering(941) 00:14:44.268 fused_ordering(942) 00:14:44.268 fused_ordering(943) 00:14:44.268 fused_ordering(944) 00:14:44.268 fused_ordering(945) 00:14:44.268 fused_ordering(946) 00:14:44.268 fused_ordering(947) 00:14:44.268 fused_ordering(948) 00:14:44.268 fused_ordering(949) 00:14:44.268 fused_ordering(950) 00:14:44.268 fused_ordering(951) 00:14:44.268 fused_ordering(952) 00:14:44.268 fused_ordering(953) 00:14:44.268 fused_ordering(954) 00:14:44.268 fused_ordering(955) 00:14:44.268 fused_ordering(956) 00:14:44.268 fused_ordering(957) 
00:14:44.268 fused_ordering(958) 00:14:44.268 fused_ordering(959) 00:14:44.268 fused_ordering(960) 00:14:44.268 fused_ordering(961) 00:14:44.268 fused_ordering(962) 00:14:44.268 fused_ordering(963) 00:14:44.268 fused_ordering(964) 00:14:44.268 fused_ordering(965) 00:14:44.268 fused_ordering(966) 00:14:44.268 fused_ordering(967) 00:14:44.268 fused_ordering(968) 00:14:44.268 fused_ordering(969) 00:14:44.268 fused_ordering(970) 00:14:44.268 fused_ordering(971) 00:14:44.268 fused_ordering(972) 00:14:44.268 fused_ordering(973) 00:14:44.268 fused_ordering(974) 00:14:44.268 fused_ordering(975) 00:14:44.268 fused_ordering(976) 00:14:44.268 fused_ordering(977) 00:14:44.268 fused_ordering(978) 00:14:44.268 fused_ordering(979) 00:14:44.268 fused_ordering(980) 00:14:44.268 fused_ordering(981) 00:14:44.268 fused_ordering(982) 00:14:44.268 fused_ordering(983) 00:14:44.268 fused_ordering(984) 00:14:44.268 fused_ordering(985) 00:14:44.268 fused_ordering(986) 00:14:44.268 fused_ordering(987) 00:14:44.268 fused_ordering(988) 00:14:44.268 fused_ordering(989) 00:14:44.268 fused_ordering(990) 00:14:44.268 fused_ordering(991) 00:14:44.268 fused_ordering(992) 00:14:44.268 fused_ordering(993) 00:14:44.268 fused_ordering(994) 00:14:44.268 fused_ordering(995) 00:14:44.268 fused_ordering(996) 00:14:44.268 fused_ordering(997) 00:14:44.268 fused_ordering(998) 00:14:44.268 fused_ordering(999) 00:14:44.268 fused_ordering(1000) 00:14:44.269 fused_ordering(1001) 00:14:44.269 fused_ordering(1002) 00:14:44.269 fused_ordering(1003) 00:14:44.269 fused_ordering(1004) 00:14:44.269 fused_ordering(1005) 00:14:44.269 fused_ordering(1006) 00:14:44.269 fused_ordering(1007) 00:14:44.269 fused_ordering(1008) 00:14:44.269 fused_ordering(1009) 00:14:44.269 fused_ordering(1010) 00:14:44.269 fused_ordering(1011) 00:14:44.269 fused_ordering(1012) 00:14:44.269 fused_ordering(1013) 00:14:44.269 fused_ordering(1014) 00:14:44.269 fused_ordering(1015) 00:14:44.269 fused_ordering(1016) 00:14:44.269 fused_ordering(1017) 
00:14:44.269 fused_ordering(1018) 00:14:44.269 fused_ordering(1019) 00:14:44.269 fused_ordering(1020) 00:14:44.269 fused_ordering(1021) 00:14:44.269 fused_ordering(1022) 00:14:44.269 fused_ordering(1023) 00:14:44.269 23:00:16 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:14:44.269 23:00:16 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:14:44.269 23:00:16 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:44.269 23:00:16 -- nvmf/common.sh@116 -- # sync 00:14:44.269 23:00:16 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:44.269 23:00:16 -- nvmf/common.sh@119 -- # set +e 00:14:44.269 23:00:16 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:44.269 23:00:16 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:44.269 rmmod nvme_tcp 00:14:44.269 rmmod nvme_fabrics 00:14:44.526 rmmod nvme_keyring 00:14:44.526 23:00:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:44.526 23:00:16 -- nvmf/common.sh@123 -- # set -e 00:14:44.526 23:00:16 -- nvmf/common.sh@124 -- # return 0 00:14:44.526 23:00:16 -- nvmf/common.sh@477 -- # '[' -n 3155667 ']' 00:14:44.526 23:00:16 -- nvmf/common.sh@478 -- # killprocess 3155667 00:14:44.526 23:00:16 -- common/autotest_common.sh@926 -- # '[' -z 3155667 ']' 00:14:44.526 23:00:16 -- common/autotest_common.sh@930 -- # kill -0 3155667 00:14:44.526 23:00:16 -- common/autotest_common.sh@931 -- # uname 00:14:44.526 23:00:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:44.526 23:00:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3155667 00:14:44.526 23:00:16 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:14:44.526 23:00:16 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:14:44.526 23:00:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3155667' 00:14:44.526 killing process with pid 3155667 00:14:44.526 23:00:16 -- common/autotest_common.sh@945 -- # kill 3155667 00:14:44.526 23:00:16 -- common/autotest_common.sh@950 -- 
# wait 3155667 00:14:44.526 23:00:16 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:44.526 23:00:16 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:44.526 23:00:16 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:44.526 23:00:16 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:44.526 23:00:16 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:44.526 23:00:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:44.526 23:00:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:44.526 23:00:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:47.061 23:00:19 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:14:47.061 00:14:47.061 real 0m11.945s 00:14:47.061 user 0m6.108s 00:14:47.061 sys 0m6.761s 00:14:47.061 23:00:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:47.061 23:00:19 -- common/autotest_common.sh@10 -- # set +x 00:14:47.061 ************************************ 00:14:47.061 END TEST nvmf_fused_ordering 00:14:47.061 ************************************ 00:14:47.061 23:00:19 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:47.061 23:00:19 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:47.061 23:00:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:47.061 23:00:19 -- common/autotest_common.sh@10 -- # set +x 00:14:47.061 ************************************ 00:14:47.061 START TEST nvmf_delete_subsystem 00:14:47.061 ************************************ 00:14:47.061 23:00:19 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:47.061 * Looking for test storage... 
00:14:47.061 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:47.061 23:00:19 -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:47.061 23:00:19 -- nvmf/common.sh@7 -- # uname -s 00:14:47.061 23:00:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:47.061 23:00:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:47.061 23:00:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:47.061 23:00:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:47.061 23:00:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:47.061 23:00:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:47.061 23:00:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:47.061 23:00:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:47.061 23:00:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:47.061 23:00:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:47.061 23:00:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:14:47.061 23:00:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:14:47.061 23:00:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:47.061 23:00:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:47.061 23:00:19 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:47.061 23:00:19 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:47.061 23:00:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:47.061 23:00:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:47.061 23:00:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:47.061 23:00:19 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.061 23:00:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.061 23:00:19 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.061 23:00:19 -- paths/export.sh@5 -- # export PATH 00:14:47.061 23:00:19 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.061 23:00:19 -- nvmf/common.sh@46 -- # : 0 00:14:47.061 23:00:19 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:47.061 23:00:19 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:47.061 23:00:19 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:47.061 23:00:19 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:47.061 23:00:19 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:47.061 23:00:19 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:47.061 23:00:19 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:47.061 23:00:19 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:47.061 23:00:19 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:14:47.061 23:00:19 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:47.061 23:00:19 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:47.061 23:00:19 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:47.061 23:00:19 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:47.061 23:00:19 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:47.062 23:00:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:47.062 23:00:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:47.062 23:00:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:47.062 23:00:19 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:47.062 23:00:19 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:47.062 23:00:19 
-- nvmf/common.sh@284 -- # xtrace_disable 00:14:47.062 23:00:19 -- common/autotest_common.sh@10 -- # set +x 00:14:53.661 23:00:25 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:53.661 23:00:25 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:53.661 23:00:25 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:53.661 23:00:25 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:53.661 23:00:25 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:53.661 23:00:25 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:53.661 23:00:25 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:53.661 23:00:25 -- nvmf/common.sh@294 -- # net_devs=() 00:14:53.661 23:00:25 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:53.661 23:00:25 -- nvmf/common.sh@295 -- # e810=() 00:14:53.661 23:00:25 -- nvmf/common.sh@295 -- # local -ga e810 00:14:53.661 23:00:25 -- nvmf/common.sh@296 -- # x722=() 00:14:53.661 23:00:25 -- nvmf/common.sh@296 -- # local -ga x722 00:14:53.661 23:00:25 -- nvmf/common.sh@297 -- # mlx=() 00:14:53.661 23:00:25 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:53.661 23:00:25 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:53.661 23:00:25 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:53.661 23:00:25 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:53.661 23:00:25 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:53.661 23:00:25 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:53.661 23:00:25 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:53.661 23:00:25 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:53.661 23:00:25 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:53.661 23:00:25 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:53.661 23:00:25 -- nvmf/common.sh@316 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:53.661 23:00:25 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:53.661 23:00:25 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:53.661 23:00:25 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:53.661 23:00:25 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:14:53.661 23:00:25 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:14:53.661 23:00:25 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:14:53.661 23:00:25 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:53.661 23:00:25 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:53.661 23:00:25 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:14:53.661 Found 0000:af:00.0 (0x8086 - 0x159b) 00:14:53.661 23:00:25 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:53.661 23:00:25 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:53.661 23:00:25 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:53.661 23:00:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:53.661 23:00:25 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:53.661 23:00:25 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:53.661 23:00:25 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:14:53.661 Found 0000:af:00.1 (0x8086 - 0x159b) 00:14:53.661 23:00:25 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:53.661 23:00:25 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:53.661 23:00:25 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:53.661 23:00:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:53.661 23:00:25 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:53.661 23:00:25 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:53.661 23:00:25 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:14:53.661 23:00:25 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:14:53.661 23:00:25 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 
00:14:53.661 23:00:25 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:53.661 23:00:25 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:53.661 23:00:25 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:53.661 23:00:25 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:14:53.661 Found net devices under 0000:af:00.0: cvl_0_0 00:14:53.661 23:00:25 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:53.661 23:00:25 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:53.661 23:00:25 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:53.661 23:00:25 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:53.661 23:00:25 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:53.661 23:00:25 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:14:53.661 Found net devices under 0000:af:00.1: cvl_0_1 00:14:53.661 23:00:25 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:53.661 23:00:25 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:53.661 23:00:25 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:53.661 23:00:25 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:53.661 23:00:25 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:53.661 23:00:25 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:53.661 23:00:25 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:53.661 23:00:25 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:53.661 23:00:25 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:53.661 23:00:25 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:53.661 23:00:25 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:53.661 23:00:25 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:53.661 23:00:25 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:53.661 23:00:25 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
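The device-scan trace above buckets PCI NICs by vendor:device ID before picking the test interfaces. A minimal sketch of that bucketing, using only the IDs actually echoed in this log (the `classify_nic` helper name is illustrative, not part of nvmf/common.sh, which builds `e810`/`x722`/`mlx` arrays instead):

```shell
# Sketch of the PCI-ID classification visible in the trace above.
# Intel E810 parts (0x1592, 0x159b) and X722 (0x37d2) vs. Mellanox (0x15b3)
# are sorted into families; the log's "Found 0000:af:00.0 (0x8086 - 0x159b)"
# lines correspond to the e810 bucket.
classify_nic() {
    case "$1" in
        0x8086:0x1592|0x8086:0x159b) echo e810 ;;   # Intel E810
        0x8086:0x37d2)               echo x722 ;;   # Intel X722
        0x15b3:*)                    echo mlx  ;;   # Mellanox ConnectX family
        *)                           echo unknown ;;
    esac
}

classify_nic 0x8086:0x159b   # -> e810 (the two devices found in this run)
classify_nic 0x15b3:0x1017   # -> mlx
```

Both discovered devices land in the `e810` bucket, which is why the run then takes the `[[ e810 == e810 ]]` branch and exposes `cvl_0_0`/`cvl_0_1`.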
00:14:53.661 23:00:25 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:53.661 23:00:25 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:53.661 23:00:25 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:53.661 23:00:25 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:53.661 23:00:25 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:53.661 23:00:25 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:53.661 23:00:25 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:53.661 23:00:25 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:53.661 23:00:25 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:53.661 23:00:25 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:53.661 23:00:25 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:53.661 23:00:25 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:53.661 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:53.661 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:14:53.661 00:14:53.661 --- 10.0.0.2 ping statistics --- 00:14:53.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:53.661 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:14:53.661 23:00:25 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:53.661 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:53.661 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.258 ms 00:14:53.661 00:14:53.661 --- 10.0.0.1 ping statistics --- 00:14:53.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:53.661 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:14:53.661 23:00:25 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:53.661 23:00:25 -- nvmf/common.sh@410 -- # return 0 00:14:53.661 23:00:25 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:53.661 23:00:25 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:53.661 23:00:25 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:53.661 23:00:25 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:53.661 23:00:25 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:53.661 23:00:25 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:53.661 23:00:25 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:53.661 23:00:25 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:14:53.661 23:00:25 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:53.661 23:00:25 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:53.661 23:00:25 -- common/autotest_common.sh@10 -- # set +x 00:14:53.661 23:00:25 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:14:53.662 23:00:25 -- nvmf/common.sh@469 -- # nvmfpid=3160030 00:14:53.662 23:00:25 -- nvmf/common.sh@470 -- # waitforlisten 3160030 00:14:53.662 23:00:25 -- common/autotest_common.sh@819 -- # '[' -z 3160030 ']' 00:14:53.662 23:00:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:53.662 23:00:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:53.662 23:00:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:53.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:53.662 23:00:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:53.662 23:00:25 -- common/autotest_common.sh@10 -- # set +x 00:14:53.662 [2024-07-24 23:00:25.991543] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:14:53.662 [2024-07-24 23:00:25.991590] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:53.662 EAL: No free 2048 kB hugepages reported on node 1 00:14:53.662 [2024-07-24 23:00:26.061951] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:53.921 [2024-07-24 23:00:26.099611] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:53.921 [2024-07-24 23:00:26.099756] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:53.921 [2024-07-24 23:00:26.099767] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:53.921 [2024-07-24 23:00:26.099776] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:53.921 [2024-07-24 23:00:26.099812] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:53.921 [2024-07-24 23:00:26.099815] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:54.488 23:00:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:54.488 23:00:26 -- common/autotest_common.sh@852 -- # return 0 00:14:54.488 23:00:26 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:54.488 23:00:26 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:54.488 23:00:26 -- common/autotest_common.sh@10 -- # set +x 00:14:54.488 23:00:26 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:54.488 23:00:26 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:54.488 23:00:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:54.488 23:00:26 -- common/autotest_common.sh@10 -- # set +x 00:14:54.488 [2024-07-24 23:00:26.855685] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:54.488 23:00:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:54.488 23:00:26 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:54.488 23:00:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:54.488 23:00:26 -- common/autotest_common.sh@10 -- # set +x 00:14:54.488 23:00:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:54.488 23:00:26 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:54.488 23:00:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:54.488 23:00:26 -- common/autotest_common.sh@10 -- # set +x 00:14:54.488 [2024-07-24 23:00:26.879864] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:54.488 23:00:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
00:14:54.488 23:00:26 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:54.488 23:00:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:54.488 23:00:26 -- common/autotest_common.sh@10 -- # set +x 00:14:54.488 NULL1 00:14:54.488 23:00:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:54.488 23:00:26 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:54.488 23:00:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:54.488 23:00:26 -- common/autotest_common.sh@10 -- # set +x 00:14:54.488 Delay0 00:14:54.488 23:00:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:54.488 23:00:26 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:54.488 23:00:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:54.488 23:00:26 -- common/autotest_common.sh@10 -- # set +x 00:14:54.488 23:00:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:54.488 23:00:26 -- target/delete_subsystem.sh@28 -- # perf_pid=3160169 00:14:54.488 23:00:26 -- target/delete_subsystem.sh@30 -- # sleep 2 00:14:54.488 23:00:26 -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:54.747 EAL: No free 2048 kB hugepages reported on node 1 00:14:54.747 [2024-07-24 23:00:26.966449] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:14:56.654 23:00:28 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:56.654 23:00:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:56.654 23:00:28 -- common/autotest_common.sh@10 -- # set +x 00:14:56.654 Write completed with error (sct=0, sc=8) 00:14:56.654 Read completed with error (sct=0, sc=8) 00:14:56.654 starting I/O failed: -6 00:14:56.654 [2024-07-24 23:00:29.007472] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fcbd000bf20 is same with the state(5) to be set 00:14:56.655 [2024-07-24 23:00:29.008087] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fcbd000c480 is same with the state(5) to be set 00:14:57.595 [2024-07-24 23:00:29.980617] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114dc30 is same with the state(5) to be set 00:14:57.595 [2024-07-24 23:00:30.005891] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fcbd000c1d0 is same with the state(5) to be set 00:14:57.595 [2024-07-24 23:00:30.009397] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114c0e0 is same with the state(5) to be set 00:14:57.595 [2024-07-24 23:00:30.009745] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1166f80 is same with the state(5) to be set 00:14:57.595 [2024-07-24 23:00:30.009911] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11673b0 is same with the state(5) to be set 00:14:57.595 [2024-07-24 23:00:30.010506] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114dc30 (9): Bad file descriptor 00:14:57.595 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:14:57.595 23:00:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:57.595 23:00:30 -- target/delete_subsystem.sh@34 -- # delay=0 00:14:57.595 23:00:30 -- target/delete_subsystem.sh@35 -- # kill -0 3160169 00:14:57.595 23:00:30 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:14:57.595 Initializing NVMe Controllers 00:14:57.595 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:57.595 Controller IO queue size 128, less than required. 00:14:57.595 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:57.595 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:14:57.595 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:14:57.595 Initialization complete. Launching workers. 
00:14:57.595 ======================================================== 00:14:57.595 Latency(us) 00:14:57.595 Device Information : IOPS MiB/s Average min max 00:14:57.595 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 195.56 0.10 943696.25 728.11 1012588.12 00:14:57.595 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 162.31 0.08 895127.40 311.00 1010632.09 00:14:57.595 ======================================================== 00:14:57.595 Total : 357.87 0.17 921668.49 311.00 1012588.12 00:14:57.595 00:14:58.162 23:00:30 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:14:58.162 23:00:30 -- target/delete_subsystem.sh@35 -- # kill -0 3160169 00:14:58.162 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3160169) - No such process 00:14:58.162 23:00:30 -- target/delete_subsystem.sh@45 -- # NOT wait 3160169 00:14:58.162 23:00:30 -- common/autotest_common.sh@640 -- # local es=0 00:14:58.162 23:00:30 -- common/autotest_common.sh@642 -- # valid_exec_arg wait 3160169 00:14:58.162 23:00:30 -- common/autotest_common.sh@628 -- # local arg=wait 00:14:58.162 23:00:30 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:58.162 23:00:30 -- common/autotest_common.sh@632 -- # type -t wait 00:14:58.162 23:00:30 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:58.162 23:00:30 -- common/autotest_common.sh@643 -- # wait 3160169 00:14:58.162 23:00:30 -- common/autotest_common.sh@643 -- # es=1 00:14:58.162 23:00:30 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:14:58.162 23:00:30 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:14:58.162 23:00:30 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:14:58.162 23:00:30 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:58.162 23:00:30 -- common/autotest_common.sh@551 -- # xtrace_disable 
00:14:58.162 23:00:30 -- common/autotest_common.sh@10 -- # set +x 00:14:58.163 23:00:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:58.163 23:00:30 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:58.163 23:00:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:58.163 23:00:30 -- common/autotest_common.sh@10 -- # set +x 00:14:58.163 [2024-07-24 23:00:30.539049] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:58.163 23:00:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:58.163 23:00:30 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:58.163 23:00:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:58.163 23:00:30 -- common/autotest_common.sh@10 -- # set +x 00:14:58.163 23:00:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:58.163 23:00:30 -- target/delete_subsystem.sh@54 -- # perf_pid=3160812 00:14:58.163 23:00:30 -- target/delete_subsystem.sh@56 -- # delay=0 00:14:58.163 23:00:30 -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:58.163 23:00:30 -- target/delete_subsystem.sh@57 -- # kill -0 3160812 00:14:58.163 23:00:30 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:58.163 EAL: No free 2048 kB hugepages reported on node 1 00:14:58.422 [2024-07-24 23:00:30.608634] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:14:58.684 23:00:31 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:58.684 23:00:31 -- target/delete_subsystem.sh@57 -- # kill -0 3160812 00:14:58.684 23:00:31 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:59.252 23:00:31 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:59.252 23:00:31 -- target/delete_subsystem.sh@57 -- # kill -0 3160812 00:14:59.252 23:00:31 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:59.820 23:00:32 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:59.820 23:00:32 -- target/delete_subsystem.sh@57 -- # kill -0 3160812 00:14:59.820 23:00:32 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:00.388 23:00:32 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:00.388 23:00:32 -- target/delete_subsystem.sh@57 -- # kill -0 3160812 00:15:00.388 23:00:32 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:00.955 23:00:33 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:00.955 23:00:33 -- target/delete_subsystem.sh@57 -- # kill -0 3160812 00:15:00.955 23:00:33 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:01.213 23:00:33 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:01.213 23:00:33 -- target/delete_subsystem.sh@57 -- # kill -0 3160812 00:15:01.213 23:00:33 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:01.486 Initializing NVMe Controllers 00:15:01.486 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:01.486 Controller IO queue size 128, less than required. 00:15:01.486 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:01.486 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:15:01.486 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:15:01.486 Initialization complete. Launching workers. 
00:15:01.486 ======================================================== 00:15:01.486 Latency(us) 00:15:01.486 Device Information : IOPS MiB/s Average min max 00:15:01.486 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003347.16 1000205.95 1010191.44 00:15:01.486 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005185.18 1000292.65 1041472.09 00:15:01.486 ======================================================== 00:15:01.486 Total : 256.00 0.12 1004266.17 1000205.95 1041472.09 00:15:01.486 00:15:01.744 23:00:34 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:01.744 23:00:34 -- target/delete_subsystem.sh@57 -- # kill -0 3160812 00:15:01.744 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3160812) - No such process 00:15:01.744 23:00:34 -- target/delete_subsystem.sh@67 -- # wait 3160812 00:15:01.744 23:00:34 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:15:01.744 23:00:34 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:15:01.744 23:00:34 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:01.744 23:00:34 -- nvmf/common.sh@116 -- # sync 00:15:01.744 23:00:34 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:01.744 23:00:34 -- nvmf/common.sh@119 -- # set +e 00:15:01.744 23:00:34 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:01.744 23:00:34 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:01.744 rmmod nvme_tcp 00:15:01.744 rmmod nvme_fabrics 00:15:01.744 rmmod nvme_keyring 00:15:01.744 23:00:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:01.744 23:00:34 -- nvmf/common.sh@123 -- # set -e 00:15:01.744 23:00:34 -- nvmf/common.sh@124 -- # return 0 00:15:01.744 23:00:34 -- nvmf/common.sh@477 -- # '[' -n 3160030 ']' 00:15:01.744 23:00:34 -- nvmf/common.sh@478 -- # killprocess 3160030 00:15:01.744 23:00:34 -- common/autotest_common.sh@926 -- # '[' -z 3160030 ']' 00:15:01.744 23:00:34 
-- common/autotest_common.sh@930 -- # kill -0 3160030 00:15:01.744 23:00:34 -- common/autotest_common.sh@931 -- # uname 00:15:01.744 23:00:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:01.744 23:00:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3160030 00:15:02.003 23:00:34 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:02.003 23:00:34 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:02.003 23:00:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3160030' 00:15:02.003 killing process with pid 3160030 00:15:02.003 23:00:34 -- common/autotest_common.sh@945 -- # kill 3160030 00:15:02.003 23:00:34 -- common/autotest_common.sh@950 -- # wait 3160030 00:15:02.003 23:00:34 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:02.003 23:00:34 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:02.003 23:00:34 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:02.003 23:00:34 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:02.003 23:00:34 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:02.003 23:00:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:02.003 23:00:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:02.003 23:00:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:04.547 23:00:36 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:15:04.548 00:15:04.548 real 0m17.378s 00:15:04.548 user 0m29.693s 00:15:04.548 sys 0m6.739s 00:15:04.548 23:00:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:04.548 23:00:36 -- common/autotest_common.sh@10 -- # set +x 00:15:04.548 ************************************ 00:15:04.548 END TEST nvmf_delete_subsystem 00:15:04.548 ************************************ 00:15:04.548 23:00:36 -- nvmf/nvmf.sh@36 -- # [[ 1 -eq 1 ]] 00:15:04.548 23:00:36 -- nvmf/nvmf.sh@37 -- # run_test nvmf_nvme_cli 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:04.548 23:00:36 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:04.548 23:00:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:04.548 23:00:36 -- common/autotest_common.sh@10 -- # set +x 00:15:04.548 ************************************ 00:15:04.548 START TEST nvmf_nvme_cli 00:15:04.548 ************************************ 00:15:04.548 23:00:36 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:04.548 * Looking for test storage... 00:15:04.548 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:04.548 23:00:36 -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:04.548 23:00:36 -- nvmf/common.sh@7 -- # uname -s 00:15:04.548 23:00:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:04.548 23:00:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:04.548 23:00:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:04.548 23:00:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:04.548 23:00:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:04.548 23:00:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:04.548 23:00:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:04.548 23:00:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:04.548 23:00:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:04.548 23:00:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:04.548 23:00:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:15:04.548 23:00:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:15:04.548 23:00:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:04.548 
23:00:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:04.548 23:00:36 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:04.548 23:00:36 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:04.548 23:00:36 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:04.548 23:00:36 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:04.548 23:00:36 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:04.548 23:00:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.548 23:00:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.548 23:00:36 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.548 23:00:36 -- paths/export.sh@5 -- # export PATH 00:15:04.548 23:00:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.548 23:00:36 -- nvmf/common.sh@46 -- # : 0 00:15:04.548 23:00:36 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:04.548 23:00:36 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:04.548 23:00:36 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:04.548 23:00:36 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:04.548 23:00:36 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:04.548 23:00:36 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:04.548 23:00:36 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:04.548 23:00:36 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:04.548 23:00:36 -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:04.548 23:00:36 -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:04.548 23:00:36 -- target/nvme_cli.sh@14 
-- # devs=() 00:15:04.548 23:00:36 -- target/nvme_cli.sh@16 -- # nvmftestinit 00:15:04.548 23:00:36 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:04.548 23:00:36 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:04.548 23:00:36 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:04.548 23:00:36 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:04.548 23:00:36 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:04.548 23:00:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:04.548 23:00:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:04.548 23:00:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:04.548 23:00:36 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:04.548 23:00:36 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:04.548 23:00:36 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:04.548 23:00:36 -- common/autotest_common.sh@10 -- # set +x 00:15:11.114 23:00:42 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:11.114 23:00:42 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:11.114 23:00:42 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:11.114 23:00:42 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:11.114 23:00:42 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:11.114 23:00:42 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:11.114 23:00:42 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:11.114 23:00:42 -- nvmf/common.sh@294 -- # net_devs=() 00:15:11.114 23:00:42 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:11.114 23:00:42 -- nvmf/common.sh@295 -- # e810=() 00:15:11.114 23:00:42 -- nvmf/common.sh@295 -- # local -ga e810 00:15:11.114 23:00:42 -- nvmf/common.sh@296 -- # x722=() 00:15:11.114 23:00:42 -- nvmf/common.sh@296 -- # local -ga x722 00:15:11.114 23:00:42 -- nvmf/common.sh@297 -- # mlx=() 00:15:11.114 23:00:42 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:11.114 23:00:42 -- nvmf/common.sh@300 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:11.114 23:00:42 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:11.115 23:00:42 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:11.115 23:00:42 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:11.115 23:00:42 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:11.115 23:00:42 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:11.115 23:00:42 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:11.115 23:00:42 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:11.115 23:00:42 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:11.115 23:00:42 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:11.115 23:00:42 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:11.115 23:00:42 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:11.115 23:00:42 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:15:11.115 23:00:42 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:15:11.115 23:00:42 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:15:11.115 23:00:42 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:15:11.115 23:00:42 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:11.115 23:00:42 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:11.115 23:00:42 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:11.115 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:11.115 23:00:42 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:11.115 23:00:42 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:11.115 23:00:42 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:11.115 23:00:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:11.115 23:00:42 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:11.115 23:00:42 -- 
nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:11.115 23:00:42 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:11.115 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:11.115 23:00:42 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:11.115 23:00:42 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:11.115 23:00:42 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:11.115 23:00:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:11.115 23:00:42 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:11.115 23:00:42 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:11.115 23:00:42 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:15:11.115 23:00:42 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:15:11.115 23:00:42 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:11.115 23:00:42 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:11.115 23:00:42 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:11.115 23:00:42 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:11.115 23:00:42 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:11.115 Found net devices under 0000:af:00.0: cvl_0_0 00:15:11.115 23:00:42 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:11.115 23:00:42 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:11.115 23:00:42 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:11.115 23:00:42 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:11.115 23:00:42 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:11.115 23:00:42 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:11.115 Found net devices under 0000:af:00.1: cvl_0_1 00:15:11.115 23:00:42 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:11.115 23:00:42 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:11.115 23:00:42 -- nvmf/common.sh@402 
-- # is_hw=yes 00:15:11.115 23:00:42 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:11.115 23:00:42 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:15:11.115 23:00:42 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:15:11.115 23:00:42 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:11.115 23:00:42 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:11.115 23:00:42 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:11.115 23:00:42 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:15:11.115 23:00:42 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:11.115 23:00:42 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:11.115 23:00:42 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:15:11.115 23:00:42 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:11.115 23:00:42 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:11.115 23:00:42 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:15:11.115 23:00:42 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:15:11.115 23:00:42 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:15:11.115 23:00:42 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:11.115 23:00:42 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:11.115 23:00:42 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:11.115 23:00:42 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:15:11.115 23:00:42 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:11.115 23:00:42 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:11.115 23:00:42 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:11.115 23:00:42 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:15:11.115 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:11.115 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.168 ms 00:15:11.115 00:15:11.115 --- 10.0.0.2 ping statistics --- 00:15:11.115 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:11.115 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:15:11.115 23:00:42 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:11.115 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:11.115 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.251 ms 00:15:11.115 00:15:11.115 --- 10.0.0.1 ping statistics --- 00:15:11.115 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:11.115 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:15:11.115 23:00:42 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:11.115 23:00:42 -- nvmf/common.sh@410 -- # return 0 00:15:11.115 23:00:42 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:11.115 23:00:42 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:11.115 23:00:42 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:11.115 23:00:42 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:11.115 23:00:42 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:11.115 23:00:42 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:11.115 23:00:42 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:11.115 23:00:42 -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:15:11.115 23:00:42 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:11.115 23:00:42 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:11.115 23:00:42 -- common/autotest_common.sh@10 -- # set +x 00:15:11.115 23:00:42 -- nvmf/common.sh@469 -- # nvmfpid=3164949 00:15:11.115 23:00:42 -- nvmf/common.sh@470 -- # waitforlisten 3164949 00:15:11.115 23:00:42 -- common/autotest_common.sh@819 -- # '[' -z 3164949 ']' 00:15:11.115 23:00:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:11.115 23:00:42 -- 
common/autotest_common.sh@824 -- # local max_retries=100 00:15:11.115 23:00:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:11.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:11.115 23:00:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:11.115 23:00:42 -- common/autotest_common.sh@10 -- # set +x 00:15:11.115 23:00:42 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:11.115 [2024-07-24 23:00:42.761131] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:15:11.115 [2024-07-24 23:00:42.761182] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:11.115 EAL: No free 2048 kB hugepages reported on node 1 00:15:11.115 [2024-07-24 23:00:42.839893] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:11.115 [2024-07-24 23:00:42.878770] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:11.115 [2024-07-24 23:00:42.878880] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:11.115 [2024-07-24 23:00:42.878890] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:11.115 [2024-07-24 23:00:42.878899] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:11.115 [2024-07-24 23:00:42.878943] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:11.115 [2024-07-24 23:00:42.878961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:11.115 [2024-07-24 23:00:42.879178] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:11.115 [2024-07-24 23:00:42.879180] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:11.374 23:00:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:11.374 23:00:43 -- common/autotest_common.sh@852 -- # return 0 00:15:11.374 23:00:43 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:11.374 23:00:43 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:11.374 23:00:43 -- common/autotest_common.sh@10 -- # set +x 00:15:11.374 23:00:43 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:11.374 23:00:43 -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:11.374 23:00:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:11.374 23:00:43 -- common/autotest_common.sh@10 -- # set +x 00:15:11.374 [2024-07-24 23:00:43.602060] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:11.374 23:00:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:11.374 23:00:43 -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:11.374 23:00:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:11.374 23:00:43 -- common/autotest_common.sh@10 -- # set +x 00:15:11.374 Malloc0 00:15:11.374 23:00:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:11.375 23:00:43 -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:11.375 23:00:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:11.375 23:00:43 -- common/autotest_common.sh@10 -- # set +x 00:15:11.375 Malloc1 00:15:11.375 23:00:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
00:15:11.375 23:00:43 -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:15:11.375 23:00:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:11.375 23:00:43 -- common/autotest_common.sh@10 -- # set +x 00:15:11.375 23:00:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:11.375 23:00:43 -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:11.375 23:00:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:11.375 23:00:43 -- common/autotest_common.sh@10 -- # set +x 00:15:11.375 23:00:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:11.375 23:00:43 -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:11.375 23:00:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:11.375 23:00:43 -- common/autotest_common.sh@10 -- # set +x 00:15:11.375 23:00:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:11.375 23:00:43 -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:11.375 23:00:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:11.375 23:00:43 -- common/autotest_common.sh@10 -- # set +x 00:15:11.375 [2024-07-24 23:00:43.686505] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:11.375 23:00:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:11.375 23:00:43 -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:11.375 23:00:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:11.375 23:00:43 -- common/autotest_common.sh@10 -- # set +x 00:15:11.375 23:00:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:11.375 23:00:43 -- target/nvme_cli.sh@30 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 4420 00:15:11.634 00:15:11.634 Discovery Log Number of Records 2, Generation counter 2 00:15:11.634 =====Discovery Log Entry 0====== 00:15:11.634 trtype: tcp 00:15:11.634 adrfam: ipv4 00:15:11.634 subtype: current discovery subsystem 00:15:11.634 treq: not required 00:15:11.634 portid: 0 00:15:11.634 trsvcid: 4420 00:15:11.634 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:15:11.634 traddr: 10.0.0.2 00:15:11.634 eflags: explicit discovery connections, duplicate discovery information 00:15:11.634 sectype: none 00:15:11.634 =====Discovery Log Entry 1====== 00:15:11.634 trtype: tcp 00:15:11.634 adrfam: ipv4 00:15:11.634 subtype: nvme subsystem 00:15:11.634 treq: not required 00:15:11.634 portid: 0 00:15:11.634 trsvcid: 4420 00:15:11.634 subnqn: nqn.2016-06.io.spdk:cnode1 00:15:11.634 traddr: 10.0.0.2 00:15:11.634 eflags: none 00:15:11.634 sectype: none 00:15:11.634 23:00:43 -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:15:11.634 23:00:43 -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:15:11.634 23:00:43 -- nvmf/common.sh@510 -- # local dev _ 00:15:11.634 23:00:43 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:11.634 23:00:43 -- nvmf/common.sh@509 -- # nvme list 00:15:11.634 23:00:43 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:15:11.634 23:00:43 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:11.634 23:00:43 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:15:11.634 23:00:43 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:11.634 23:00:43 -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:15:11.634 23:00:43 -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:13.085 23:00:45 -- 
target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:13.085 23:00:45 -- common/autotest_common.sh@1177 -- # local i=0 00:15:13.085 23:00:45 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:15:13.085 23:00:45 -- common/autotest_common.sh@1179 -- # [[ -n 2 ]] 00:15:13.085 23:00:45 -- common/autotest_common.sh@1180 -- # nvme_device_counter=2 00:15:13.085 23:00:45 -- common/autotest_common.sh@1184 -- # sleep 2 00:15:14.993 23:00:47 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:15:14.993 23:00:47 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:15:14.993 23:00:47 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:15:14.993 23:00:47 -- common/autotest_common.sh@1186 -- # nvme_devices=2 00:15:14.993 23:00:47 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:15:14.993 23:00:47 -- common/autotest_common.sh@1187 -- # return 0 00:15:14.993 23:00:47 -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:15:14.993 23:00:47 -- nvmf/common.sh@510 -- # local dev _ 00:15:14.993 23:00:47 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:14.993 23:00:47 -- nvmf/common.sh@509 -- # nvme list 00:15:14.993 23:00:47 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:15:14.993 23:00:47 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:14.994 23:00:47 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:15:14.994 23:00:47 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:14.994 23:00:47 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:14.994 23:00:47 -- nvmf/common.sh@514 -- # echo /dev/nvme0n2 00:15:14.994 23:00:47 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:14.994 23:00:47 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:14.994 23:00:47 -- nvmf/common.sh@514 -- # echo /dev/nvme0n1 00:15:14.994 23:00:47 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:14.994 23:00:47 -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 
00:15:14.994 /dev/nvme0n1 ]] 00:15:14.994 23:00:47 -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:15:14.994 23:00:47 -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:15:14.994 23:00:47 -- nvmf/common.sh@510 -- # local dev _ 00:15:14.994 23:00:47 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:14.994 23:00:47 -- nvmf/common.sh@509 -- # nvme list 00:15:15.253 23:00:47 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:15:15.253 23:00:47 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:15.253 23:00:47 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:15:15.253 23:00:47 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:15.253 23:00:47 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:15.253 23:00:47 -- nvmf/common.sh@514 -- # echo /dev/nvme0n2 00:15:15.253 23:00:47 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:15.253 23:00:47 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:15.253 23:00:47 -- nvmf/common.sh@514 -- # echo /dev/nvme0n1 00:15:15.253 23:00:47 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:15.253 23:00:47 -- target/nvme_cli.sh@59 -- # nvme_num=2 00:15:15.253 23:00:47 -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:15.512 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:15.512 23:00:47 -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:15.512 23:00:47 -- common/autotest_common.sh@1198 -- # local i=0 00:15:15.512 23:00:47 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:15:15.512 23:00:47 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:15.512 23:00:47 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:15:15.512 23:00:47 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:15.513 23:00:47 -- common/autotest_common.sh@1210 -- # return 0 00:15:15.513 23:00:47 -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 
00:15:15.513 23:00:47 -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:15.513 23:00:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:15.513 23:00:47 -- common/autotest_common.sh@10 -- # set +x 00:15:15.513 23:00:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:15.513 23:00:47 -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:15:15.513 23:00:47 -- target/nvme_cli.sh@70 -- # nvmftestfini 00:15:15.513 23:00:47 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:15.513 23:00:47 -- nvmf/common.sh@116 -- # sync 00:15:15.513 23:00:47 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:15.513 23:00:47 -- nvmf/common.sh@119 -- # set +e 00:15:15.513 23:00:47 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:15.513 23:00:47 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:15.513 rmmod nvme_tcp 00:15:15.513 rmmod nvme_fabrics 00:15:15.513 rmmod nvme_keyring 00:15:15.513 23:00:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:15.513 23:00:47 -- nvmf/common.sh@123 -- # set -e 00:15:15.513 23:00:47 -- nvmf/common.sh@124 -- # return 0 00:15:15.513 23:00:47 -- nvmf/common.sh@477 -- # '[' -n 3164949 ']' 00:15:15.513 23:00:47 -- nvmf/common.sh@478 -- # killprocess 3164949 00:15:15.513 23:00:47 -- common/autotest_common.sh@926 -- # '[' -z 3164949 ']' 00:15:15.513 23:00:47 -- common/autotest_common.sh@930 -- # kill -0 3164949 00:15:15.513 23:00:47 -- common/autotest_common.sh@931 -- # uname 00:15:15.513 23:00:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:15.513 23:00:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3164949 00:15:15.513 23:00:47 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:15.513 23:00:47 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:15.513 23:00:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3164949' 00:15:15.513 killing process with pid 3164949 00:15:15.513 23:00:47 -- 
common/autotest_common.sh@945 -- # kill 3164949 00:15:15.513 23:00:47 -- common/autotest_common.sh@950 -- # wait 3164949 00:15:15.773 23:00:48 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:15.773 23:00:48 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:15.773 23:00:48 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:15.773 23:00:48 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:15.773 23:00:48 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:15.773 23:00:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:15.773 23:00:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:15.773 23:00:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:18.312 23:00:50 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:15:18.312 00:15:18.312 real 0m13.726s 00:15:18.312 user 0m22.223s 00:15:18.312 sys 0m5.515s 00:15:18.312 23:00:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:18.312 23:00:50 -- common/autotest_common.sh@10 -- # set +x 00:15:18.312 ************************************ 00:15:18.312 END TEST nvmf_nvme_cli 00:15:18.312 ************************************ 00:15:18.312 23:00:50 -- nvmf/nvmf.sh@39 -- # [[ 1 -eq 1 ]] 00:15:18.312 23:00:50 -- nvmf/nvmf.sh@40 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:18.312 23:00:50 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:18.312 23:00:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:18.312 23:00:50 -- common/autotest_common.sh@10 -- # set +x 00:15:18.312 ************************************ 00:15:18.312 START TEST nvmf_vfio_user 00:15:18.312 ************************************ 00:15:18.312 23:00:50 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:18.312 * Looking for test storage... 
00:15:18.312 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:18.312 23:00:50 -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:18.312 23:00:50 -- nvmf/common.sh@7 -- # uname -s 00:15:18.312 23:00:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:18.312 23:00:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:18.312 23:00:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:18.312 23:00:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:18.312 23:00:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:18.312 23:00:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:18.312 23:00:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:18.312 23:00:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:18.312 23:00:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:18.312 23:00:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:18.312 23:00:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:15:18.312 23:00:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:15:18.312 23:00:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:18.312 23:00:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:18.312 23:00:50 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:18.312 23:00:50 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:18.312 23:00:50 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:18.312 23:00:50 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:18.312 23:00:50 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:18.312 23:00:50 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:18.312 23:00:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:18.312 23:00:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:18.312 23:00:50 -- paths/export.sh@5 -- # export PATH 00:15:18.312 23:00:50 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:18.312 23:00:50 -- nvmf/common.sh@46 -- # : 0 00:15:18.312 23:00:50 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:18.312 23:00:50 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:18.312 23:00:50 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:18.312 23:00:50 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:18.312 23:00:50 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:18.312 23:00:50 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:18.312 23:00:50 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:18.312 23:00:50 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:18.312 23:00:50 -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:18.312 23:00:50 -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:18.312 23:00:50 -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:15:18.312 23:00:50 -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:18.312 23:00:50 -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:18.312 23:00:50 -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:18.312 23:00:50 -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:15:18.312 23:00:50 -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:15:18.312 23:00:50 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:15:18.312 23:00:50 -- target/nvmf_vfio_user.sh@52 -- # local 
transport_args= 00:15:18.312 23:00:50 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3166426 00:15:18.312 23:00:50 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3166426' 00:15:18.312 Process pid: 3166426 00:15:18.312 23:00:50 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:18.312 23:00:50 -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:15:18.312 23:00:50 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3166426 00:15:18.312 23:00:50 -- common/autotest_common.sh@819 -- # '[' -z 3166426 ']' 00:15:18.312 23:00:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:18.312 23:00:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:18.312 23:00:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:18.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:18.313 23:00:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:18.313 23:00:50 -- common/autotest_common.sh@10 -- # set +x 00:15:18.313 [2024-07-24 23:00:50.472193] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:15:18.313 [2024-07-24 23:00:50.472246] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:18.313 EAL: No free 2048 kB hugepages reported on node 1 00:15:18.313 [2024-07-24 23:00:50.545408] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:18.313 [2024-07-24 23:00:50.583503] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:18.313 [2024-07-24 23:00:50.583625] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:18.313 [2024-07-24 23:00:50.583635] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:18.313 [2024-07-24 23:00:50.583644] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:18.313 [2024-07-24 23:00:50.583694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:18.313 [2024-07-24 23:00:50.583794] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:18.313 [2024-07-24 23:00:50.583817] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:18.313 [2024-07-24 23:00:50.583819] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:18.881 23:00:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:18.882 23:00:51 -- common/autotest_common.sh@852 -- # return 0 00:15:18.882 23:00:51 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:20.291 23:00:52 -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:15:20.291 23:00:52 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:20.291 23:00:52 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:20.291 23:00:52 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 
$NUM_DEVICES) 00:15:20.291 23:00:52 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:20.291 23:00:52 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:20.291 Malloc1 00:15:20.291 23:00:52 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:20.550 23:00:52 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:20.808 23:00:53 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:20.808 23:00:53 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:20.808 23:00:53 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:20.808 23:00:53 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:21.066 Malloc2 00:15:21.066 23:00:53 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:21.324 23:00:53 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:21.324 23:00:53 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:21.584 23:00:53 -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:15:21.584 23:00:53 -- 
target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:15:21.584 23:00:53 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:21.584 23:00:53 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:21.584 23:00:53 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:15:21.584 23:00:53 -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:21.584 [2024-07-24 23:00:53.911349] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:15:21.584 [2024-07-24 23:00:53.911377] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3166985 ] 00:15:21.584 EAL: No free 2048 kB hugepages reported on node 1 00:15:21.584 [2024-07-24 23:00:53.941052] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:15:21.584 [2024-07-24 23:00:53.951067] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:21.584 [2024-07-24 23:00:53.951088] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fd362f40000 00:15:21.584 [2024-07-24 23:00:53.952064] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:21.584 [2024-07-24 23:00:53.953064] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:21.584 [2024-07-24 23:00:53.954065] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:21.584 [2024-07-24 23:00:53.955073] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:21.584 [2024-07-24 23:00:53.956076] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:21.584 [2024-07-24 23:00:53.957077] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:21.584 [2024-07-24 23:00:53.958083] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:21.584 [2024-07-24 23:00:53.959091] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:21.584 [2024-07-24 23:00:53.960095] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:21.584 [2024-07-24 23:00:53.960107] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fd361d06000 00:15:21.584 [2024-07-24 23:00:53.961002] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:21.584 [2024-07-24 23:00:53.973301] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:15:21.584 [2024-07-24 23:00:53.973329] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:15:21.584 [2024-07-24 23:00:53.976201] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:21.584 
[2024-07-24 23:00:53.976242] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:21.584 [2024-07-24 23:00:53.976318] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:15:21.584 [2024-07-24 23:00:53.976343] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:15:21.584 [2024-07-24 23:00:53.976350] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:15:21.584 [2024-07-24 23:00:53.977198] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:15:21.584 [2024-07-24 23:00:53.977209] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:15:21.584 [2024-07-24 23:00:53.977218] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:15:21.584 [2024-07-24 23:00:53.978207] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:21.584 [2024-07-24 23:00:53.978218] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:15:21.584 [2024-07-24 23:00:53.978226] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:15:21.584 [2024-07-24 23:00:53.979213] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:15:21.584 [2024-07-24 23:00:53.979224] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:21.584 [2024-07-24 23:00:53.980216] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:15:21.584 [2024-07-24 23:00:53.980227] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:15:21.584 [2024-07-24 23:00:53.980233] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:15:21.584 [2024-07-24 23:00:53.980241] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:21.584 [2024-07-24 23:00:53.980348] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:15:21.584 [2024-07-24 23:00:53.980354] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:21.585 [2024-07-24 23:00:53.980361] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:15:21.585 [2024-07-24 23:00:53.981226] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:15:21.585 [2024-07-24 23:00:53.982229] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:15:21.585 [2024-07-24 23:00:53.983234] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:21.585 [2024-07-24 23:00:53.984261] 
nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:21.585 [2024-07-24 23:00:53.985246] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:15:21.585 [2024-07-24 23:00:53.985255] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:21.585 [2024-07-24 23:00:53.985262] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:15:21.585 [2024-07-24 23:00:53.985280] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:15:21.585 [2024-07-24 23:00:53.985289] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:15:21.585 [2024-07-24 23:00:53.985306] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:21.585 [2024-07-24 23:00:53.985313] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:21.585 [2024-07-24 23:00:53.985328] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:21.585 [2024-07-24 23:00:53.985370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:21.585 [2024-07-24 23:00:53.985382] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:15:21.585 [2024-07-24 23:00:53.985388] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:15:21.585 [2024-07-24 23:00:53.985394] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:15:21.585 [2024-07-24 23:00:53.985400] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:21.585 [2024-07-24 23:00:53.985406] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:15:21.585 [2024-07-24 23:00:53.985412] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:15:21.585 [2024-07-24 23:00:53.985418] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:15:21.585 [2024-07-24 23:00:53.985429] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:15:21.585 [2024-07-24 23:00:53.985440] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:21.585 [2024-07-24 23:00:53.985453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:21.585 [2024-07-24 23:00:53.985466] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:21.585 [2024-07-24 23:00:53.985475] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:21.585 [2024-07-24 23:00:53.985484] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:21.585 [2024-07-24 
23:00:53.985493] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:21.585 [2024-07-24 23:00:53.985499] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:15:21.585 [2024-07-24 23:00:53.985509] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:21.585 [2024-07-24 23:00:53.985519] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:21.585 [2024-07-24 23:00:53.985527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:21.585 [2024-07-24 23:00:53.985534] nvme_ctrlr.c:2878:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:15:21.585 [2024-07-24 23:00:53.985541] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:21.585 [2024-07-24 23:00:53.985549] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:15:21.585 [2024-07-24 23:00:53.985559] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:15:21.585 [2024-07-24 23:00:53.985572] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:21.585 [2024-07-24 23:00:53.985584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 
dnr:0 00:15:21.585 [2024-07-24 23:00:53.985633] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:15:21.585 [2024-07-24 23:00:53.985642] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:15:21.585 [2024-07-24 23:00:53.985650] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:21.585 [2024-07-24 23:00:53.985656] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:21.585 [2024-07-24 23:00:53.985663] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:21.585 [2024-07-24 23:00:53.985677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:21.585 [2024-07-24 23:00:53.985765] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:15:21.585 [2024-07-24 23:00:53.985776] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:15:21.585 [2024-07-24 23:00:53.985785] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:15:21.585 [2024-07-24 23:00:53.985793] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:21.585 [2024-07-24 23:00:53.985799] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:21.585 [2024-07-24 23:00:53.985805] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 
0x2000002fb000 PRP2 0x0 00:15:21.585 [2024-07-24 23:00:53.985825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:21.585 [2024-07-24 23:00:53.985839] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:21.585 [2024-07-24 23:00:53.985848] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:21.585 [2024-07-24 23:00:53.985856] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:21.585 [2024-07-24 23:00:53.985861] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:21.585 [2024-07-24 23:00:53.985868] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:21.585 [2024-07-24 23:00:53.985878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:21.585 [2024-07-24 23:00:53.985888] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:21.585 [2024-07-24 23:00:53.985896] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:15:21.585 [2024-07-24 23:00:53.985905] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:15:21.585 [2024-07-24 23:00:53.985912] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 
ms) 00:15:21.585 [2024-07-24 23:00:53.985920] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:15:21.585 [2024-07-24 23:00:53.985927] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:15:21.585 [2024-07-24 23:00:53.985933] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:15:21.585 [2024-07-24 23:00:53.985939] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:15:21.585 [2024-07-24 23:00:53.985959] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:21.585 [2024-07-24 23:00:53.985970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:21.585 [2024-07-24 23:00:53.985983] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:21.585 [2024-07-24 23:00:53.985991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:21.585 [2024-07-24 23:00:53.986004] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:21.585 [2024-07-24 23:00:53.986015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:21.585 [2024-07-24 23:00:53.986028] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:21.585 [2024-07-24 23:00:53.986042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:21.585 [2024-07-24 23:00:53.986054] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:21.585 [2024-07-24 23:00:53.986060] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:21.585 [2024-07-24 23:00:53.986065] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:21.585 [2024-07-24 23:00:53.986069] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:21.585 [2024-07-24 23:00:53.986076] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:21.585 [2024-07-24 23:00:53.986084] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:21.585 [2024-07-24 23:00:53.986090] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:21.585 [2024-07-24 23:00:53.986097] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:21.585 [2024-07-24 23:00:53.986104] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:21.585 [2024-07-24 23:00:53.986110] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:21.585 [2024-07-24 23:00:53.986117] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:21.585 [2024-07-24 23:00:53.986125] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:21.585 [2024-07-24 23:00:53.986130] 
nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:21.585 [2024-07-24 23:00:53.986137] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:21.585 [2024-07-24 23:00:53.986145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:21.585 [2024-07-24 23:00:53.986162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:21.585 [2024-07-24 23:00:53.986173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:21.585 [2024-07-24 23:00:53.986181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:21.585 ===================================================== 00:15:21.585 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:21.585 ===================================================== 00:15:21.585 Controller Capabilities/Features 00:15:21.585 ================================ 00:15:21.585 Vendor ID: 4e58 00:15:21.585 Subsystem Vendor ID: 4e58 00:15:21.585 Serial Number: SPDK1 00:15:21.585 Model Number: SPDK bdev Controller 00:15:21.585 Firmware Version: 24.01.1 00:15:21.585 Recommended Arb Burst: 6 00:15:21.585 IEEE OUI Identifier: 8d 6b 50 00:15:21.585 Multi-path I/O 00:15:21.585 May have multiple subsystem ports: Yes 00:15:21.585 May have multiple controllers: Yes 00:15:21.585 Associated with SR-IOV VF: No 00:15:21.585 Max Data Transfer Size: 131072 00:15:21.585 Max Number of Namespaces: 32 00:15:21.585 Max Number of I/O Queues: 127 00:15:21.585 NVMe Specification Version (VS): 1.3 00:15:21.585 NVMe Specification Version (Identify): 1.3 00:15:21.585 Maximum Queue Entries: 256 00:15:21.585 
Contiguous Queues Required: Yes 00:15:21.585 Arbitration Mechanisms Supported 00:15:21.585 Weighted Round Robin: Not Supported 00:15:21.585 Vendor Specific: Not Supported 00:15:21.585 Reset Timeout: 15000 ms 00:15:21.585 Doorbell Stride: 4 bytes 00:15:21.585 NVM Subsystem Reset: Not Supported 00:15:21.585 Command Sets Supported 00:15:21.585 NVM Command Set: Supported 00:15:21.585 Boot Partition: Not Supported 00:15:21.585 Memory Page Size Minimum: 4096 bytes 00:15:21.585 Memory Page Size Maximum: 4096 bytes 00:15:21.585 Persistent Memory Region: Not Supported 00:15:21.585 Optional Asynchronous Events Supported 00:15:21.586 Namespace Attribute Notices: Supported 00:15:21.586 Firmware Activation Notices: Not Supported 00:15:21.586 ANA Change Notices: Not Supported 00:15:21.586 PLE Aggregate Log Change Notices: Not Supported 00:15:21.586 LBA Status Info Alert Notices: Not Supported 00:15:21.586 EGE Aggregate Log Change Notices: Not Supported 00:15:21.586 Normal NVM Subsystem Shutdown event: Not Supported 00:15:21.586 Zone Descriptor Change Notices: Not Supported 00:15:21.586 Discovery Log Change Notices: Not Supported 00:15:21.586 Controller Attributes 00:15:21.586 128-bit Host Identifier: Supported 00:15:21.586 Non-Operational Permissive Mode: Not Supported 00:15:21.586 NVM Sets: Not Supported 00:15:21.586 Read Recovery Levels: Not Supported 00:15:21.586 Endurance Groups: Not Supported 00:15:21.586 Predictable Latency Mode: Not Supported 00:15:21.586 Traffic Based Keep ALive: Not Supported 00:15:21.586 Namespace Granularity: Not Supported 00:15:21.586 SQ Associations: Not Supported 00:15:21.586 UUID List: Not Supported 00:15:21.586 Multi-Domain Subsystem: Not Supported 00:15:21.586 Fixed Capacity Management: Not Supported 00:15:21.586 Variable Capacity Management: Not Supported 00:15:21.586 Delete Endurance Group: Not Supported 00:15:21.586 Delete NVM Set: Not Supported 00:15:21.586 Extended LBA Formats Supported: Not Supported 00:15:21.586 Flexible Data Placement 
Supported: Not Supported 00:15:21.586 00:15:21.586 Controller Memory Buffer Support 00:15:21.586 ================================ 00:15:21.586 Supported: No 00:15:21.586 00:15:21.586 Persistent Memory Region Support 00:15:21.586 ================================ 00:15:21.586 Supported: No 00:15:21.586 00:15:21.586 Admin Command Set Attributes 00:15:21.586 ============================ 00:15:21.586 Security Send/Receive: Not Supported 00:15:21.586 Format NVM: Not Supported 00:15:21.586 Firmware Activate/Download: Not Supported 00:15:21.586 Namespace Management: Not Supported 00:15:21.586 Device Self-Test: Not Supported 00:15:21.586 Directives: Not Supported 00:15:21.586 NVMe-MI: Not Supported 00:15:21.586 Virtualization Management: Not Supported 00:15:21.586 Doorbell Buffer Config: Not Supported 00:15:21.586 Get LBA Status Capability: Not Supported 00:15:21.586 Command & Feature Lockdown Capability: Not Supported 00:15:21.586 Abort Command Limit: 4 00:15:21.586 Async Event Request Limit: 4 00:15:21.586 Number of Firmware Slots: N/A 00:15:21.586 Firmware Slot 1 Read-Only: N/A 00:15:21.586 Firmware Activation Without Reset: N/A 00:15:21.586 Multiple Update Detection Support: N/A 00:15:21.586 Firmware Update Granularity: No Information Provided 00:15:21.586 Per-Namespace SMART Log: No 00:15:21.586 Asymmetric Namespace Access Log Page: Not Supported 00:15:21.586 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:15:21.586 Command Effects Log Page: Supported 00:15:21.586 Get Log Page Extended Data: Supported 00:15:21.586 Telemetry Log Pages: Not Supported 00:15:21.586 Persistent Event Log Pages: Not Supported 00:15:21.586 Supported Log Pages Log Page: May Support 00:15:21.586 Commands Supported & Effects Log Page: Not Supported 00:15:21.586 Feature Identifiers & Effects Log Page:May Support 00:15:21.586 NVMe-MI Commands & Effects Log Page: May Support 00:15:21.586 Data Area 4 for Telemetry Log: Not Supported 00:15:21.586 Error Log Page Entries Supported: 128 00:15:21.586 Keep 
Alive: Supported 00:15:21.586 Keep Alive Granularity: 10000 ms 00:15:21.586 00:15:21.586 NVM Command Set Attributes 00:15:21.586 ========================== 00:15:21.586 Submission Queue Entry Size 00:15:21.586 Max: 64 00:15:21.586 Min: 64 00:15:21.586 Completion Queue Entry Size 00:15:21.586 Max: 16 00:15:21.586 Min: 16 00:15:21.586 Number of Namespaces: 32 00:15:21.586 Compare Command: Supported 00:15:21.586 Write Uncorrectable Command: Not Supported 00:15:21.586 Dataset Management Command: Supported 00:15:21.586 Write Zeroes Command: Supported 00:15:21.586 Set Features Save Field: Not Supported 00:15:21.586 Reservations: Not Supported 00:15:21.586 Timestamp: Not Supported 00:15:21.586 Copy: Supported 00:15:21.586 Volatile Write Cache: Present 00:15:21.586 Atomic Write Unit (Normal): 1 00:15:21.586 Atomic Write Unit (PFail): 1 00:15:21.586 Atomic Compare & Write Unit: 1 00:15:21.586 Fused Compare & Write: Supported 00:15:21.586 Scatter-Gather List 00:15:21.586 SGL Command Set: Supported (Dword aligned) 00:15:21.586 SGL Keyed: Not Supported 00:15:21.586 SGL Bit Bucket Descriptor: Not Supported 00:15:21.586 SGL Metadata Pointer: Not Supported 00:15:21.586 Oversized SGL: Not Supported 00:15:21.586 SGL Metadata Address: Not Supported 00:15:21.586 SGL Offset: Not Supported 00:15:21.586 Transport SGL Data Block: Not Supported 00:15:21.586 Replay Protected Memory Block: Not Supported 00:15:21.586 00:15:21.586 Firmware Slot Information 00:15:21.586 ========================= 00:15:21.586 Active slot: 1 00:15:21.586 Slot 1 Firmware Revision: 24.01.1 00:15:21.586 00:15:21.586 00:15:21.586 Commands Supported and Effects 00:15:21.586 ============================== 00:15:21.586 Admin Commands 00:15:21.586 -------------- 00:15:21.586 Get Log Page (02h): Supported 00:15:21.586 Identify (06h): Supported 00:15:21.586 Abort (08h): Supported 00:15:21.586 Set Features (09h): Supported 00:15:21.586 Get Features (0Ah): Supported 00:15:21.586 Asynchronous Event Request (0Ch): Supported 
00:15:21.586 Keep Alive (18h): Supported 00:15:21.586 I/O Commands 00:15:21.586 ------------ 00:15:21.586 Flush (00h): Supported LBA-Change 00:15:21.586 Write (01h): Supported LBA-Change 00:15:21.586 Read (02h): Supported 00:15:21.586 Compare (05h): Supported 00:15:21.586 Write Zeroes (08h): Supported LBA-Change 00:15:21.586 Dataset Management (09h): Supported LBA-Change 00:15:21.586 Copy (19h): Supported LBA-Change 00:15:21.586 Unknown (79h): Supported LBA-Change 00:15:21.586 Unknown (7Ah): Supported 00:15:21.586 00:15:21.586 Error Log 00:15:21.586 ========= 00:15:21.586 00:15:21.586 Arbitration 00:15:21.586 =========== 00:15:21.586 Arbitration Burst: 1 00:15:21.586 00:15:21.586 Power Management 00:15:21.586 ================ 00:15:21.586 Number of Power States: 1 00:15:21.586 Current Power State: Power State #0 00:15:21.586 Power State #0: 00:15:21.586 Max Power: 0.00 W 00:15:21.586 Non-Operational State: Operational 00:15:21.586 Entry Latency: Not Reported 00:15:21.586 Exit Latency: Not Reported 00:15:21.586 Relative Read Throughput: 0 00:15:21.586 Relative Read Latency: 0 00:15:21.586 Relative Write Throughput: 0 00:15:21.586 Relative Write Latency: 0 00:15:21.586 Idle Power: Not Reported 00:15:21.586 Active Power: Not Reported 00:15:21.586 Non-Operational Permissive Mode: Not Supported 00:15:21.586 00:15:21.586 Health Information 00:15:21.586 ================== 00:15:21.586 Critical Warnings: 00:15:21.586 Available Spare Space: OK 00:15:21.586 Temperature: OK 00:15:21.586 Device Reliability: OK 00:15:21.586 Read Only: No 00:15:21.586 Volatile Memory Backup: OK 00:15:21.586 Current Temperature: 0 Kelvin[2024-07-24 23:00:53.986278] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:21.586 [2024-07-24 23:00:53.986287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:21.586 [2024-07-24 23:00:53.986314] 
nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:15:21.586 [2024-07-24 23:00:53.986325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:21.586 [2024-07-24 23:00:53.986333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:21.586 [2024-07-24 23:00:53.986341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:21.586 [2024-07-24 23:00:53.986348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:21.586 [2024-07-24 23:00:53.988724] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:21.586 [2024-07-24 23:00:53.988736] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:15:21.586 [2024-07-24 23:00:53.989299] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:15:21.586 [2024-07-24 23:00:53.989307] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:15:21.586 [2024-07-24 23:00:53.990272] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:15:21.586 [2024-07-24 23:00:53.990284] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:15:21.586 [2024-07-24 23:00:53.990331] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:15:21.586 
[2024-07-24 23:00:53.993724] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:21.851 (-273 Celsius) 00:15:21.851 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:21.851 Available Spare: 0% 00:15:21.851 Available Spare Threshold: 0% 00:15:21.851 Life Percentage Used: 0% 00:15:21.851 Data Units Read: 0 00:15:21.851 Data Units Written: 0 00:15:21.851 Host Read Commands: 0 00:15:21.851 Host Write Commands: 0 00:15:21.851 Controller Busy Time: 0 minutes 00:15:21.851 Power Cycles: 0 00:15:21.851 Power On Hours: 0 hours 00:15:21.851 Unsafe Shutdowns: 0 00:15:21.851 Unrecoverable Media Errors: 0 00:15:21.851 Lifetime Error Log Entries: 0 00:15:21.851 Warning Temperature Time: 0 minutes 00:15:21.851 Critical Temperature Time: 0 minutes 00:15:21.851 00:15:21.851 Number of Queues 00:15:21.851 ================ 00:15:21.851 Number of I/O Submission Queues: 127 00:15:21.851 Number of I/O Completion Queues: 127 00:15:21.851 00:15:21.851 Active Namespaces 00:15:21.851 ================= 00:15:21.851 Namespace ID:1 00:15:21.851 Error Recovery Timeout: Unlimited 00:15:21.851 Command Set Identifier: NVM (00h) 00:15:21.851 Deallocate: Supported 00:15:21.851 Deallocated/Unwritten Error: Not Supported 00:15:21.851 Deallocated Read Value: Unknown 00:15:21.851 Deallocate in Write Zeroes: Not Supported 00:15:21.851 Deallocated Guard Field: 0xFFFF 00:15:21.851 Flush: Supported 00:15:21.851 Reservation: Supported 00:15:21.851 Namespace Sharing Capabilities: Multiple Controllers 00:15:21.851 Size (in LBAs): 131072 (0GiB) 00:15:21.851 Capacity (in LBAs): 131072 (0GiB) 00:15:21.851 Utilization (in LBAs): 131072 (0GiB) 00:15:21.851 NGUID: 367D8BA2506C4965B7F68653FD89E6B4 00:15:21.851 UUID: 367d8ba2-506c-4965-b7f6-8653fd89e6b4 00:15:21.851 Thin Provisioning: Not Supported 00:15:21.851 Per-NS Atomic Units: Yes 00:15:21.851 Atomic Boundary Size (Normal): 0 00:15:21.851 Atomic Boundary Size (PFail): 0 
00:15:21.851 Atomic Boundary Offset: 0 00:15:21.851 Maximum Single Source Range Length: 65535 00:15:21.851 Maximum Copy Length: 65535 00:15:21.851 Maximum Source Range Count: 1 00:15:21.851 NGUID/EUI64 Never Reused: No 00:15:21.851 Namespace Write Protected: No 00:15:21.851 Number of LBA Formats: 1 00:15:21.851 Current LBA Format: LBA Format #00 00:15:21.851 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:21.851 00:15:21.851 23:00:54 -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:21.851 EAL: No free 2048 kB hugepages reported on node 1 00:15:27.136 Initializing NVMe Controllers 00:15:27.136 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:27.136 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:27.136 Initialization complete. Launching workers. 
00:15:27.136 ======================================================== 00:15:27.136 Latency(us) 00:15:27.136 Device Information : IOPS MiB/s Average min max 00:15:27.136 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39939.95 156.02 3204.65 914.02 6721.91 00:15:27.136 ======================================================== 00:15:27.136 Total : 39939.95 156.02 3204.65 914.02 6721.91 00:15:27.136 00:15:27.136 23:00:59 -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:27.136 EAL: No free 2048 kB hugepages reported on node 1 00:15:32.407 Initializing NVMe Controllers 00:15:32.407 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:32.407 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:32.407 Initialization complete. Launching workers. 
00:15:32.407 ======================================================== 00:15:32.407 Latency(us) 00:15:32.407 Device Information : IOPS MiB/s Average min max 00:15:32.407 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16054.59 62.71 7978.21 4983.03 10977.78 00:15:32.407 ======================================================== 00:15:32.407 Total : 16054.59 62.71 7978.21 4983.03 10977.78 00:15:32.407 00:15:32.407 23:01:04 -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:32.407 EAL: No free 2048 kB hugepages reported on node 1 00:15:37.750 Initializing NVMe Controllers 00:15:37.750 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:37.750 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:37.750 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:15:37.750 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:15:37.750 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:15:37.750 Initialization complete. Launching workers. 
00:15:37.750 Starting thread on core 2 00:15:37.750 Starting thread on core 3 00:15:37.750 Starting thread on core 1 00:15:37.750 23:01:09 -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:15:37.750 EAL: No free 2048 kB hugepages reported on node 1 00:15:41.040 Initializing NVMe Controllers 00:15:41.040 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:41.040 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:41.040 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:15:41.040 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:15:41.040 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:15:41.040 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:15:41.040 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:41.040 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:41.040 Initialization complete. Launching workers. 
00:15:41.040 Starting thread on core 1 with urgent priority queue 00:15:41.040 Starting thread on core 2 with urgent priority queue 00:15:41.040 Starting thread on core 3 with urgent priority queue 00:15:41.040 Starting thread on core 0 with urgent priority queue 00:15:41.040 SPDK bdev Controller (SPDK1 ) core 0: 9080.67 IO/s 11.01 secs/100000 ios 00:15:41.040 SPDK bdev Controller (SPDK1 ) core 1: 7921.33 IO/s 12.62 secs/100000 ios 00:15:41.040 SPDK bdev Controller (SPDK1 ) core 2: 10221.00 IO/s 9.78 secs/100000 ios 00:15:41.040 SPDK bdev Controller (SPDK1 ) core 3: 7902.67 IO/s 12.65 secs/100000 ios 00:15:41.040 ======================================================== 00:15:41.040 00:15:41.040 23:01:13 -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:41.040 EAL: No free 2048 kB hugepages reported on node 1 00:15:41.299 Initializing NVMe Controllers 00:15:41.299 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:41.299 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:41.299 Namespace ID: 1 size: 0GB 00:15:41.299 Initialization complete. 00:15:41.299 INFO: using host memory buffer for IO 00:15:41.299 Hello world! 00:15:41.299 23:01:13 -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:41.299 EAL: No free 2048 kB hugepages reported on node 1 00:15:42.676 Initializing NVMe Controllers 00:15:42.676 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:42.676 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:42.676 Initialization complete. Launching workers. 
00:15:42.676 submit (in ns) avg, min, max = 6065.7, 3041.6, 3999608.0 00:15:42.676 complete (in ns) avg, min, max = 18435.4, 1685.6, 3998882.4 00:15:42.676 00:15:42.676 Submit histogram 00:15:42.676 ================ 00:15:42.676 Range in us Cumulative Count 00:15:42.676 3.034 - 3.046: 0.0290% ( 5) 00:15:42.676 3.046 - 3.059: 0.1626% ( 23) 00:15:42.676 3.059 - 3.072: 0.4123% ( 43) 00:15:42.676 3.072 - 3.085: 0.8187% ( 70) 00:15:42.676 3.085 - 3.098: 1.5561% ( 127) 00:15:42.676 3.098 - 3.110: 2.7291% ( 202) 00:15:42.676 3.110 - 3.123: 4.7555% ( 349) 00:15:42.676 3.123 - 3.136: 7.1072% ( 405) 00:15:42.676 3.136 - 3.149: 10.4576% ( 577) 00:15:42.676 3.149 - 3.162: 14.6963% ( 730) 00:15:42.676 3.162 - 3.174: 19.5854% ( 842) 00:15:42.676 3.174 - 3.187: 24.5384% ( 853) 00:15:42.676 3.187 - 3.200: 30.1707% ( 970) 00:15:42.676 3.200 - 3.213: 35.6753% ( 948) 00:15:42.676 3.213 - 3.226: 41.2612% ( 962) 00:15:42.676 3.226 - 3.238: 46.4000% ( 885) 00:15:42.676 3.238 - 3.251: 51.1787% ( 823) 00:15:42.676 3.251 - 3.264: 55.5801% ( 758) 00:15:42.676 3.264 - 3.277: 59.7608% ( 720) 00:15:42.676 3.277 - 3.302: 68.5693% ( 1517) 00:15:42.676 3.302 - 3.328: 75.4036% ( 1177) 00:15:42.676 3.328 - 3.354: 81.9359% ( 1125) 00:15:42.676 3.354 - 3.379: 86.0876% ( 715) 00:15:42.676 3.379 - 3.405: 87.7598% ( 288) 00:15:42.676 3.405 - 3.430: 88.6657% ( 156) 00:15:42.676 3.430 - 3.456: 89.5076% ( 145) 00:15:42.676 3.456 - 3.482: 90.7850% ( 220) 00:15:42.676 3.482 - 3.507: 92.2018% ( 244) 00:15:42.676 3.507 - 3.533: 93.6709% ( 253) 00:15:42.676 3.533 - 3.558: 95.1167% ( 249) 00:15:42.676 3.558 - 3.584: 96.4871% ( 236) 00:15:42.676 3.584 - 3.610: 97.7064% ( 210) 00:15:42.676 3.610 - 3.635: 98.6529% ( 163) 00:15:42.676 3.635 - 3.661: 99.1232% ( 81) 00:15:42.676 3.661 - 3.686: 99.4019% ( 48) 00:15:42.676 3.686 - 3.712: 99.5761% ( 30) 00:15:42.676 3.712 - 3.738: 99.6574% ( 14) 00:15:42.676 3.738 - 3.763: 99.6632% ( 1) 00:15:42.676 3.763 - 3.789: 99.6690% ( 1) 00:15:42.676 3.789 - 3.814: 99.6748% ( 1) 
00:15:42.676 4.070 - 4.096: 99.6806% ( 1) 00:15:42.676 5.760 - 5.786: 99.6864% ( 1) 00:15:42.676 5.786 - 5.811: 99.6923% ( 1) 00:15:42.676 5.914 - 5.939: 99.6981% ( 1) 00:15:42.676 5.965 - 5.990: 99.7039% ( 1) 00:15:42.676 6.221 - 6.246: 99.7097% ( 1) 00:15:42.676 6.298 - 6.323: 99.7155% ( 1) 00:15:42.676 6.374 - 6.400: 99.7213% ( 1) 00:15:42.676 6.400 - 6.426: 99.7329% ( 2) 00:15:42.676 6.477 - 6.502: 99.7387% ( 1) 00:15:42.676 6.554 - 6.605: 99.7561% ( 3) 00:15:42.676 6.605 - 6.656: 99.7735% ( 3) 00:15:42.676 6.656 - 6.707: 99.7852% ( 2) 00:15:42.676 6.758 - 6.810: 99.8084% ( 4) 00:15:42.676 6.861 - 6.912: 99.8142% ( 1) 00:15:42.676 7.014 - 7.066: 99.8316% ( 3) 00:15:42.676 7.066 - 7.117: 99.8432% ( 2) 00:15:42.676 7.168 - 7.219: 99.8490% ( 1) 00:15:42.676 7.219 - 7.270: 99.8606% ( 2) 00:15:42.676 7.270 - 7.322: 99.8723% ( 2) 00:15:42.676 7.373 - 7.424: 99.8781% ( 1) 00:15:42.676 7.475 - 7.526: 99.8955% ( 3) 00:15:42.676 7.680 - 7.731: 99.9013% ( 1) 00:15:42.676 7.885 - 7.936: 99.9071% ( 1) 00:15:42.676 8.243 - 8.294: 99.9129% ( 1) 00:15:42.676 8.858 - 8.909: 99.9187% ( 1) 00:15:42.676 10.496 - 10.547: 99.9245% ( 1) 00:15:42.676 11.520 - 11.571: 99.9303% ( 1) 00:15:42.676 3984.589 - 4010.803: 100.0000% ( 12) 00:15:42.676 00:15:42.676 Complete histogram 00:15:42.676 ================== 00:15:42.676 Range in us Cumulative Count 00:15:42.676 1.677 - 1.690: 0.0116% ( 2) 00:15:42.676 1.690 - 1.702: 1.9568% ( 335) 00:15:42.676 1.702 - 1.715: 17.7506% ( 2720) 00:15:42.676 1.715 - 1.728: 38.0211% ( 3491) 00:15:42.676 1.728 - 1.741: 45.4941% ( 1287) 00:15:42.676 1.741 - 1.754: 54.9762% ( 1633) 00:15:42.676 1.754 - 1.766: 76.9307% ( 3781) 00:15:42.676 1.766 - 1.779: 92.4631% ( 2675) 00:15:42.676 1.779 - 1.792: 97.2361% ( 822) 00:15:42.676 1.792 - 1.805: 98.6587% ( 245) 00:15:42.676 1.805 - 1.818: 99.1348% ( 82) 00:15:42.676 1.818 - 1.830: 99.2103% ( 13) 00:15:42.676 1.830 - 1.843: 99.2568% ( 8) 00:15:42.676 1.843 - 1.856: 99.2800% ( 4) 00:15:42.676 1.856 - 1.869: 99.2916% ( 
2) 00:15:42.676 1.869 - 1.882: 99.3148% ( 4) 00:15:42.676 1.882 - 1.894: 99.3787% ( 11) 00:15:42.676 1.894 - 1.907: 99.4019% ( 4) 00:15:42.676 1.907 - 1.920: 99.4077% ( 1) 00:15:42.676 1.920 - 1.933: 99.4252% ( 3) 00:15:42.676 1.933 - 1.946: 99.4368% ( 2) 00:15:42.676 1.946 - 1.958: 99.4426% ( 1) 00:15:42.676 1.997 - 2.010: 99.4484% ( 1) 00:15:42.676 2.010 - 2.022: 99.4542% ( 1) 00:15:42.676 4.352 - 4.378: 99.4600% ( 1) 00:15:42.676 4.506 - 4.531: 99.4658% ( 1) 00:15:42.676 4.557 - 4.582: 99.4716% ( 1) 00:15:42.676 4.941 - 4.966: 99.4774% ( 1) 00:15:42.676 5.299 - 5.325: 99.4890% ( 2) 00:15:42.676 5.402 - 5.427: 99.5006% ( 2) 00:15:42.676 5.504 - 5.530: 99.5064% ( 1) 00:15:42.676 5.606 - 5.632: 99.5123% ( 1) 00:15:42.677 5.632 - 5.658: 99.5181% ( 1) 00:15:42.677 5.786 - 5.811: 99.5239% ( 1) 00:15:42.677 5.862 - 5.888: 99.5297% ( 1) 00:15:42.677 6.093 - 6.118: 99.5355% ( 1) 00:15:42.677 6.144 - 6.170: 99.5413% ( 1) 00:15:42.677 6.170 - 6.195: 99.5471% ( 1) 00:15:42.677 6.451 - 6.477: 99.5529% ( 1) 00:15:42.677 6.707 - 6.758: 99.5587% ( 1) 00:15:42.677 7.270 - 7.322: 99.5645% ( 1) 00:15:42.677 7.578 - 7.629: 99.5703% ( 1) 00:15:42.677 9.933 - 9.984: 99.5761% ( 1) 00:15:42.677 13.926 - 14.029: 99.5819% ( 1) 00:15:42.677 3827.302 - 3853.517: 99.5877% ( 1) 00:15:42.677 3853.517 - 3879.731: 99.5935% ( 1) 00:15:42.677 3984.589 - 4010.803: 100.0000% ( 70) 00:15:42.677 00:15:42.677 23:01:14 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:15:42.677 23:01:14 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:42.677 23:01:14 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:15:42.677 23:01:14 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:15:42.677 23:01:14 -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:42.677 [2024-07-24 23:01:14.991224] 
nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:15:42.677 [ 00:15:42.677 { 00:15:42.677 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:42.677 "subtype": "Discovery", 00:15:42.677 "listen_addresses": [], 00:15:42.677 "allow_any_host": true, 00:15:42.677 "hosts": [] 00:15:42.677 }, 00:15:42.677 { 00:15:42.677 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:42.677 "subtype": "NVMe", 00:15:42.677 "listen_addresses": [ 00:15:42.677 { 00:15:42.677 "transport": "VFIOUSER", 00:15:42.677 "trtype": "VFIOUSER", 00:15:42.677 "adrfam": "IPv4", 00:15:42.677 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:42.677 "trsvcid": "0" 00:15:42.677 } 00:15:42.677 ], 00:15:42.677 "allow_any_host": true, 00:15:42.677 "hosts": [], 00:15:42.677 "serial_number": "SPDK1", 00:15:42.677 "model_number": "SPDK bdev Controller", 00:15:42.677 "max_namespaces": 32, 00:15:42.677 "min_cntlid": 1, 00:15:42.677 "max_cntlid": 65519, 00:15:42.677 "namespaces": [ 00:15:42.677 { 00:15:42.677 "nsid": 1, 00:15:42.677 "bdev_name": "Malloc1", 00:15:42.677 "name": "Malloc1", 00:15:42.677 "nguid": "367D8BA2506C4965B7F68653FD89E6B4", 00:15:42.677 "uuid": "367d8ba2-506c-4965-b7f6-8653fd89e6b4" 00:15:42.677 } 00:15:42.677 ] 00:15:42.677 }, 00:15:42.677 { 00:15:42.677 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:42.677 "subtype": "NVMe", 00:15:42.677 "listen_addresses": [ 00:15:42.677 { 00:15:42.677 "transport": "VFIOUSER", 00:15:42.677 "trtype": "VFIOUSER", 00:15:42.677 "adrfam": "IPv4", 00:15:42.677 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:42.677 "trsvcid": "0" 00:15:42.677 } 00:15:42.677 ], 00:15:42.677 "allow_any_host": true, 00:15:42.677 "hosts": [], 00:15:42.677 "serial_number": "SPDK2", 00:15:42.677 "model_number": "SPDK bdev Controller", 00:15:42.677 "max_namespaces": 32, 00:15:42.677 "min_cntlid": 1, 00:15:42.677 "max_cntlid": 65519, 00:15:42.677 "namespaces": 
[ 00:15:42.677 { 00:15:42.677 "nsid": 1, 00:15:42.677 "bdev_name": "Malloc2", 00:15:42.677 "name": "Malloc2", 00:15:42.677 "nguid": "73BFCD3631C5480D91CB5DB8E80600E9", 00:15:42.677 "uuid": "73bfcd36-31c5-480d-91cb-5db8e80600e9" 00:15:42.677 } 00:15:42.677 ] 00:15:42.677 } 00:15:42.677 ] 00:15:42.677 23:01:15 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:42.677 23:01:15 -- target/nvmf_vfio_user.sh@34 -- # aerpid=3170743 00:15:42.677 23:01:15 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:42.677 23:01:15 -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:15:42.677 23:01:15 -- common/autotest_common.sh@1244 -- # local i=0 00:15:42.677 23:01:15 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:42.677 23:01:15 -- common/autotest_common.sh@1251 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:42.677 23:01:15 -- common/autotest_common.sh@1255 -- # return 0 00:15:42.677 23:01:15 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:42.677 23:01:15 -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:15:42.677 EAL: No free 2048 kB hugepages reported on node 1 00:15:42.936 Malloc3 00:15:42.936 23:01:15 -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:15:43.195 23:01:15 -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:43.195 Asynchronous Event Request test 00:15:43.195 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:43.195 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:43.195 Registering asynchronous event callbacks... 00:15:43.195 Starting namespace attribute notice tests for all controllers... 00:15:43.195 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:43.195 aer_cb - Changed Namespace 00:15:43.195 Cleaning up... 
00:15:43.195 [ 00:15:43.195 { 00:15:43.195 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:43.195 "subtype": "Discovery", 00:15:43.195 "listen_addresses": [], 00:15:43.195 "allow_any_host": true, 00:15:43.195 "hosts": [] 00:15:43.195 }, 00:15:43.195 { 00:15:43.195 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:43.195 "subtype": "NVMe", 00:15:43.195 "listen_addresses": [ 00:15:43.195 { 00:15:43.195 "transport": "VFIOUSER", 00:15:43.195 "trtype": "VFIOUSER", 00:15:43.195 "adrfam": "IPv4", 00:15:43.195 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:43.195 "trsvcid": "0" 00:15:43.195 } 00:15:43.195 ], 00:15:43.195 "allow_any_host": true, 00:15:43.195 "hosts": [], 00:15:43.195 "serial_number": "SPDK1", 00:15:43.195 "model_number": "SPDK bdev Controller", 00:15:43.195 "max_namespaces": 32, 00:15:43.195 "min_cntlid": 1, 00:15:43.195 "max_cntlid": 65519, 00:15:43.195 "namespaces": [ 00:15:43.195 { 00:15:43.195 "nsid": 1, 00:15:43.195 "bdev_name": "Malloc1", 00:15:43.195 "name": "Malloc1", 00:15:43.195 "nguid": "367D8BA2506C4965B7F68653FD89E6B4", 00:15:43.195 "uuid": "367d8ba2-506c-4965-b7f6-8653fd89e6b4" 00:15:43.195 }, 00:15:43.195 { 00:15:43.195 "nsid": 2, 00:15:43.195 "bdev_name": "Malloc3", 00:15:43.195 "name": "Malloc3", 00:15:43.195 "nguid": "5FE58D202B4B4AA8919F094EC0D7A285", 00:15:43.195 "uuid": "5fe58d20-2b4b-4aa8-919f-094ec0d7a285" 00:15:43.195 } 00:15:43.195 ] 00:15:43.195 }, 00:15:43.195 { 00:15:43.195 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:43.195 "subtype": "NVMe", 00:15:43.195 "listen_addresses": [ 00:15:43.195 { 00:15:43.195 "transport": "VFIOUSER", 00:15:43.195 "trtype": "VFIOUSER", 00:15:43.195 "adrfam": "IPv4", 00:15:43.195 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:43.195 "trsvcid": "0" 00:15:43.195 } 00:15:43.195 ], 00:15:43.195 "allow_any_host": true, 00:15:43.195 "hosts": [], 00:15:43.195 "serial_number": "SPDK2", 00:15:43.195 "model_number": "SPDK bdev Controller", 00:15:43.195 "max_namespaces": 32, 00:15:43.195 
"min_cntlid": 1, 00:15:43.195 "max_cntlid": 65519, 00:15:43.195 "namespaces": [ 00:15:43.195 { 00:15:43.195 "nsid": 1, 00:15:43.195 "bdev_name": "Malloc2", 00:15:43.195 "name": "Malloc2", 00:15:43.195 "nguid": "73BFCD3631C5480D91CB5DB8E80600E9", 00:15:43.195 "uuid": "73bfcd36-31c5-480d-91cb-5db8e80600e9" 00:15:43.195 } 00:15:43.195 ] 00:15:43.195 } 00:15:43.195 ] 00:15:43.195 23:01:15 -- target/nvmf_vfio_user.sh@44 -- # wait 3170743 00:15:43.195 23:01:15 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:43.195 23:01:15 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:43.195 23:01:15 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:15:43.195 23:01:15 -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:43.195 [2024-07-24 23:01:15.561188] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:15:43.196 [2024-07-24 23:01:15.561221] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3170753 ] 00:15:43.196 EAL: No free 2048 kB hugepages reported on node 1 00:15:43.196 [2024-07-24 23:01:15.589900] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:15:43.196 [2024-07-24 23:01:15.601515] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:43.196 [2024-07-24 23:01:15.601537] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fedb7e8a000 00:15:43.196 [2024-07-24 23:01:15.602520] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:43.196 [2024-07-24 23:01:15.603521] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:43.196 [2024-07-24 23:01:15.604525] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:43.196 [2024-07-24 23:01:15.605531] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:43.196 [2024-07-24 23:01:15.606530] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:43.196 [2024-07-24 23:01:15.607539] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:43.196 [2024-07-24 23:01:15.608551] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, 
Flags 0x3, Cap offset 0 00:15:43.196 [2024-07-24 23:01:15.609553] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:43.196 [2024-07-24 23:01:15.610563] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:43.196 [2024-07-24 23:01:15.610576] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fedb6c50000 00:15:43.196 [2024-07-24 23:01:15.611468] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:43.196 [2024-07-24 23:01:15.622801] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:15:43.196 [2024-07-24 23:01:15.622824] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:15:43.457 [2024-07-24 23:01:15.627907] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:43.457 [2024-07-24 23:01:15.627942] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:43.457 [2024-07-24 23:01:15.628009] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:15:43.457 [2024-07-24 23:01:15.628027] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:15:43.457 [2024-07-24 23:01:15.628033] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:15:43.457 [2024-07-24 23:01:15.628909] nvme_vfio_user.c: 
83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:15:43.457 [2024-07-24 23:01:15.628920] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:15:43.457 [2024-07-24 23:01:15.628928] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:15:43.457 [2024-07-24 23:01:15.629914] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:43.457 [2024-07-24 23:01:15.629924] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:15:43.457 [2024-07-24 23:01:15.629933] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:15:43.457 [2024-07-24 23:01:15.630919] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:15:43.457 [2024-07-24 23:01:15.630930] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:43.457 [2024-07-24 23:01:15.631929] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:15:43.457 [2024-07-24 23:01:15.631939] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:15:43.457 [2024-07-24 23:01:15.631946] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:15:43.457 [2024-07-24 23:01:15.631954] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:43.457 [2024-07-24 23:01:15.632063] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:15:43.457 [2024-07-24 23:01:15.632069] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:43.457 [2024-07-24 23:01:15.632076] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:15:43.457 [2024-07-24 23:01:15.632932] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:15:43.457 [2024-07-24 23:01:15.633936] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:15:43.457 [2024-07-24 23:01:15.634946] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:43.457 [2024-07-24 23:01:15.635970] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:43.457 [2024-07-24 23:01:15.636958] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:15:43.457 [2024-07-24 23:01:15.636967] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:43.457 [2024-07-24 23:01:15.636974] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:15:43.457 [2024-07-24 23:01:15.636992] 
nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:15:43.457 [2024-07-24 23:01:15.637002] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:15:43.458 [2024-07-24 23:01:15.637013] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:43.458 [2024-07-24 23:01:15.637020] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:43.458 [2024-07-24 23:01:15.637033] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:43.458 [2024-07-24 23:01:15.645723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:43.458 [2024-07-24 23:01:15.645737] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:15:43.458 [2024-07-24 23:01:15.645743] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:15:43.458 [2024-07-24 23:01:15.645749] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:15:43.458 [2024-07-24 23:01:15.645755] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:43.458 [2024-07-24 23:01:15.645762] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:15:43.458 [2024-07-24 23:01:15.645768] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:15:43.458 [2024-07-24 23:01:15.645774] 
nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:15:43.458 [2024-07-24 23:01:15.645785] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:15:43.458 [2024-07-24 23:01:15.645796] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:43.458 [2024-07-24 23:01:15.653722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:43.458 [2024-07-24 23:01:15.653738] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:43.458 [2024-07-24 23:01:15.653747] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:43.458 [2024-07-24 23:01:15.653756] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:43.458 [2024-07-24 23:01:15.653765] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:43.458 [2024-07-24 23:01:15.653771] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:15:43.458 [2024-07-24 23:01:15.653781] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:43.458 [2024-07-24 23:01:15.653791] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:43.458 [2024-07-24 23:01:15.661721] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:43.458 [2024-07-24 23:01:15.661731] nvme_ctrlr.c:2878:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:15:43.458 [2024-07-24 23:01:15.661737] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:43.458 [2024-07-24 23:01:15.661746] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:15:43.458 [2024-07-24 23:01:15.661755] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:15:43.458 [2024-07-24 23:01:15.661764] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:43.458 [2024-07-24 23:01:15.669751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:43.458 [2024-07-24 23:01:15.669802] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:15:43.458 [2024-07-24 23:01:15.669811] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:15:43.458 [2024-07-24 23:01:15.669820] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:43.458 [2024-07-24 23:01:15.669826] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:43.458 [2024-07-24 23:01:15.669833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:43.458 [2024-07-24 23:01:15.677722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:43.458 [2024-07-24 23:01:15.677736] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:15:43.458 [2024-07-24 23:01:15.677750] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:15:43.458 [2024-07-24 23:01:15.677759] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:15:43.458 [2024-07-24 23:01:15.677767] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:43.458 [2024-07-24 23:01:15.677775] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:43.458 [2024-07-24 23:01:15.677782] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:43.458 [2024-07-24 23:01:15.685721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:43.458 [2024-07-24 23:01:15.685739] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:43.458 [2024-07-24 23:01:15.685748] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:43.458 [2024-07-24 23:01:15.685756] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:43.458 
[2024-07-24 23:01:15.685762] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:43.458 [2024-07-24 23:01:15.685769] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:43.458 [2024-07-24 23:01:15.693722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:43.458 [2024-07-24 23:01:15.693733] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:43.458 [2024-07-24 23:01:15.693742] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:15:43.458 [2024-07-24 23:01:15.693751] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:15:43.458 [2024-07-24 23:01:15.693758] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:43.458 [2024-07-24 23:01:15.693765] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:15:43.458 [2024-07-24 23:01:15.693771] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:15:43.458 [2024-07-24 23:01:15.693777] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:15:43.458 [2024-07-24 23:01:15.693783] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:15:43.458 [2024-07-24 
23:01:15.693801] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:43.458 [2024-07-24 23:01:15.701720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:43.458 [2024-07-24 23:01:15.701735] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:43.458 [2024-07-24 23:01:15.709722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:43.458 [2024-07-24 23:01:15.709737] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:43.458 [2024-07-24 23:01:15.717722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:43.458 [2024-07-24 23:01:15.717737] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:43.458 [2024-07-24 23:01:15.725722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:43.458 [2024-07-24 23:01:15.725737] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:43.458 [2024-07-24 23:01:15.725746] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:43.458 [2024-07-24 23:01:15.725751] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:43.458 [2024-07-24 23:01:15.725756] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:43.458 [2024-07-24 23:01:15.725763] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 
cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:43.458 [2024-07-24 23:01:15.725771] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:43.458 [2024-07-24 23:01:15.725778] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:43.458 [2024-07-24 23:01:15.725784] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:43.458 [2024-07-24 23:01:15.725792] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:43.458 [2024-07-24 23:01:15.725798] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:43.458 [2024-07-24 23:01:15.725805] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:43.459 [2024-07-24 23:01:15.725814] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:43.459 [2024-07-24 23:01:15.725820] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:43.459 [2024-07-24 23:01:15.725826] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:43.459 [2024-07-24 23:01:15.733723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:43.459 [2024-07-24 23:01:15.733745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:43.459 [2024-07-24 23:01:15.733757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:43.459 [2024-07-24 
23:01:15.733765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:43.459 ===================================================== 00:15:43.459 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:43.459 ===================================================== 00:15:43.459 Controller Capabilities/Features 00:15:43.459 ================================ 00:15:43.459 Vendor ID: 4e58 00:15:43.459 Subsystem Vendor ID: 4e58 00:15:43.459 Serial Number: SPDK2 00:15:43.459 Model Number: SPDK bdev Controller 00:15:43.459 Firmware Version: 24.01.1 00:15:43.459 Recommended Arb Burst: 6 00:15:43.459 IEEE OUI Identifier: 8d 6b 50 00:15:43.459 Multi-path I/O 00:15:43.459 May have multiple subsystem ports: Yes 00:15:43.459 May have multiple controllers: Yes 00:15:43.459 Associated with SR-IOV VF: No 00:15:43.459 Max Data Transfer Size: 131072 00:15:43.459 Max Number of Namespaces: 32 00:15:43.459 Max Number of I/O Queues: 127 00:15:43.459 NVMe Specification Version (VS): 1.3 00:15:43.459 NVMe Specification Version (Identify): 1.3 00:15:43.459 Maximum Queue Entries: 256 00:15:43.459 Contiguous Queues Required: Yes 00:15:43.459 Arbitration Mechanisms Supported 00:15:43.459 Weighted Round Robin: Not Supported 00:15:43.459 Vendor Specific: Not Supported 00:15:43.459 Reset Timeout: 15000 ms 00:15:43.459 Doorbell Stride: 4 bytes 00:15:43.459 NVM Subsystem Reset: Not Supported 00:15:43.459 Command Sets Supported 00:15:43.459 NVM Command Set: Supported 00:15:43.459 Boot Partition: Not Supported 00:15:43.459 Memory Page Size Minimum: 4096 bytes 00:15:43.459 Memory Page Size Maximum: 4096 bytes 00:15:43.459 Persistent Memory Region: Not Supported 00:15:43.459 Optional Asynchronous Events Supported 00:15:43.459 Namespace Attribute Notices: Supported 00:15:43.459 Firmware Activation Notices: Not Supported 00:15:43.459 ANA Change Notices: Not Supported 00:15:43.459 PLE 
Aggregate Log Change Notices: Not Supported 00:15:43.459 LBA Status Info Alert Notices: Not Supported 00:15:43.459 EGE Aggregate Log Change Notices: Not Supported 00:15:43.459 Normal NVM Subsystem Shutdown event: Not Supported 00:15:43.459 Zone Descriptor Change Notices: Not Supported 00:15:43.459 Discovery Log Change Notices: Not Supported 00:15:43.459 Controller Attributes 00:15:43.459 128-bit Host Identifier: Supported 00:15:43.459 Non-Operational Permissive Mode: Not Supported 00:15:43.459 NVM Sets: Not Supported 00:15:43.459 Read Recovery Levels: Not Supported 00:15:43.459 Endurance Groups: Not Supported 00:15:43.459 Predictable Latency Mode: Not Supported 00:15:43.459 Traffic Based Keep ALive: Not Supported 00:15:43.459 Namespace Granularity: Not Supported 00:15:43.459 SQ Associations: Not Supported 00:15:43.459 UUID List: Not Supported 00:15:43.459 Multi-Domain Subsystem: Not Supported 00:15:43.459 Fixed Capacity Management: Not Supported 00:15:43.459 Variable Capacity Management: Not Supported 00:15:43.459 Delete Endurance Group: Not Supported 00:15:43.459 Delete NVM Set: Not Supported 00:15:43.459 Extended LBA Formats Supported: Not Supported 00:15:43.459 Flexible Data Placement Supported: Not Supported 00:15:43.459 00:15:43.459 Controller Memory Buffer Support 00:15:43.459 ================================ 00:15:43.459 Supported: No 00:15:43.459 00:15:43.459 Persistent Memory Region Support 00:15:43.459 ================================ 00:15:43.459 Supported: No 00:15:43.459 00:15:43.459 Admin Command Set Attributes 00:15:43.459 ============================ 00:15:43.459 Security Send/Receive: Not Supported 00:15:43.459 Format NVM: Not Supported 00:15:43.459 Firmware Activate/Download: Not Supported 00:15:43.459 Namespace Management: Not Supported 00:15:43.459 Device Self-Test: Not Supported 00:15:43.459 Directives: Not Supported 00:15:43.459 NVMe-MI: Not Supported 00:15:43.459 Virtualization Management: Not Supported 00:15:43.459 Doorbell Buffer Config: 
Not Supported 00:15:43.459 Get LBA Status Capability: Not Supported 00:15:43.459 Command & Feature Lockdown Capability: Not Supported 00:15:43.459 Abort Command Limit: 4 00:15:43.459 Async Event Request Limit: 4 00:15:43.459 Number of Firmware Slots: N/A 00:15:43.459 Firmware Slot 1 Read-Only: N/A 00:15:43.459 Firmware Activation Without Reset: N/A 00:15:43.459 Multiple Update Detection Support: N/A 00:15:43.459 Firmware Update Granularity: No Information Provided 00:15:43.459 Per-Namespace SMART Log: No 00:15:43.459 Asymmetric Namespace Access Log Page: Not Supported 00:15:43.459 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:15:43.459 Command Effects Log Page: Supported 00:15:43.459 Get Log Page Extended Data: Supported 00:15:43.459 Telemetry Log Pages: Not Supported 00:15:43.459 Persistent Event Log Pages: Not Supported 00:15:43.459 Supported Log Pages Log Page: May Support 00:15:43.459 Commands Supported & Effects Log Page: Not Supported 00:15:43.459 Feature Identifiers & Effects Log Page:May Support 00:15:43.459 NVMe-MI Commands & Effects Log Page: May Support 00:15:43.459 Data Area 4 for Telemetry Log: Not Supported 00:15:43.459 Error Log Page Entries Supported: 128 00:15:43.459 Keep Alive: Supported 00:15:43.459 Keep Alive Granularity: 10000 ms 00:15:43.459 00:15:43.459 NVM Command Set Attributes 00:15:43.459 ========================== 00:15:43.459 Submission Queue Entry Size 00:15:43.459 Max: 64 00:15:43.459 Min: 64 00:15:43.459 Completion Queue Entry Size 00:15:43.459 Max: 16 00:15:43.459 Min: 16 00:15:43.459 Number of Namespaces: 32 00:15:43.459 Compare Command: Supported 00:15:43.459 Write Uncorrectable Command: Not Supported 00:15:43.459 Dataset Management Command: Supported 00:15:43.459 Write Zeroes Command: Supported 00:15:43.459 Set Features Save Field: Not Supported 00:15:43.459 Reservations: Not Supported 00:15:43.459 Timestamp: Not Supported 00:15:43.459 Copy: Supported 00:15:43.459 Volatile Write Cache: Present 00:15:43.459 Atomic Write Unit 
(Normal): 1 00:15:43.459 Atomic Write Unit (PFail): 1 00:15:43.459 Atomic Compare & Write Unit: 1 00:15:43.459 Fused Compare & Write: Supported 00:15:43.459 Scatter-Gather List 00:15:43.459 SGL Command Set: Supported (Dword aligned) 00:15:43.459 SGL Keyed: Not Supported 00:15:43.459 SGL Bit Bucket Descriptor: Not Supported 00:15:43.459 SGL Metadata Pointer: Not Supported 00:15:43.459 Oversized SGL: Not Supported 00:15:43.459 SGL Metadata Address: Not Supported 00:15:43.459 SGL Offset: Not Supported 00:15:43.459 Transport SGL Data Block: Not Supported 00:15:43.459 Replay Protected Memory Block: Not Supported 00:15:43.459 00:15:43.459 Firmware Slot Information 00:15:43.459 ========================= 00:15:43.459 Active slot: 1 00:15:43.459 Slot 1 Firmware Revision: 24.01.1 00:15:43.459 00:15:43.459 00:15:43.459 Commands Supported and Effects 00:15:43.459 ============================== 00:15:43.459 Admin Commands 00:15:43.459 -------------- 00:15:43.459 Get Log Page (02h): Supported 00:15:43.459 Identify (06h): Supported 00:15:43.459 Abort (08h): Supported 00:15:43.459 Set Features (09h): Supported 00:15:43.459 Get Features (0Ah): Supported 00:15:43.459 Asynchronous Event Request (0Ch): Supported 00:15:43.459 Keep Alive (18h): Supported 00:15:43.459 I/O Commands 00:15:43.459 ------------ 00:15:43.459 Flush (00h): Supported LBA-Change 00:15:43.459 Write (01h): Supported LBA-Change 00:15:43.459 Read (02h): Supported 00:15:43.460 Compare (05h): Supported 00:15:43.460 Write Zeroes (08h): Supported LBA-Change 00:15:43.460 Dataset Management (09h): Supported LBA-Change 00:15:43.460 Copy (19h): Supported LBA-Change 00:15:43.460 Unknown (79h): Supported LBA-Change 00:15:43.460 Unknown (7Ah): Supported 00:15:43.460 00:15:43.460 Error Log 00:15:43.460 ========= 00:15:43.460 00:15:43.460 Arbitration 00:15:43.460 =========== 00:15:43.460 Arbitration Burst: 1 00:15:43.460 00:15:43.460 Power Management 00:15:43.460 ================ 00:15:43.460 Number of Power States: 1 00:15:43.460 
Current Power State: Power State #0 00:15:43.460 Power State #0: 00:15:43.460 Max Power: 0.00 W 00:15:43.460 Non-Operational State: Operational 00:15:43.460 Entry Latency: Not Reported 00:15:43.460 Exit Latency: Not Reported 00:15:43.460 Relative Read Throughput: 0 00:15:43.460 Relative Read Latency: 0 00:15:43.460 Relative Write Throughput: 0 00:15:43.460 Relative Write Latency: 0 00:15:43.460 Idle Power: Not Reported 00:15:43.460 Active Power: Not Reported 00:15:43.460 Non-Operational Permissive Mode: Not Supported 00:15:43.460 00:15:43.460 Health Information 00:15:43.460 ================== 00:15:43.460 Critical Warnings: 00:15:43.460 Available Spare Space: OK 00:15:43.460 Temperature: OK 00:15:43.460 Device Reliability: OK 00:15:43.460 Read Only: No 00:15:43.460 Volatile Memory Backup: OK 00:15:43.460 [2024-07-24 23:01:15.733860] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:43.460 [2024-07-24 23:01:15.741722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:43.460 [2024-07-24 23:01:15.741752] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:15:43.460 [2024-07-24 23:01:15.741762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:43.460 [2024-07-24 23:01:15.741770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:43.460 [2024-07-24 23:01:15.741778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:43.460 [2024-07-24 23:01:15.741785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:15:43.460 [2024-07-24 23:01:15.741831] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:43.460 [2024-07-24 23:01:15.741843] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:15:43.460 [2024-07-24 23:01:15.742865] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:15:43.460 [2024-07-24 23:01:15.742876] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:15:43.460 [2024-07-24 23:01:15.743844] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:15:43.460 [2024-07-24 23:01:15.743858] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:15:43.460 [2024-07-24 23:01:15.743905] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:15:43.460 [2024-07-24 23:01:15.744857] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:43.460 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:43.460 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:43.460 Available Spare: 0% 00:15:43.460 Available Spare Threshold: 0% 00:15:43.460 Life Percentage Used: 0% 00:15:43.460 Data Units Read: 0 00:15:43.460 Data Units Written: 0 00:15:43.460 Host Read Commands: 0 00:15:43.460 Host Write Commands: 0 00:15:43.460 Controller Busy Time: 0 minutes 00:15:43.460 Power Cycles: 0 00:15:43.460 Power On Hours: 0 hours 00:15:43.460 Unsafe Shutdowns: 0 00:15:43.460 Unrecoverable Media Errors: 0 00:15:43.460 Lifetime Error Log Entries: 0 00:15:43.460 Warning Temperature Time: 0 minutes 
00:15:43.460 Critical Temperature Time: 0 minutes 00:15:43.460 00:15:43.460 Number of Queues 00:15:43.460 ================ 00:15:43.460 Number of I/O Submission Queues: 127 00:15:43.460 Number of I/O Completion Queues: 127 00:15:43.460 00:15:43.460 Active Namespaces 00:15:43.460 ================= 00:15:43.460 Namespace ID:1 00:15:43.460 Error Recovery Timeout: Unlimited 00:15:43.460 Command Set Identifier: NVM (00h) 00:15:43.460 Deallocate: Supported 00:15:43.460 Deallocated/Unwritten Error: Not Supported 00:15:43.460 Deallocated Read Value: Unknown 00:15:43.460 Deallocate in Write Zeroes: Not Supported 00:15:43.460 Deallocated Guard Field: 0xFFFF 00:15:43.460 Flush: Supported 00:15:43.460 Reservation: Supported 00:15:43.460 Namespace Sharing Capabilities: Multiple Controllers 00:15:43.460 Size (in LBAs): 131072 (0GiB) 00:15:43.460 Capacity (in LBAs): 131072 (0GiB) 00:15:43.460 Utilization (in LBAs): 131072 (0GiB) 00:15:43.460 NGUID: 73BFCD3631C5480D91CB5DB8E80600E9 00:15:43.460 UUID: 73bfcd36-31c5-480d-91cb-5db8e80600e9 00:15:43.460 Thin Provisioning: Not Supported 00:15:43.460 Per-NS Atomic Units: Yes 00:15:43.460 Atomic Boundary Size (Normal): 0 00:15:43.460 Atomic Boundary Size (PFail): 0 00:15:43.460 Atomic Boundary Offset: 0 00:15:43.460 Maximum Single Source Range Length: 65535 00:15:43.460 Maximum Copy Length: 65535 00:15:43.460 Maximum Source Range Count: 1 00:15:43.460 NGUID/EUI64 Never Reused: No 00:15:43.460 Namespace Write Protected: No 00:15:43.460 Number of LBA Formats: 1 00:15:43.460 Current LBA Format: LBA Format #00 00:15:43.460 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:43.460 00:15:43.460 23:01:15 -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:43.460 EAL: No free 2048 kB hugepages reported on node 1 00:15:48.736 
Initializing NVMe Controllers 00:15:48.736 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:48.736 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:48.736 Initialization complete. Launching workers. 00:15:48.736 ======================================================== 00:15:48.736 Latency(us) 00:15:48.736 Device Information : IOPS MiB/s Average min max 00:15:48.736 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39954.40 156.07 3203.67 920.60 9697.01 00:15:48.736 ======================================================== 00:15:48.736 Total : 39954.40 156.07 3203.67 920.60 9697.01 00:15:48.736 00:15:48.736 23:01:21 -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:48.736 EAL: No free 2048 kB hugepages reported on node 1 00:15:54.010 Initializing NVMe Controllers 00:15:54.010 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:54.010 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:54.010 Initialization complete. Launching workers. 
00:15:54.010 ======================================================== 00:15:54.010 Latency(us) 00:15:54.010 Device Information : IOPS MiB/s Average min max 00:15:54.010 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39972.76 156.14 3203.36 900.52 7686.30 00:15:54.010 ======================================================== 00:15:54.010 Total : 39972.76 156.14 3203.36 900.52 7686.30 00:15:54.010 00:15:54.010 23:01:26 -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:54.010 EAL: No free 2048 kB hugepages reported on node 1 00:15:59.314 Initializing NVMe Controllers 00:15:59.314 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:59.314 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:59.314 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:15:59.314 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:15:59.314 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:15:59.314 Initialization complete. Launching workers. 
00:15:59.314 Starting thread on core 2 00:15:59.314 Starting thread on core 3 00:15:59.314 Starting thread on core 1 00:15:59.315 23:01:31 -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:15:59.574 EAL: No free 2048 kB hugepages reported on node 1 00:16:02.863 Initializing NVMe Controllers 00:16:02.863 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:02.863 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:02.863 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:16:02.863 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:16:02.863 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:16:02.863 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:16:02.863 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:16:02.863 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:16:02.863 Initialization complete. Launching workers. 
00:16:02.863 Starting thread on core 1 with urgent priority queue 00:16:02.864 Starting thread on core 2 with urgent priority queue 00:16:02.864 Starting thread on core 3 with urgent priority queue 00:16:02.864 Starting thread on core 0 with urgent priority queue 00:16:02.864 SPDK bdev Controller (SPDK2 ) core 0: 9874.00 IO/s 10.13 secs/100000 ios 00:16:02.864 SPDK bdev Controller (SPDK2 ) core 1: 8022.00 IO/s 12.47 secs/100000 ios 00:16:02.864 SPDK bdev Controller (SPDK2 ) core 2: 7997.00 IO/s 12.50 secs/100000 ios 00:16:02.864 SPDK bdev Controller (SPDK2 ) core 3: 8626.00 IO/s 11.59 secs/100000 ios 00:16:02.864 ======================================================== 00:16:02.864 00:16:02.864 23:01:35 -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:02.864 EAL: No free 2048 kB hugepages reported on node 1 00:16:03.122 Initializing NVMe Controllers 00:16:03.122 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:03.122 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:03.122 Namespace ID: 1 size: 0GB 00:16:03.122 Initialization complete. 00:16:03.122 INFO: using host memory buffer for IO 00:16:03.122 Hello world! 00:16:03.122 23:01:35 -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:03.122 EAL: No free 2048 kB hugepages reported on node 1 00:16:04.499 Initializing NVMe Controllers 00:16:04.499 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:04.499 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:04.499 Initialization complete. Launching workers. 
00:16:04.499 submit (in ns) avg, min, max = 7053.1, 3040.8, 4994771.2 00:16:04.499 complete (in ns) avg, min, max = 21130.0, 1668.0, 7229327.2 00:16:04.499 00:16:04.499 Submit histogram 00:16:04.499 ================ 00:16:04.499 Range in us Cumulative Count 00:16:04.499 3.034 - 3.046: 0.0232% ( 4) 00:16:04.499 3.046 - 3.059: 0.0349% ( 2) 00:16:04.499 3.059 - 3.072: 0.1104% ( 13) 00:16:04.499 3.072 - 3.085: 0.5056% ( 68) 00:16:04.499 3.085 - 3.098: 1.3134% ( 139) 00:16:04.499 3.098 - 3.110: 2.7779% ( 252) 00:16:04.499 3.110 - 3.123: 4.9399% ( 372) 00:16:04.499 3.123 - 3.136: 7.6829% ( 472) 00:16:04.499 3.136 - 3.149: 11.8150% ( 711) 00:16:04.499 3.149 - 3.162: 16.6095% ( 825) 00:16:04.499 3.162 - 3.174: 22.0782% ( 941) 00:16:04.499 3.174 - 3.187: 27.4539% ( 925) 00:16:04.499 3.187 - 3.200: 33.4864% ( 1038) 00:16:04.499 3.200 - 3.213: 39.9314% ( 1109) 00:16:04.499 3.213 - 3.226: 45.5047% ( 959) 00:16:04.499 3.226 - 3.238: 50.0785% ( 787) 00:16:04.499 3.238 - 3.251: 53.6061% ( 607) 00:16:04.499 3.251 - 3.264: 57.4534% ( 662) 00:16:04.499 3.264 - 3.277: 61.2774% ( 658) 00:16:04.499 3.277 - 3.302: 68.1641% ( 1185) 00:16:04.499 3.302 - 3.328: 74.7661% ( 1136) 00:16:04.499 3.328 - 3.354: 81.3564% ( 1134) 00:16:04.499 3.354 - 3.379: 85.9476% ( 790) 00:16:04.499 3.379 - 3.405: 87.7608% ( 312) 00:16:04.499 3.405 - 3.430: 88.4408% ( 117) 00:16:04.499 3.430 - 3.456: 89.4055% ( 166) 00:16:04.499 3.456 - 3.482: 90.6898% ( 221) 00:16:04.499 3.482 - 3.507: 92.2996% ( 277) 00:16:04.499 3.507 - 3.533: 93.8978% ( 275) 00:16:04.499 3.533 - 3.558: 95.1299% ( 212) 00:16:04.499 3.558 - 3.584: 96.5712% ( 248) 00:16:04.499 3.584 - 3.610: 97.7858% ( 209) 00:16:04.499 3.610 - 3.635: 98.5587% ( 133) 00:16:04.499 3.635 - 3.661: 99.1108% ( 95) 00:16:04.499 3.661 - 3.686: 99.3375% ( 39) 00:16:04.499 3.686 - 3.712: 99.5234% ( 32) 00:16:04.499 3.712 - 3.738: 99.6164% ( 16) 00:16:04.499 3.738 - 3.763: 99.6397% ( 4) 00:16:04.499 3.763 - 3.789: 99.6455% ( 1) 00:16:04.499 5.299 - 5.325: 99.6513% ( 1) 
00:16:04.499 5.427 - 5.453: 99.6629% ( 2) 00:16:04.499 5.683 - 5.709: 99.6687% ( 1) 00:16:04.499 5.786 - 5.811: 99.6804% ( 2) 00:16:04.499 5.811 - 5.837: 99.6862% ( 1) 00:16:04.499 5.862 - 5.888: 99.6920% ( 1) 00:16:04.499 6.246 - 6.272: 99.6978% ( 1) 00:16:04.499 6.400 - 6.426: 99.7036% ( 1) 00:16:04.499 6.477 - 6.502: 99.7094% ( 1) 00:16:04.499 6.502 - 6.528: 99.7152% ( 1) 00:16:04.499 6.554 - 6.605: 99.7269% ( 2) 00:16:04.499 6.605 - 6.656: 99.7327% ( 1) 00:16:04.499 6.656 - 6.707: 99.7443% ( 2) 00:16:04.499 6.758 - 6.810: 99.7559% ( 2) 00:16:04.499 6.810 - 6.861: 99.7617% ( 1) 00:16:04.499 6.861 - 6.912: 99.7675% ( 1) 00:16:04.499 6.912 - 6.963: 99.7850% ( 3) 00:16:04.499 6.963 - 7.014: 99.7908% ( 1) 00:16:04.499 7.014 - 7.066: 99.7966% ( 1) 00:16:04.499 7.168 - 7.219: 99.8082% ( 2) 00:16:04.499 7.219 - 7.270: 99.8140% ( 1) 00:16:04.499 7.475 - 7.526: 99.8315% ( 3) 00:16:04.499 7.578 - 7.629: 99.8373% ( 1) 00:16:04.499 7.731 - 7.782: 99.8489% ( 2) 00:16:04.499 7.834 - 7.885: 99.8547% ( 1) 00:16:04.500 7.885 - 7.936: 99.8605% ( 1) 00:16:04.500 7.936 - 7.987: 99.8663% ( 1) 00:16:04.500 8.090 - 8.141: 99.8780% ( 2) 00:16:04.500 8.141 - 8.192: 99.8838% ( 1) 00:16:04.500 8.602 - 8.653: 99.8896% ( 1) 00:16:04.500 9.984 - 10.035: 99.8954% ( 1) 00:16:04.500 11.315 - 11.366: 99.9012% ( 1) 00:16:04.500 19.149 - 19.251: 99.9070% ( 1) 00:16:04.500 3984.589 - 4010.803: 99.9942% ( 15) 00:16:04.500 4980.736 - 5006.950: 100.0000% ( 1) 00:16:04.500 00:16:04.500 Complete histogram 00:16:04.500 ================== 00:16:04.500 Range in us Cumulative Count 00:16:04.500 1.664 - 1.677: 0.0291% ( 5) 00:16:04.500 1.677 - 1.690: 0.0639% ( 6) 00:16:04.500 1.690 - 1.702: 2.2433% ( 375) 00:16:04.500 1.702 - 1.715: 18.7366% ( 2838) 00:16:04.500 1.715 - 1.728: 37.3859% ( 3209) 00:16:04.500 1.728 - 1.741: 43.9589% ( 1131) 00:16:04.500 1.741 - 1.754: 54.8440% ( 1873) 00:16:04.500 1.754 - 1.766: 78.5146% ( 4073) 00:16:04.500 1.766 - 1.779: 92.4566% ( 2399) 00:16:04.500 1.779 - 1.792: 96.5886% ( 
711) 00:16:04.500 1.792 - 1.805: 98.5878% ( 344) 00:16:04.500 1.805 - 1.818: 99.1689% ( 100) 00:16:04.500 1.818 - 1.830: 99.2852% ( 20) 00:16:04.500 1.830 - 1.843: 99.3200% ( 6) 00:16:04.500 1.843 - 1.856: 99.3259% ( 1) 00:16:04.500 1.856 - 1.869: 99.3317% ( 1) 00:16:04.500 1.869 - 1.882: 99.3375% ( 1) 00:16:04.500 1.882 - 1.894: 99.3549% ( 3) 00:16:04.500 1.894 - 1.907: 99.3607% ( 1) 00:16:04.500 1.933 - 1.946: 99.3723% ( 2) 00:16:04.500 2.112 - 2.125: 99.3782% ( 1) 00:16:04.500 4.070 - 4.096: 99.3840% ( 1) 00:16:04.500 4.147 - 4.173: 99.3898% ( 1) 00:16:04.500 4.224 - 4.250: 99.3956% ( 1) 00:16:04.500 4.454 - 4.480: 99.4014% ( 1) 00:16:04.500 4.582 - 4.608: 99.4072% ( 1) 00:16:04.500 4.608 - 4.634: 99.4130% ( 1) 00:16:04.500 4.685 - 4.710: 99.4188% ( 1) 00:16:04.500 4.864 - 4.890: 99.4247% ( 1) 00:16:04.500 4.966 - 4.992: 99.4305% ( 1) 00:16:04.500 5.376 - 5.402: 99.4421% ( 2) 00:16:04.500 5.402 - 5.427: 99.4537% ( 2) 00:16:04.500 5.530 - 5.555: 99.4595% ( 1) 00:16:04.500 5.632 - 5.658: 99.4653% ( 1) 00:16:04.500 5.734 - 5.760: 99.4711% ( 1) 00:16:04.500 5.786 - 5.811: 99.4770% ( 1) 00:16:04.500 5.862 - 5.888: 99.4828% ( 1) 00:16:04.500 5.939 - 5.965: 99.4886% ( 1) 00:16:04.500 6.042 - 6.067: 99.4944% ( 1) 00:16:04.500 6.374 - 6.400: 99.5002% ( 1) 00:16:04.500 6.400 - 6.426: 99.5060% ( 1) 00:16:04.500 8.294 - 8.346: 99.5118% ( 1) 00:16:04.500 48.128 - 48.333: 99.5176% ( 1) 00:16:04.500 368.640 - 370.278: 99.5234% ( 1) 00:16:04.500 3001.549 - 3014.656: 99.5293% ( 1) 00:16:04.500 3014.656 - 3027.763: 99.5351% ( 1) 00:16:04.500 3027.763 - 3040.870: 99.5409% ( 1) 00:16:04.500 3171.942 - 3185.050: 99.5467% ( 1) 00:16:04.500 3984.589 - 4010.803: 99.9709% ( 73) 00:16:04.500 4980.736 - 5006.950: 99.9884% ( 3) 00:16:04.500 6973.030 - 7025.459: 99.9942% ( 1) 00:16:04.500 7182.746 - 7235.174: 100.0000% ( 1) 00:16:04.500 00:16:04.500 23:01:36 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:16:04.500 
23:01:36 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:16:04.500 23:01:36 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:16:04.500 23:01:36 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:16:04.500 23:01:36 -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:04.500 [ 00:16:04.500 { 00:16:04.500 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:04.500 "subtype": "Discovery", 00:16:04.500 "listen_addresses": [], 00:16:04.500 "allow_any_host": true, 00:16:04.500 "hosts": [] 00:16:04.500 }, 00:16:04.500 { 00:16:04.500 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:04.500 "subtype": "NVMe", 00:16:04.500 "listen_addresses": [ 00:16:04.500 { 00:16:04.500 "transport": "VFIOUSER", 00:16:04.500 "trtype": "VFIOUSER", 00:16:04.500 "adrfam": "IPv4", 00:16:04.500 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:04.500 "trsvcid": "0" 00:16:04.500 } 00:16:04.500 ], 00:16:04.500 "allow_any_host": true, 00:16:04.500 "hosts": [], 00:16:04.500 "serial_number": "SPDK1", 00:16:04.500 "model_number": "SPDK bdev Controller", 00:16:04.500 "max_namespaces": 32, 00:16:04.500 "min_cntlid": 1, 00:16:04.500 "max_cntlid": 65519, 00:16:04.500 "namespaces": [ 00:16:04.500 { 00:16:04.500 "nsid": 1, 00:16:04.500 "bdev_name": "Malloc1", 00:16:04.500 "name": "Malloc1", 00:16:04.500 "nguid": "367D8BA2506C4965B7F68653FD89E6B4", 00:16:04.500 "uuid": "367d8ba2-506c-4965-b7f6-8653fd89e6b4" 00:16:04.500 }, 00:16:04.500 { 00:16:04.500 "nsid": 2, 00:16:04.500 "bdev_name": "Malloc3", 00:16:04.500 "name": "Malloc3", 00:16:04.500 "nguid": "5FE58D202B4B4AA8919F094EC0D7A285", 00:16:04.500 "uuid": "5fe58d20-2b4b-4aa8-919f-094ec0d7a285" 00:16:04.500 } 00:16:04.500 ] 00:16:04.500 }, 00:16:04.500 { 00:16:04.500 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:04.500 "subtype": "NVMe", 00:16:04.500 "listen_addresses": [ 00:16:04.500 { 00:16:04.500 
"transport": "VFIOUSER", 00:16:04.500 "trtype": "VFIOUSER", 00:16:04.500 "adrfam": "IPv4", 00:16:04.500 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:04.500 "trsvcid": "0" 00:16:04.500 } 00:16:04.500 ], 00:16:04.500 "allow_any_host": true, 00:16:04.500 "hosts": [], 00:16:04.500 "serial_number": "SPDK2", 00:16:04.500 "model_number": "SPDK bdev Controller", 00:16:04.500 "max_namespaces": 32, 00:16:04.500 "min_cntlid": 1, 00:16:04.500 "max_cntlid": 65519, 00:16:04.500 "namespaces": [ 00:16:04.500 { 00:16:04.500 "nsid": 1, 00:16:04.500 "bdev_name": "Malloc2", 00:16:04.500 "name": "Malloc2", 00:16:04.500 "nguid": "73BFCD3631C5480D91CB5DB8E80600E9", 00:16:04.500 "uuid": "73bfcd36-31c5-480d-91cb-5db8e80600e9" 00:16:04.500 } 00:16:04.500 ] 00:16:04.500 } 00:16:04.500 ] 00:16:04.759 23:01:36 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:04.759 23:01:36 -- target/nvmf_vfio_user.sh@34 -- # aerpid=3174499 00:16:04.759 23:01:36 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:16:04.759 23:01:36 -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:16:04.759 23:01:36 -- common/autotest_common.sh@1244 -- # local i=0 00:16:04.759 23:01:36 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:04.760 23:01:36 -- common/autotest_common.sh@1251 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:16:04.760 23:01:36 -- common/autotest_common.sh@1255 -- # return 0 00:16:04.760 23:01:36 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:04.760 23:01:36 -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:16:04.760 EAL: No free 2048 kB hugepages reported on node 1 00:16:04.760 Malloc4 00:16:04.760 23:01:37 -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:16:05.017 23:01:37 -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:05.017 Asynchronous Event Request test 00:16:05.017 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:05.017 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:05.017 Registering asynchronous event callbacks... 00:16:05.017 Starting namespace attribute notice tests for all controllers... 00:16:05.017 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:05.017 aer_cb - Changed Namespace 00:16:05.017 Cleaning up... 
00:16:05.276 [ 00:16:05.276 { 00:16:05.276 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:05.276 "subtype": "Discovery", 00:16:05.276 "listen_addresses": [], 00:16:05.276 "allow_any_host": true, 00:16:05.276 "hosts": [] 00:16:05.276 }, 00:16:05.276 { 00:16:05.276 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:05.276 "subtype": "NVMe", 00:16:05.276 "listen_addresses": [ 00:16:05.276 { 00:16:05.276 "transport": "VFIOUSER", 00:16:05.276 "trtype": "VFIOUSER", 00:16:05.276 "adrfam": "IPv4", 00:16:05.276 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:05.276 "trsvcid": "0" 00:16:05.276 } 00:16:05.276 ], 00:16:05.276 "allow_any_host": true, 00:16:05.276 "hosts": [], 00:16:05.276 "serial_number": "SPDK1", 00:16:05.276 "model_number": "SPDK bdev Controller", 00:16:05.276 "max_namespaces": 32, 00:16:05.276 "min_cntlid": 1, 00:16:05.276 "max_cntlid": 65519, 00:16:05.276 "namespaces": [ 00:16:05.276 { 00:16:05.276 "nsid": 1, 00:16:05.276 "bdev_name": "Malloc1", 00:16:05.276 "name": "Malloc1", 00:16:05.276 "nguid": "367D8BA2506C4965B7F68653FD89E6B4", 00:16:05.276 "uuid": "367d8ba2-506c-4965-b7f6-8653fd89e6b4" 00:16:05.276 }, 00:16:05.276 { 00:16:05.276 "nsid": 2, 00:16:05.276 "bdev_name": "Malloc3", 00:16:05.276 "name": "Malloc3", 00:16:05.276 "nguid": "5FE58D202B4B4AA8919F094EC0D7A285", 00:16:05.276 "uuid": "5fe58d20-2b4b-4aa8-919f-094ec0d7a285" 00:16:05.276 } 00:16:05.276 ] 00:16:05.276 }, 00:16:05.276 { 00:16:05.276 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:05.276 "subtype": "NVMe", 00:16:05.276 "listen_addresses": [ 00:16:05.276 { 00:16:05.276 "transport": "VFIOUSER", 00:16:05.276 "trtype": "VFIOUSER", 00:16:05.276 "adrfam": "IPv4", 00:16:05.276 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:05.276 "trsvcid": "0" 00:16:05.276 } 00:16:05.276 ], 00:16:05.276 "allow_any_host": true, 00:16:05.276 "hosts": [], 00:16:05.276 "serial_number": "SPDK2", 00:16:05.276 "model_number": "SPDK bdev Controller", 00:16:05.276 "max_namespaces": 32, 00:16:05.276 
"min_cntlid": 1, 00:16:05.276 "max_cntlid": 65519, 00:16:05.276 "namespaces": [ 00:16:05.276 { 00:16:05.276 "nsid": 1, 00:16:05.276 "bdev_name": "Malloc2", 00:16:05.276 "name": "Malloc2", 00:16:05.276 "nguid": "73BFCD3631C5480D91CB5DB8E80600E9", 00:16:05.276 "uuid": "73bfcd36-31c5-480d-91cb-5db8e80600e9" 00:16:05.276 }, 00:16:05.276 { 00:16:05.276 "nsid": 2, 00:16:05.276 "bdev_name": "Malloc4", 00:16:05.276 "name": "Malloc4", 00:16:05.276 "nguid": "A35428B52C974502B69052E710815B45", 00:16:05.276 "uuid": "a35428b5-2c97-4502-b690-52e710815b45" 00:16:05.276 } 00:16:05.276 ] 00:16:05.276 } 00:16:05.276 ] 00:16:05.276 23:01:37 -- target/nvmf_vfio_user.sh@44 -- # wait 3174499 00:16:05.276 23:01:37 -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:16:05.276 23:01:37 -- target/nvmf_vfio_user.sh@95 -- # killprocess 3166426 00:16:05.276 23:01:37 -- common/autotest_common.sh@926 -- # '[' -z 3166426 ']' 00:16:05.276 23:01:37 -- common/autotest_common.sh@930 -- # kill -0 3166426 00:16:05.276 23:01:37 -- common/autotest_common.sh@931 -- # uname 00:16:05.276 23:01:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:05.276 23:01:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3166426 00:16:05.276 23:01:37 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:05.276 23:01:37 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:05.276 23:01:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3166426' 00:16:05.276 killing process with pid 3166426 00:16:05.276 23:01:37 -- common/autotest_common.sh@945 -- # kill 3166426 00:16:05.276 [2024-07-24 23:01:37.543338] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:16:05.276 23:01:37 -- common/autotest_common.sh@950 -- # wait 3166426 00:16:05.534 23:01:37 -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 
00:16:05.534 23:01:37 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:05.534 23:01:37 -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:16:05.534 23:01:37 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:16:05.534 23:01:37 -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:16:05.534 23:01:37 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3174519 00:16:05.534 23:01:37 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3174519' 00:16:05.534 Process pid: 3174519 00:16:05.534 23:01:37 -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:16:05.534 23:01:37 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:05.534 23:01:37 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3174519 00:16:05.534 23:01:37 -- common/autotest_common.sh@819 -- # '[' -z 3174519 ']' 00:16:05.534 23:01:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:05.534 23:01:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:05.534 23:01:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:05.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:05.534 23:01:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:05.534 23:01:37 -- common/autotest_common.sh@10 -- # set +x 00:16:05.534 [2024-07-24 23:01:37.844413] thread.c:2927:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:16:05.534 [2024-07-24 23:01:37.845387] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:16:05.534 [2024-07-24 23:01:37.845430] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:05.534 EAL: No free 2048 kB hugepages reported on node 1 00:16:05.534 [2024-07-24 23:01:37.926408] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:05.534 [2024-07-24 23:01:37.963394] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:05.534 [2024-07-24 23:01:37.963507] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:05.534 [2024-07-24 23:01:37.963517] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:05.534 [2024-07-24 23:01:37.963531] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:05.534 [2024-07-24 23:01:37.963576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:05.534 [2024-07-24 23:01:37.963593] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:05.534 [2024-07-24 23:01:37.963677] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:05.534 [2024-07-24 23:01:37.963679] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:05.792 [2024-07-24 23:01:38.031707] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_0) to intr mode from intr mode. 00:16:05.792 [2024-07-24 23:01:38.031865] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_1) to intr mode from intr mode. 00:16:05.792 [2024-07-24 23:01:38.031987] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_2) to intr mode from intr mode. 
00:16:05.792 [2024-07-24 23:01:38.032487] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:16:05.792 [2024-07-24 23:01:38.032580] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_3) to intr mode from intr mode. 00:16:06.359 23:01:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:06.359 23:01:38 -- common/autotest_common.sh@852 -- # return 0 00:16:06.359 23:01:38 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:16:07.294 23:01:39 -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:16:07.553 23:01:39 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:16:07.553 23:01:39 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:16:07.553 23:01:39 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:07.553 23:01:39 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:16:07.553 23:01:39 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:07.813 Malloc1 00:16:07.813 23:01:40 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:16:07.813 23:01:40 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:16:08.071 23:01:40 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:16:08.330 23:01:40 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:08.330 23:01:40 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p 
/var/run/vfio-user/domain/vfio-user2/2 00:16:08.330 23:01:40 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:08.330 Malloc2 00:16:08.330 23:01:40 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:16:08.588 23:01:40 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:16:08.846 23:01:41 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:16:08.846 23:01:41 -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:16:08.846 23:01:41 -- target/nvmf_vfio_user.sh@95 -- # killprocess 3174519 00:16:08.846 23:01:41 -- common/autotest_common.sh@926 -- # '[' -z 3174519 ']' 00:16:08.846 23:01:41 -- common/autotest_common.sh@930 -- # kill -0 3174519 00:16:08.846 23:01:41 -- common/autotest_common.sh@931 -- # uname 00:16:08.846 23:01:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:08.846 23:01:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3174519 00:16:09.105 23:01:41 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:09.105 23:01:41 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:09.105 23:01:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3174519' 00:16:09.105 killing process with pid 3174519 00:16:09.105 23:01:41 -- common/autotest_common.sh@945 -- # kill 3174519 00:16:09.105 23:01:41 -- common/autotest_common.sh@950 -- # wait 3174519 00:16:09.105 23:01:41 -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:09.105 23:01:41 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM 
EXIT 00:16:09.105 00:16:09.105 real 0m51.240s 00:16:09.105 user 3m21.810s 00:16:09.105 sys 0m4.633s 00:16:09.105 23:01:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:09.105 23:01:41 -- common/autotest_common.sh@10 -- # set +x 00:16:09.105 ************************************ 00:16:09.105 END TEST nvmf_vfio_user 00:16:09.105 ************************************ 00:16:09.364 23:01:41 -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:09.364 23:01:41 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:09.364 23:01:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:09.364 23:01:41 -- common/autotest_common.sh@10 -- # set +x 00:16:09.364 ************************************ 00:16:09.364 START TEST nvmf_vfio_user_nvme_compliance 00:16:09.364 ************************************ 00:16:09.364 23:01:41 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:09.364 * Looking for test storage... 
00:16:09.364 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:16:09.364 23:01:41 -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:09.364 23:01:41 -- nvmf/common.sh@7 -- # uname -s 00:16:09.364 23:01:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:09.364 23:01:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:09.364 23:01:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:09.364 23:01:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:09.364 23:01:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:09.364 23:01:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:09.364 23:01:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:09.364 23:01:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:09.364 23:01:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:09.364 23:01:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:09.365 23:01:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:16:09.365 23:01:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:16:09.365 23:01:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:09.365 23:01:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:09.365 23:01:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:09.365 23:01:41 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:09.365 23:01:41 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:09.365 23:01:41 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:09.365 23:01:41 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:09.365 23:01:41 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:09.365 23:01:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:09.365 23:01:41 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:09.365 23:01:41 -- paths/export.sh@5 -- # export PATH 00:16:09.365 23:01:41 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:09.365 23:01:41 -- nvmf/common.sh@46 -- # : 0 00:16:09.365 23:01:41 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:09.365 23:01:41 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:09.365 23:01:41 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:09.365 23:01:41 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:09.365 23:01:41 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:09.365 23:01:41 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:09.365 23:01:41 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:09.365 23:01:41 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:09.365 23:01:41 -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:09.365 23:01:41 -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:09.365 23:01:41 -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:16:09.365 23:01:41 -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:16:09.365 23:01:41 -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:16:09.365 23:01:41 -- compliance/compliance.sh@20 -- # nvmfpid=3175388 00:16:09.365 23:01:41 -- compliance/compliance.sh@21 -- # echo 'Process pid: 3175388' 00:16:09.365 Process pid: 3175388 00:16:09.365 23:01:41 -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:09.365 23:01:41 -- compliance/compliance.sh@24 -- # waitforlisten 3175388 00:16:09.365 23:01:41 -- 
compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:09.365 23:01:41 -- common/autotest_common.sh@819 -- # '[' -z 3175388 ']' 00:16:09.365 23:01:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:09.365 23:01:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:09.365 23:01:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:09.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:09.365 23:01:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:09.365 23:01:41 -- common/autotest_common.sh@10 -- # set +x 00:16:09.365 [2024-07-24 23:01:41.738101] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:16:09.365 [2024-07-24 23:01:41.738158] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:09.365 EAL: No free 2048 kB hugepages reported on node 1 00:16:09.624 [2024-07-24 23:01:41.811658] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:09.624 [2024-07-24 23:01:41.847700] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:09.624 [2024-07-24 23:01:41.847848] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:09.624 [2024-07-24 23:01:41.847858] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:09.624 [2024-07-24 23:01:41.847867] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:09.624 [2024-07-24 23:01:41.847916] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:09.624 [2024-07-24 23:01:41.848012] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:09.624 [2024-07-24 23:01:41.848015] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:10.191 23:01:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:10.191 23:01:42 -- common/autotest_common.sh@852 -- # return 0 00:16:10.191 23:01:42 -- compliance/compliance.sh@26 -- # sleep 1 00:16:11.139 23:01:43 -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:11.139 23:01:43 -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:16:11.139 23:01:43 -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:11.139 23:01:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:11.139 23:01:43 -- common/autotest_common.sh@10 -- # set +x 00:16:11.139 23:01:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:11.139 23:01:43 -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:16:11.139 23:01:43 -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:11.139 23:01:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:11.139 23:01:43 -- common/autotest_common.sh@10 -- # set +x 00:16:11.139 malloc0 00:16:11.139 23:01:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:11.139 23:01:43 -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:16:11.139 23:01:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:11.139 23:01:43 -- common/autotest_common.sh@10 -- # set +x 00:16:11.402 23:01:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:11.402 23:01:43 -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:11.402 23:01:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:11.402 
23:01:43 -- common/autotest_common.sh@10 -- # set +x 00:16:11.402 23:01:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:11.402 23:01:43 -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:11.402 23:01:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:11.402 23:01:43 -- common/autotest_common.sh@10 -- # set +x 00:16:11.402 23:01:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:11.402 23:01:43 -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:16:11.402 EAL: No free 2048 kB hugepages reported on node 1 00:16:11.402 00:16:11.402 00:16:11.402 CUnit - A unit testing framework for C - Version 2.1-3 00:16:11.402 http://cunit.sourceforge.net/ 00:16:11.402 00:16:11.402 00:16:11.402 Suite: nvme_compliance 00:16:11.402 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-24 23:01:43.768051] vfio_user.c: 789:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:16:11.402 [2024-07-24 23:01:43.768099] vfio_user.c:5484:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:16:11.402 [2024-07-24 23:01:43.768108] vfio_user.c:5576:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:16:11.402 passed 00:16:11.661 Test: admin_identify_ctrlr_verify_fused ...passed 00:16:11.661 Test: admin_identify_ns ...[2024-07-24 23:01:43.987738] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:16:11.661 [2024-07-24 23:01:43.995725] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:16:11.661 passed 00:16:11.919 Test: admin_get_features_mandatory_features ...passed 00:16:11.919 Test: admin_get_features_optional_features ...passed 00:16:12.178 Test: 
admin_set_features_number_of_queues ...passed 00:16:12.178 Test: admin_get_log_page_mandatory_logs ...passed 00:16:12.178 Test: admin_get_log_page_with_lpo ...[2024-07-24 23:01:44.576725] ctrlr.c:2546:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:16:12.437 passed 00:16:12.437 Test: fabric_property_get ...passed 00:16:12.437 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-24 23:01:44.744008] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:16:12.437 passed 00:16:12.695 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-24 23:01:44.908721] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:12.695 [2024-07-24 23:01:44.921724] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:12.695 passed 00:16:12.695 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-24 23:01:45.002553] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:16:12.695 passed 00:16:12.954 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-24 23:01:45.157723] vfio_user.c:2310:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:12.954 [2024-07-24 23:01:45.181726] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:12.954 passed 00:16:12.954 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-24 23:01:45.262474] vfio_user.c:2150:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:16:12.954 [2024-07-24 23:01:45.262505] vfio_user.c:2144:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:16:12.954 passed 00:16:13.212 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-24 23:01:45.434738] vfio_user.c:2231:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:16:13.212 [2024-07-24 23:01:45.442726] vfio_user.c:2231:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 257 00:16:13.212 [2024-07-24 23:01:45.450726] vfio_user.c:2031:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:16:13.212 [2024-07-24 23:01:45.458720] vfio_user.c:2031:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:16:13.212 passed 00:16:13.212 Test: admin_create_io_sq_verify_pc ...[2024-07-24 23:01:45.574731] vfio_user.c:2044:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:16:13.212 passed 00:16:14.626 Test: admin_create_io_qp_max_qps ...[2024-07-24 23:01:46.766732] nvme_ctrlr.c:5318:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:16:14.884 passed 00:16:15.143 Test: admin_create_io_sq_shared_cq ...[2024-07-24 23:01:47.360725] vfio_user.c:2310:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:15.143 passed 00:16:15.143 00:16:15.143 Run Summary: Type Total Ran Passed Failed Inactive 00:16:15.143 suites 1 1 n/a 0 0 00:16:15.143 tests 18 18 18 0 0 00:16:15.143 asserts 360 360 360 0 n/a 00:16:15.143 00:16:15.143 Elapsed time = 1.494 seconds 00:16:15.143 23:01:47 -- compliance/compliance.sh@42 -- # killprocess 3175388 00:16:15.143 23:01:47 -- common/autotest_common.sh@926 -- # '[' -z 3175388 ']' 00:16:15.143 23:01:47 -- common/autotest_common.sh@930 -- # kill -0 3175388 00:16:15.143 23:01:47 -- common/autotest_common.sh@931 -- # uname 00:16:15.143 23:01:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:15.143 23:01:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3175388 00:16:15.143 23:01:47 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:15.143 23:01:47 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:15.143 23:01:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3175388' 00:16:15.143 killing process with pid 3175388 00:16:15.143 23:01:47 -- common/autotest_common.sh@945 -- # kill 3175388 00:16:15.143 
23:01:47 -- common/autotest_common.sh@950 -- # wait 3175388 00:16:15.402 23:01:47 -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:16:15.402 23:01:47 -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:16:15.402 00:16:15.402 real 0m6.124s 00:16:15.402 user 0m17.303s 00:16:15.402 sys 0m0.684s 00:16:15.402 23:01:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:15.402 23:01:47 -- common/autotest_common.sh@10 -- # set +x 00:16:15.402 ************************************ 00:16:15.402 END TEST nvmf_vfio_user_nvme_compliance 00:16:15.402 ************************************ 00:16:15.402 23:01:47 -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:15.402 23:01:47 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:15.402 23:01:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:15.402 23:01:47 -- common/autotest_common.sh@10 -- # set +x 00:16:15.402 ************************************ 00:16:15.402 START TEST nvmf_vfio_user_fuzz 00:16:15.402 ************************************ 00:16:15.402 23:01:47 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:15.402 * Looking for test storage... 
00:16:15.661 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:15.661 23:01:47 -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:15.661 23:01:47 -- nvmf/common.sh@7 -- # uname -s 00:16:15.661 23:01:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:15.661 23:01:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:15.661 23:01:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:15.661 23:01:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:15.661 23:01:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:15.661 23:01:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:15.661 23:01:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:15.661 23:01:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:15.661 23:01:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:15.661 23:01:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:15.661 23:01:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:16:15.661 23:01:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:16:15.661 23:01:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:15.661 23:01:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:15.661 23:01:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:15.661 23:01:47 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:15.661 23:01:47 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:15.661 23:01:47 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:15.661 23:01:47 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:15.661 23:01:47 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.661 23:01:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.661 23:01:47 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.661 23:01:47 -- paths/export.sh@5 -- # export PATH 00:16:15.661 23:01:47 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.661 23:01:47 -- nvmf/common.sh@46 -- # : 0 00:16:15.661 23:01:47 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:15.661 23:01:47 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:15.661 23:01:47 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:15.661 23:01:47 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:15.661 23:01:47 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:15.661 23:01:47 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:15.661 23:01:47 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:15.661 23:01:47 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:15.661 23:01:47 -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:15.661 23:01:47 -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:15.661 23:01:47 -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:15.661 23:01:47 -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:16:15.661 23:01:47 -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:16:15.661 23:01:47 -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:16:15.661 23:01:47 -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:16:15.661 23:01:47 -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=3176511 00:16:15.661 23:01:47 -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 3176511' 00:16:15.661 Process pid: 3176511 00:16:15.661 23:01:47 -- target/vfio_user_fuzz.sh@27 -- # trap 
'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:15.661 23:01:47 -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 3176511 00:16:15.661 23:01:47 -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:15.661 23:01:47 -- common/autotest_common.sh@819 -- # '[' -z 3176511 ']' 00:16:15.661 23:01:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:15.661 23:01:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:15.661 23:01:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:15.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:15.661 23:01:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:15.661 23:01:47 -- common/autotest_common.sh@10 -- # set +x 00:16:16.598 23:01:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:16.598 23:01:48 -- common/autotest_common.sh@852 -- # return 0 00:16:16.598 23:01:48 -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:16:17.535 23:01:49 -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:17.535 23:01:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:17.535 23:01:49 -- common/autotest_common.sh@10 -- # set +x 00:16:17.535 23:01:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:17.535 23:01:49 -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:16:17.536 23:01:49 -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:17.536 23:01:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:17.536 23:01:49 -- common/autotest_common.sh@10 -- # set +x 00:16:17.536 malloc0 00:16:17.536 23:01:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:17.536 23:01:49 -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s 
spdk 00:16:17.536 23:01:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:17.536 23:01:49 -- common/autotest_common.sh@10 -- # set +x 00:16:17.536 23:01:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:17.536 23:01:49 -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:17.536 23:01:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:17.536 23:01:49 -- common/autotest_common.sh@10 -- # set +x 00:16:17.536 23:01:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:17.536 23:01:49 -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:17.536 23:01:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:17.536 23:01:49 -- common/autotest_common.sh@10 -- # set +x 00:16:17.536 23:01:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:17.536 23:01:49 -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:16:17.536 23:01:49 -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/vfio_user_fuzz -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:16:49.626 Fuzzing completed. 
Shutting down the fuzz application 00:16:49.626 00:16:49.626 Dumping successful admin opcodes: 00:16:49.626 8, 9, 10, 24, 00:16:49.626 Dumping successful io opcodes: 00:16:49.626 0, 00:16:49.626 NS: 0x200003a1ef00 I/O qp, Total commands completed: 897975, total successful commands: 3505, random_seed: 2973293056 00:16:49.626 NS: 0x200003a1ef00 admin qp, Total commands completed: 224432, total successful commands: 1806, random_seed: 1819794496 00:16:49.626 23:02:20 -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:16:49.626 23:02:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:49.626 23:02:20 -- common/autotest_common.sh@10 -- # set +x 00:16:49.626 23:02:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:49.626 23:02:20 -- target/vfio_user_fuzz.sh@46 -- # killprocess 3176511 00:16:49.626 23:02:20 -- common/autotest_common.sh@926 -- # '[' -z 3176511 ']' 00:16:49.626 23:02:20 -- common/autotest_common.sh@930 -- # kill -0 3176511 00:16:49.626 23:02:20 -- common/autotest_common.sh@931 -- # uname 00:16:49.626 23:02:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:49.626 23:02:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3176511 00:16:49.626 23:02:20 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:49.626 23:02:20 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:49.626 23:02:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3176511' 00:16:49.626 killing process with pid 3176511 00:16:49.626 23:02:20 -- common/autotest_common.sh@945 -- # kill 3176511 00:16:49.626 23:02:20 -- common/autotest_common.sh@950 -- # wait 3176511 00:16:49.626 23:02:20 -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:16:49.626 23:02:20 -- 
target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:16:49.626 00:16:49.626 real 0m32.743s 00:16:49.626 user 0m32.215s 00:16:49.626 sys 0m29.672s 00:16:49.626 23:02:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:49.626 23:02:20 -- common/autotest_common.sh@10 -- # set +x 00:16:49.626 ************************************ 00:16:49.626 END TEST nvmf_vfio_user_fuzz 00:16:49.626 ************************************ 00:16:49.626 23:02:20 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:49.626 23:02:20 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:49.626 23:02:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:49.626 23:02:20 -- common/autotest_common.sh@10 -- # set +x 00:16:49.626 ************************************ 00:16:49.626 START TEST nvmf_host_management 00:16:49.626 ************************************ 00:16:49.626 23:02:20 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:49.626 * Looking for test storage... 
00:16:49.626 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:49.626 23:02:20 -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:49.626 23:02:20 -- nvmf/common.sh@7 -- # uname -s 00:16:49.627 23:02:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:49.627 23:02:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:49.627 23:02:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:49.627 23:02:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:49.627 23:02:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:49.627 23:02:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:49.627 23:02:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:49.627 23:02:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:49.627 23:02:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:49.627 23:02:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:49.627 23:02:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:16:49.627 23:02:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:16:49.627 23:02:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:49.627 23:02:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:49.627 23:02:20 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:49.627 23:02:20 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:49.627 23:02:20 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:49.627 23:02:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:49.627 23:02:20 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:49.627 23:02:20 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.627 23:02:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.627 23:02:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.627 23:02:20 -- paths/export.sh@5 -- # export PATH 00:16:49.627 23:02:20 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.627 23:02:20 -- nvmf/common.sh@46 -- # : 0 00:16:49.627 23:02:20 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:49.627 23:02:20 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:49.627 23:02:20 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:49.627 23:02:20 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:49.627 23:02:20 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:49.627 23:02:20 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:49.627 23:02:20 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:49.627 23:02:20 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:49.627 23:02:20 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:49.627 23:02:20 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:49.627 23:02:20 -- target/host_management.sh@104 -- # nvmftestinit 00:16:49.627 23:02:20 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:49.627 23:02:20 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:49.627 23:02:20 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:49.627 23:02:20 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:49.627 23:02:20 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:49.627 23:02:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:49.627 23:02:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:49.627 23:02:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:16:49.627 23:02:20 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:49.627 23:02:20 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:49.627 23:02:20 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:49.627 23:02:20 -- common/autotest_common.sh@10 -- # set +x 00:16:54.904 23:02:27 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:54.904 23:02:27 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:54.904 23:02:27 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:54.904 23:02:27 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:54.904 23:02:27 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:54.904 23:02:27 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:54.904 23:02:27 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:54.904 23:02:27 -- nvmf/common.sh@294 -- # net_devs=() 00:16:54.904 23:02:27 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:54.904 23:02:27 -- nvmf/common.sh@295 -- # e810=() 00:16:54.904 23:02:27 -- nvmf/common.sh@295 -- # local -ga e810 00:16:54.904 23:02:27 -- nvmf/common.sh@296 -- # x722=() 00:16:54.904 23:02:27 -- nvmf/common.sh@296 -- # local -ga x722 00:16:54.904 23:02:27 -- nvmf/common.sh@297 -- # mlx=() 00:16:54.904 23:02:27 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:54.904 23:02:27 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:54.904 23:02:27 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:54.904 23:02:27 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:54.904 23:02:27 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:54.904 23:02:27 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:54.904 23:02:27 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:54.904 23:02:27 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:54.904 23:02:27 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 
00:16:54.904 23:02:27 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:54.904 23:02:27 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:54.904 23:02:27 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:54.904 23:02:27 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:54.904 23:02:27 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:16:54.904 23:02:27 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:16:54.904 23:02:27 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:16:54.904 23:02:27 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:16:54.904 23:02:27 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:54.904 23:02:27 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:54.904 23:02:27 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:54.904 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:54.904 23:02:27 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:54.904 23:02:27 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:54.904 23:02:27 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:54.904 23:02:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:54.904 23:02:27 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:54.904 23:02:27 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:54.904 23:02:27 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:54.904 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:54.904 23:02:27 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:54.904 23:02:27 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:54.904 23:02:27 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:54.904 23:02:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:54.904 23:02:27 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:54.904 23:02:27 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:54.904 23:02:27 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:16:54.904 
23:02:27 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:16:54.904 23:02:27 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:54.904 23:02:27 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:54.904 23:02:27 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:54.904 23:02:27 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:54.904 23:02:27 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:54.904 Found net devices under 0000:af:00.0: cvl_0_0 00:16:54.904 23:02:27 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:54.904 23:02:27 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:54.904 23:02:27 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:54.904 23:02:27 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:54.904 23:02:27 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:54.904 23:02:27 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:54.904 Found net devices under 0000:af:00.1: cvl_0_1 00:16:54.904 23:02:27 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:54.905 23:02:27 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:54.905 23:02:27 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:54.905 23:02:27 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:54.905 23:02:27 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:16:54.905 23:02:27 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:16:54.905 23:02:27 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:54.905 23:02:27 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:54.905 23:02:27 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:54.905 23:02:27 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:16:54.905 23:02:27 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:54.905 23:02:27 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:54.905 23:02:27 -- 
nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:16:54.905 23:02:27 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:54.905 23:02:27 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:54.905 23:02:27 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:16:54.905 23:02:27 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:16:54.905 23:02:27 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:16:54.905 23:02:27 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:54.905 23:02:27 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:54.905 23:02:27 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:54.905 23:02:27 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:16:54.905 23:02:27 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:55.164 23:02:27 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:55.164 23:02:27 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:55.164 23:02:27 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:16:55.164 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:55.164 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.291 ms 00:16:55.164 00:16:55.164 --- 10.0.0.2 ping statistics --- 00:16:55.164 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:55.164 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:16:55.164 23:02:27 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:55.164 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:55.164 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:16:55.164 00:16:55.164 --- 10.0.0.1 ping statistics --- 00:16:55.164 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:55.164 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:16:55.164 23:02:27 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:55.164 23:02:27 -- nvmf/common.sh@410 -- # return 0 00:16:55.164 23:02:27 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:55.164 23:02:27 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:55.164 23:02:27 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:55.164 23:02:27 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:55.164 23:02:27 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:55.164 23:02:27 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:55.164 23:02:27 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:55.164 23:02:27 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:16:55.165 23:02:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:16:55.165 23:02:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:55.165 23:02:27 -- common/autotest_common.sh@10 -- # set +x 00:16:55.165 ************************************ 00:16:55.165 START TEST nvmf_host_management 00:16:55.165 ************************************ 00:16:55.165 23:02:27 -- common/autotest_common.sh@1104 -- # nvmf_host_management 00:16:55.165 23:02:27 -- target/host_management.sh@69 -- # starttarget 00:16:55.165 23:02:27 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:16:55.165 23:02:27 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:55.165 23:02:27 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:55.165 23:02:27 -- common/autotest_common.sh@10 -- # set +x 00:16:55.165 23:02:27 -- nvmf/common.sh@469 -- # nvmfpid=3185241 00:16:55.165 23:02:27 -- nvmf/common.sh@470 -- # waitforlisten 3185241 
00:16:55.165 23:02:27 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:16:55.165 23:02:27 -- common/autotest_common.sh@819 -- # '[' -z 3185241 ']' 00:16:55.165 23:02:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:55.165 23:02:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:55.165 23:02:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:55.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:55.165 23:02:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:55.165 23:02:27 -- common/autotest_common.sh@10 -- # set +x 00:16:55.165 [2024-07-24 23:02:27.515633] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:16:55.165 [2024-07-24 23:02:27.515693] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:55.165 EAL: No free 2048 kB hugepages reported on node 1 00:16:55.165 [2024-07-24 23:02:27.592271] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:55.424 [2024-07-24 23:02:27.631136] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:55.424 [2024-07-24 23:02:27.631245] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:55.424 [2024-07-24 23:02:27.631256] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:55.424 [2024-07-24 23:02:27.631265] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:55.424 [2024-07-24 23:02:27.631366] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:55.424 [2024-07-24 23:02:27.631451] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:55.424 [2024-07-24 23:02:27.631559] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:55.424 [2024-07-24 23:02:27.631561] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:55.993 23:02:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:55.993 23:02:28 -- common/autotest_common.sh@852 -- # return 0 00:16:55.993 23:02:28 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:55.993 23:02:28 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:55.993 23:02:28 -- common/autotest_common.sh@10 -- # set +x 00:16:55.993 23:02:28 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:55.993 23:02:28 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:55.993 23:02:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:55.993 23:02:28 -- common/autotest_common.sh@10 -- # set +x 00:16:55.993 [2024-07-24 23:02:28.376067] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:55.993 23:02:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:55.993 23:02:28 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:16:55.993 23:02:28 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:55.993 23:02:28 -- common/autotest_common.sh@10 -- # set +x 00:16:55.993 23:02:28 -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:55.993 23:02:28 -- target/host_management.sh@23 -- # cat 00:16:55.993 23:02:28 -- target/host_management.sh@30 -- # rpc_cmd 00:16:55.993 23:02:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:55.993 23:02:28 -- common/autotest_common.sh@10 -- # set +x 00:16:55.993 
Malloc0 00:16:56.253 [2024-07-24 23:02:28.442785] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:56.253 23:02:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:56.253 23:02:28 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:16:56.253 23:02:28 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:56.253 23:02:28 -- common/autotest_common.sh@10 -- # set +x 00:16:56.253 23:02:28 -- target/host_management.sh@73 -- # perfpid=3185544 00:16:56.253 23:02:28 -- target/host_management.sh@74 -- # waitforlisten 3185544 /var/tmp/bdevperf.sock 00:16:56.253 23:02:28 -- common/autotest_common.sh@819 -- # '[' -z 3185544 ']' 00:16:56.253 23:02:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:56.253 23:02:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:56.253 23:02:28 -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:16:56.253 23:02:28 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:16:56.253 23:02:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:56.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:16:56.253 23:02:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:56.253 23:02:28 -- nvmf/common.sh@520 -- # config=() 00:16:56.253 23:02:28 -- common/autotest_common.sh@10 -- # set +x 00:16:56.253 23:02:28 -- nvmf/common.sh@520 -- # local subsystem config 00:16:56.253 23:02:28 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:56.253 23:02:28 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:56.253 { 00:16:56.253 "params": { 00:16:56.253 "name": "Nvme$subsystem", 00:16:56.253 "trtype": "$TEST_TRANSPORT", 00:16:56.253 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:56.253 "adrfam": "ipv4", 00:16:56.253 "trsvcid": "$NVMF_PORT", 00:16:56.253 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:56.253 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:56.253 "hdgst": ${hdgst:-false}, 00:16:56.253 "ddgst": ${ddgst:-false} 00:16:56.253 }, 00:16:56.253 "method": "bdev_nvme_attach_controller" 00:16:56.253 } 00:16:56.253 EOF 00:16:56.253 )") 00:16:56.253 23:02:28 -- nvmf/common.sh@542 -- # cat 00:16:56.253 23:02:28 -- nvmf/common.sh@544 -- # jq . 00:16:56.253 23:02:28 -- nvmf/common.sh@545 -- # IFS=, 00:16:56.253 23:02:28 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:56.253 "params": { 00:16:56.253 "name": "Nvme0", 00:16:56.253 "trtype": "tcp", 00:16:56.253 "traddr": "10.0.0.2", 00:16:56.253 "adrfam": "ipv4", 00:16:56.253 "trsvcid": "4420", 00:16:56.253 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:56.253 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:56.253 "hdgst": false, 00:16:56.253 "ddgst": false 00:16:56.253 }, 00:16:56.253 "method": "bdev_nvme_attach_controller" 00:16:56.253 }' 00:16:56.253 [2024-07-24 23:02:28.542930] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:16:56.253 [2024-07-24 23:02:28.542981] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3185544 ] 00:16:56.253 EAL: No free 2048 kB hugepages reported on node 1 00:16:56.253 [2024-07-24 23:02:28.615384] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:56.253 [2024-07-24 23:02:28.651044] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:56.513 Running I/O for 10 seconds... 00:16:57.082 23:02:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:57.082 23:02:29 -- common/autotest_common.sh@852 -- # return 0 00:16:57.082 23:02:29 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:16:57.082 23:02:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:57.082 23:02:29 -- common/autotest_common.sh@10 -- # set +x 00:16:57.082 23:02:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:57.082 23:02:29 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:57.082 23:02:29 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:16:57.082 23:02:29 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:16:57.082 23:02:29 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:16:57.082 23:02:29 -- target/host_management.sh@52 -- # local ret=1 00:16:57.082 23:02:29 -- target/host_management.sh@53 -- # local i 00:16:57.082 23:02:29 -- target/host_management.sh@54 -- # (( i = 10 )) 00:16:57.082 23:02:29 -- target/host_management.sh@54 -- # (( i != 0 )) 00:16:57.082 23:02:29 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:16:57.082 23:02:29 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 
00:16:57.082 23:02:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:57.082 23:02:29 -- common/autotest_common.sh@10 -- # set +x 00:16:57.082 23:02:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:57.082 23:02:29 -- target/host_management.sh@55 -- # read_io_count=1886 00:16:57.082 23:02:29 -- target/host_management.sh@58 -- # '[' 1886 -ge 100 ']' 00:16:57.082 23:02:29 -- target/host_management.sh@59 -- # ret=0 00:16:57.082 23:02:29 -- target/host_management.sh@60 -- # break 00:16:57.082 23:02:29 -- target/host_management.sh@64 -- # return 0 00:16:57.082 23:02:29 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:16:57.082 23:02:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:57.082 23:02:29 -- common/autotest_common.sh@10 -- # set +x 00:16:57.082 [2024-07-24 23:02:29.414246] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2600860 is same with the state(5) to be set 00:16:57.082 [2024-07-24 23:02:29.414300] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2600860 is same with the state(5) to be set 00:16:57.082 [2024-07-24 23:02:29.414311] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2600860 is same with the state(5) to be set 00:16:57.083 [2024-07-24 23:02:29.414324] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2600860 is same with the state(5) to be set 00:16:57.083 [2024-07-24 23:02:29.414334] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2600860 is same with the state(5) to be set 00:16:57.083 [2024-07-24 23:02:29.414342] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2600860 is same with the state(5) to be set 00:16:57.083 [2024-07-24 23:02:29.414351] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2600860 is same with the state(5) to 
be set 00:16:57.083 [2024-07-24 23:02:29.414787] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2600860 is same with the state(5) to be set 00:16:57.083 [2024-07-24 23:02:29.414796] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2600860 is same with the state(5) to be set 00:16:57.083 [2024-07-24 23:02:29.414806] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2600860 is same with the state(5) to be set 00:16:57.083 [2024-07-24 23:02:29.414814] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2600860 is same with the state(5) to be set 00:16:57.083 [2024-07-24 23:02:29.414941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:3456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.083 [2024-07-24 23:02:29.414976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.083 [2024-07-24 23:02:29.414994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:3840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.083 [2024-07-24 23:02:29.415004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.083 [2024-07-24 23:02:29.415016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:3968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.083 [2024-07-24 23:02:29.415026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.083 [2024-07-24 23:02:29.415037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:129024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.083 [2024-07-24 23:02:29.415047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.083 [2024-07-24 23:02:29.415058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:129408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.083 [2024-07-24 23:02:29.415067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.083 [2024-07-24 23:02:29.415078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:4224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.083 [2024-07-24 23:02:29.415088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.083 [2024-07-24 23:02:29.415099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:129664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.083 [2024-07-24 23:02:29.415108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.083 [2024-07-24 23:02:29.415119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:4352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.083 [2024-07-24 23:02:29.415128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.083 [2024-07-24 23:02:29.415139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:4480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.083 [2024-07-24 23:02:29.415148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.083 [2024-07-24 23:02:29.415159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:4608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.083 
[2024-07-24 23:02:29.415169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.083 [2024-07-24 23:02:29.415179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:4736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.083 [2024-07-24 23:02:29.415188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.083 [2024-07-24 23:02:29.415199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:4864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.083 [2024-07-24 23:02:29.415212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.083 [2024-07-24 23:02:29.415223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:4992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.084 [2024-07-24 23:02:29.415232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.084 [2024-07-24 23:02:29.415244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.084 [2024-07-24 23:02:29.415254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.084 [2024-07-24 23:02:29.415265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:5248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.084 [2024-07-24 23:02:29.415275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.084 [2024-07-24 23:02:29.415285] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:130176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.084 [2024-07-24 23:02:29.415295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.084 [2024-07-24 23:02:29.415306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:5376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.084 [2024-07-24 23:02:29.415316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.084 [2024-07-24 23:02:29.415327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:130560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.084 [2024-07-24 23:02:29.415337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.084 [2024-07-24 23:02:29.415348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:5504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.084 [2024-07-24 23:02:29.415357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.084 [2024-07-24 23:02:29.415368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:5632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.084 [2024-07-24 23:02:29.415377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.084 [2024-07-24 23:02:29.415388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:5760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.084 [2024-07-24 23:02:29.415397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.084 [2024-07-24 23:02:29.415408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.084 [2024-07-24 23:02:29.415417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.084 [2024-07-24 23:02:29.415428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.084 [2024-07-24 23:02:29.415437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.084 [2024-07-24 23:02:29.415449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.084 [2024-07-24 23:02:29.415458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.084 [2024-07-24 23:02:29.415469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:5888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.084 [2024-07-24 23:02:29.415481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.084 [2024-07-24 23:02:29.415492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.084 [2024-07-24 23:02:29.415502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.084 [2024-07-24 23:02:29.415513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:6144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:57.084 [2024-07-24 23:02:29.415522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.084 [2024-07-24 23:02:29.415534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.084 [2024-07-24 23:02:29.415543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.084 [2024-07-24 23:02:29.415554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:6272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.084 [2024-07-24 23:02:29.415563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.084 [2024-07-24 23:02:29.415574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.084 [2024-07-24 23:02:29.415583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.084 [2024-07-24 23:02:29.415595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:1024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.084 [2024-07-24 23:02:29.415604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.084 [2024-07-24 23:02:29.415615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:6400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.084 [2024-07-24 23:02:29.415624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.084 [2024-07-24 23:02:29.415635] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:6528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.084 [2024-07-24 23:02:29.415646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.084 [2024-07-24 23:02:29.415657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:1536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.084 [2024-07-24 23:02:29.415666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.084 [2024-07-24 23:02:29.415677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.084 [2024-07-24 23:02:29.415686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.084 [2024-07-24 23:02:29.415697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.084 [2024-07-24 23:02:29.415707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.084 [2024-07-24 23:02:29.415722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:6784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.084 [2024-07-24 23:02:29.415733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.084 [2024-07-24 23:02:29.415745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:6912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.084 [2024-07-24 23:02:29.415754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.084 [2024-07-24 23:02:29.415765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:7040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.084 [2024-07-24 23:02:29.415774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.084 [2024-07-24 23:02:29.415785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:7168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.084 [2024-07-24 23:02:29.415797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.084 [2024-07-24 23:02:29.415808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.084 [2024-07-24 23:02:29.415817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.084 [2024-07-24 23:02:29.415828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.084 [2024-07-24 23:02:29.415838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.084 [2024-07-24 23:02:29.415849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:7552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.084 [2024-07-24 23:02:29.415859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.084 [2024-07-24 23:02:29.415870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:7680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:16:57.084 [2024-07-24 23:02:29.415880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.084 [2024-07-24 23:02:29.415890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.084 [2024-07-24 23:02:29.415899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.084 [2024-07-24 23:02:29.415911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:7936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.084 [2024-07-24 23:02:29.415920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.084 [2024-07-24 23:02:29.415931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:1792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.084 [2024-07-24 23:02:29.415940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.084 [2024-07-24 23:02:29.415951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:2048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.084 [2024-07-24 23:02:29.415960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.084 [2024-07-24 23:02:29.415971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.084 [2024-07-24 23:02:29.415981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.084 [2024-07-24 23:02:29.415992] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.084 [2024-07-24 23:02:29.416003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.084 [2024-07-24 23:02:29.416014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.084 [2024-07-24 23:02:29.416024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.084 [2024-07-24 23:02:29.416035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:2304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.084 [2024-07-24 23:02:29.416044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.085 [2024-07-24 23:02:29.416055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:2560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.085 [2024-07-24 23:02:29.416065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.085 [2024-07-24 23:02:29.416075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.085 [2024-07-24 23:02:29.416085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.085 [2024-07-24 23:02:29.416096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:2816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.085 [2024-07-24 23:02:29.416105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.085 [2024-07-24 23:02:29.416116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.085 [2024-07-24 23:02:29.416126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.085 [2024-07-24 23:02:29.416137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.085 [2024-07-24 23:02:29.416147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.085 [2024-07-24 23:02:29.416158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.085 [2024-07-24 23:02:29.416167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.085 [2024-07-24 23:02:29.416178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.085 [2024-07-24 23:02:29.416187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.085 [2024-07-24 23:02:29.416198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.085 [2024-07-24 23:02:29.416208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.085 [2024-07-24 23:02:29.416218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:16:57.085 [2024-07-24 23:02:29.416229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.085 [2024-07-24 23:02:29.416241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.085 [2024-07-24 23:02:29.416250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.085 [2024-07-24 23:02:29.416263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.085 [2024-07-24 23:02:29.416272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.085 [2024-07-24 23:02:29.416283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.085 [2024-07-24 23:02:29.416293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.085 [2024-07-24 23:02:29.416303] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x192ed40 is same with the state(5) to be set 00:16:57.085 [2024-07-24 23:02:29.416358] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x192ed40 was disconnected and freed. reset controller. 
00:16:57.085 [2024-07-24 23:02:29.416400] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:57.085 [2024-07-24 23:02:29.416412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.085 [2024-07-24 23:02:29.416422] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:57.085 [2024-07-24 23:02:29.416432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.085 [2024-07-24 23:02:29.416442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:57.085 [2024-07-24 23:02:29.416452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.085 [2024-07-24 23:02:29.416461] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:57.085 [2024-07-24 23:02:29.416471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.085 [2024-07-24 23:02:29.416481] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1931090 is same with the state(5) to be set 00:16:57.085 [2024-07-24 23:02:29.417333] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:16:57.085 task offset: 3456 on job bdev=Nvme0n1 fails 00:16:57.085 00:16:57.085 Latency(us) 00:16:57.085 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:57.085 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:57.085 Job: Nvme0n1 ended 
in about 0.58 seconds with error 00:16:57.085 Verification LBA range: start 0x0 length 0x400 00:16:57.085 Nvme0n1 : 0.58 3570.10 223.13 110.92 0.00 17164.18 3198.16 30828.13 00:16:57.085 =================================================================================================================== 00:16:57.085 Total : 3570.10 223.13 110.92 0.00 17164.18 3198.16 30828.13 00:16:57.085 [2024-07-24 23:02:29.418824] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:57.085 [2024-07-24 23:02:29.418841] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1931090 (9): Bad file descriptor 00:16:57.085 23:02:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:57.085 23:02:29 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:16:57.085 23:02:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:57.085 23:02:29 -- common/autotest_common.sh@10 -- # set +x 00:16:57.085 [2024-07-24 23:02:29.425675] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:16:57.085 [2024-07-24 23:02:29.425778] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:16:57.085 [2024-07-24 23:02:29.425804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.085 [2024-07-24 23:02:29.425820] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:16:57.085 [2024-07-24 23:02:29.425830] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:16:57.085 [2024-07-24 23:02:29.425841] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to 
poll NVMe-oF Fabric CONNECT command 00:16:57.085 [2024-07-24 23:02:29.425850] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1931090 00:16:57.085 [2024-07-24 23:02:29.425873] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1931090 (9): Bad file descriptor 00:16:57.085 [2024-07-24 23:02:29.425888] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:16:57.085 [2024-07-24 23:02:29.425899] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:16:57.085 [2024-07-24 23:02:29.425910] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:16:57.085 [2024-07-24 23:02:29.425924] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:57.085 23:02:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:57.085 23:02:29 -- target/host_management.sh@87 -- # sleep 1 00:16:58.060 23:02:30 -- target/host_management.sh@91 -- # kill -9 3185544 00:16:58.060 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3185544) - No such process 00:16:58.060 23:02:30 -- target/host_management.sh@91 -- # true 00:16:58.060 23:02:30 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:16:58.060 23:02:30 -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:16:58.060 23:02:30 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:16:58.060 23:02:30 -- nvmf/common.sh@520 -- # config=() 00:16:58.060 23:02:30 -- nvmf/common.sh@520 -- # local subsystem config 00:16:58.060 23:02:30 -- nvmf/common.sh@522 -- # for subsystem in 
"${@:-1}" 00:16:58.060 23:02:30 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:58.060 { 00:16:58.060 "params": { 00:16:58.060 "name": "Nvme$subsystem", 00:16:58.060 "trtype": "$TEST_TRANSPORT", 00:16:58.060 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:58.060 "adrfam": "ipv4", 00:16:58.060 "trsvcid": "$NVMF_PORT", 00:16:58.060 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:58.060 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:58.060 "hdgst": ${hdgst:-false}, 00:16:58.060 "ddgst": ${ddgst:-false} 00:16:58.060 }, 00:16:58.060 "method": "bdev_nvme_attach_controller" 00:16:58.060 } 00:16:58.060 EOF 00:16:58.060 )") 00:16:58.060 23:02:30 -- nvmf/common.sh@542 -- # cat 00:16:58.060 23:02:30 -- nvmf/common.sh@544 -- # jq . 00:16:58.060 23:02:30 -- nvmf/common.sh@545 -- # IFS=, 00:16:58.060 23:02:30 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:58.060 "params": { 00:16:58.060 "name": "Nvme0", 00:16:58.060 "trtype": "tcp", 00:16:58.060 "traddr": "10.0.0.2", 00:16:58.060 "adrfam": "ipv4", 00:16:58.060 "trsvcid": "4420", 00:16:58.061 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:58.061 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:58.061 "hdgst": false, 00:16:58.061 "ddgst": false 00:16:58.061 }, 00:16:58.061 "method": "bdev_nvme_attach_controller" 00:16:58.061 }' 00:16:58.061 [2024-07-24 23:02:30.485036] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:16:58.061 [2024-07-24 23:02:30.485089] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3185834 ] 00:16:58.320 EAL: No free 2048 kB hugepages reported on node 1 00:16:58.320 [2024-07-24 23:02:30.556307] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:58.320 [2024-07-24 23:02:30.592881] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:58.580 Running I/O for 1 seconds... 00:16:59.517 00:16:59.517 Latency(us) 00:16:59.517 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:59.517 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:59.517 Verification LBA range: start 0x0 length 0x400 00:16:59.517 Nvme0n1 : 1.01 3242.91 202.68 0.00 0.00 19506.90 815.92 32715.57 00:16:59.517 =================================================================================================================== 00:16:59.517 Total : 3242.91 202.68 0.00 0.00 19506.90 815.92 32715.57 00:16:59.777 23:02:31 -- target/host_management.sh@101 -- # stoptarget 00:16:59.777 23:02:31 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:16:59.777 23:02:31 -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:16:59.777 23:02:31 -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:59.777 23:02:31 -- target/host_management.sh@40 -- # nvmftestfini 00:16:59.777 23:02:31 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:59.777 23:02:31 -- nvmf/common.sh@116 -- # sync 00:16:59.777 23:02:31 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:59.777 23:02:31 -- nvmf/common.sh@119 -- # set +e 00:16:59.777 23:02:31 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:59.777 23:02:31 -- 
nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:59.777 rmmod nvme_tcp 00:16:59.777 rmmod nvme_fabrics 00:16:59.777 rmmod nvme_keyring 00:16:59.777 23:02:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:59.777 23:02:32 -- nvmf/common.sh@123 -- # set -e 00:16:59.777 23:02:32 -- nvmf/common.sh@124 -- # return 0 00:16:59.777 23:02:32 -- nvmf/common.sh@477 -- # '[' -n 3185241 ']' 00:16:59.777 23:02:32 -- nvmf/common.sh@478 -- # killprocess 3185241 00:16:59.777 23:02:32 -- common/autotest_common.sh@926 -- # '[' -z 3185241 ']' 00:16:59.777 23:02:32 -- common/autotest_common.sh@930 -- # kill -0 3185241 00:16:59.777 23:02:32 -- common/autotest_common.sh@931 -- # uname 00:16:59.777 23:02:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:59.777 23:02:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3185241 00:16:59.777 23:02:32 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:16:59.777 23:02:32 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:16:59.777 23:02:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3185241' 00:16:59.777 killing process with pid 3185241 00:16:59.777 23:02:32 -- common/autotest_common.sh@945 -- # kill 3185241 00:16:59.777 23:02:32 -- common/autotest_common.sh@950 -- # wait 3185241 00:17:00.037 [2024-07-24 23:02:32.265956] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:17:00.037 23:02:32 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:00.037 23:02:32 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:00.037 23:02:32 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:00.037 23:02:32 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:00.037 23:02:32 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:00.037 23:02:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:00.037 23:02:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
00:17:00.037 23:02:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:01.943 23:02:34 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:17:01.943 00:17:01.944 real 0m6.913s 00:17:01.944 user 0m20.689s 00:17:01.944 sys 0m1.495s 00:17:01.944 23:02:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:01.944 23:02:34 -- common/autotest_common.sh@10 -- # set +x 00:17:01.944 ************************************ 00:17:01.944 END TEST nvmf_host_management 00:17:01.944 ************************************ 00:17:02.202 23:02:34 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:17:02.202 00:17:02.202 real 0m13.904s 00:17:02.202 user 0m22.557s 00:17:02.202 sys 0m6.675s 00:17:02.202 23:02:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:02.202 23:02:34 -- common/autotest_common.sh@10 -- # set +x 00:17:02.202 ************************************ 00:17:02.202 END TEST nvmf_host_management 00:17:02.202 ************************************ 00:17:02.202 23:02:34 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:17:02.202 23:02:34 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:02.202 23:02:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:02.202 23:02:34 -- common/autotest_common.sh@10 -- # set +x 00:17:02.202 ************************************ 00:17:02.202 START TEST nvmf_lvol 00:17:02.202 ************************************ 00:17:02.202 23:02:34 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:17:02.202 * Looking for test storage... 
00:17:02.202 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:02.202 23:02:34 -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:02.202 23:02:34 -- nvmf/common.sh@7 -- # uname -s 00:17:02.202 23:02:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:02.202 23:02:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:02.202 23:02:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:02.202 23:02:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:02.202 23:02:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:02.202 23:02:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:02.202 23:02:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:02.202 23:02:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:02.203 23:02:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:02.203 23:02:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:02.203 23:02:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:02.203 23:02:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:17:02.203 23:02:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:02.203 23:02:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:02.203 23:02:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:02.203 23:02:34 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:02.203 23:02:34 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:02.203 23:02:34 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:02.203 23:02:34 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:02.203 23:02:34 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.203 23:02:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.203 23:02:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.203 23:02:34 -- paths/export.sh@5 -- # export PATH 00:17:02.203 23:02:34 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.203 23:02:34 -- nvmf/common.sh@46 -- # : 0 00:17:02.203 23:02:34 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:02.203 23:02:34 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:02.203 23:02:34 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:02.203 23:02:34 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:02.203 23:02:34 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:02.203 23:02:34 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:02.203 23:02:34 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:02.203 23:02:34 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:02.203 23:02:34 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:02.203 23:02:34 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:02.203 23:02:34 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:17:02.203 23:02:34 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:17:02.203 23:02:34 -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:02.203 23:02:34 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:17:02.203 23:02:34 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:02.203 23:02:34 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:02.203 23:02:34 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:02.203 23:02:34 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:02.203 23:02:34 -- nvmf/common.sh@400 -- # remove_spdk_ns 
00:17:02.203 23:02:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:02.203 23:02:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:02.203 23:02:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:02.203 23:02:34 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:02.203 23:02:34 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:02.203 23:02:34 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:02.203 23:02:34 -- common/autotest_common.sh@10 -- # set +x 00:17:08.777 23:02:40 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:08.777 23:02:40 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:08.777 23:02:40 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:08.777 23:02:40 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:08.777 23:02:40 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:08.777 23:02:40 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:08.777 23:02:40 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:08.777 23:02:40 -- nvmf/common.sh@294 -- # net_devs=() 00:17:08.777 23:02:40 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:08.777 23:02:40 -- nvmf/common.sh@295 -- # e810=() 00:17:08.777 23:02:40 -- nvmf/common.sh@295 -- # local -ga e810 00:17:08.777 23:02:40 -- nvmf/common.sh@296 -- # x722=() 00:17:08.777 23:02:40 -- nvmf/common.sh@296 -- # local -ga x722 00:17:08.777 23:02:40 -- nvmf/common.sh@297 -- # mlx=() 00:17:08.777 23:02:40 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:08.777 23:02:40 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:08.777 23:02:40 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:08.777 23:02:40 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:08.777 23:02:40 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:08.777 23:02:40 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:08.777 23:02:40 -- 
nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:08.777 23:02:40 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:08.777 23:02:40 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:08.777 23:02:40 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:08.777 23:02:40 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:08.777 23:02:40 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:08.777 23:02:40 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:08.777 23:02:40 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:17:08.777 23:02:40 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:17:08.777 23:02:40 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:17:08.777 23:02:40 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:17:08.777 23:02:40 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:08.777 23:02:40 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:08.777 23:02:40 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:08.777 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:08.777 23:02:40 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:08.777 23:02:40 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:08.777 23:02:40 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:08.777 23:02:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:08.777 23:02:40 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:08.777 23:02:40 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:08.777 23:02:40 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:08.777 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:08.777 23:02:40 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:08.777 23:02:40 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:08.777 23:02:40 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:08.777 23:02:40 -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:08.777 23:02:40 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:08.777 23:02:40 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:08.777 23:02:40 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:17:08.777 23:02:40 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:17:08.777 23:02:40 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:08.777 23:02:40 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:08.777 23:02:40 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:08.777 23:02:40 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:08.777 23:02:40 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:08.777 Found net devices under 0000:af:00.0: cvl_0_0 00:17:08.778 23:02:40 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:08.778 23:02:40 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:08.778 23:02:40 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:08.778 23:02:40 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:08.778 23:02:40 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:08.778 23:02:40 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:08.778 Found net devices under 0000:af:00.1: cvl_0_1 00:17:08.778 23:02:40 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:08.778 23:02:40 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:08.778 23:02:40 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:08.778 23:02:40 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:08.778 23:02:40 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:17:08.778 23:02:40 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:17:08.778 23:02:40 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:08.778 23:02:40 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:08.778 23:02:40 -- nvmf/common.sh@230 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:08.778 23:02:40 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:17:08.778 23:02:40 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:08.778 23:02:40 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:08.778 23:02:40 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:17:08.778 23:02:40 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:08.778 23:02:40 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:08.778 23:02:40 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:17:08.778 23:02:40 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:17:08.778 23:02:40 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:17:08.778 23:02:40 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:08.778 23:02:40 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:08.778 23:02:40 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:08.778 23:02:40 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:17:08.778 23:02:40 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:08.778 23:02:40 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:08.778 23:02:40 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:08.778 23:02:40 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:17:08.778 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:08.778 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:17:08.778 00:17:08.778 --- 10.0.0.2 ping statistics --- 00:17:08.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:08.778 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:17:08.778 23:02:40 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:08.778 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:08.778 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.241 ms 00:17:08.778 00:17:08.778 --- 10.0.0.1 ping statistics --- 00:17:08.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:08.778 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:17:08.778 23:02:40 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:08.778 23:02:40 -- nvmf/common.sh@410 -- # return 0 00:17:08.778 23:02:40 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:08.778 23:02:40 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:08.778 23:02:40 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:08.778 23:02:40 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:08.778 23:02:40 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:08.778 23:02:40 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:08.778 23:02:40 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:08.778 23:02:40 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:17:08.778 23:02:40 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:08.778 23:02:40 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:08.778 23:02:40 -- common/autotest_common.sh@10 -- # set +x 00:17:08.778 23:02:40 -- nvmf/common.sh@469 -- # nvmfpid=3189642 00:17:08.778 23:02:40 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:17:08.778 23:02:40 -- nvmf/common.sh@470 -- # waitforlisten 3189642 00:17:08.778 23:02:40 -- common/autotest_common.sh@819 -- # '[' -z 3189642 ']' 00:17:08.778 23:02:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:08.778 23:02:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:08.778 23:02:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:08.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:08.778 23:02:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:08.778 23:02:40 -- common/autotest_common.sh@10 -- # set +x 00:17:08.778 [2024-07-24 23:02:40.864677] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:17:08.778 [2024-07-24 23:02:40.864730] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:08.778 EAL: No free 2048 kB hugepages reported on node 1 00:17:08.778 [2024-07-24 23:02:40.940170] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:08.778 [2024-07-24 23:02:40.976226] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:08.778 [2024-07-24 23:02:40.976340] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:08.778 [2024-07-24 23:02:40.976350] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:08.778 [2024-07-24 23:02:40.976359] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:08.778 [2024-07-24 23:02:40.976412] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:08.778 [2024-07-24 23:02:40.976435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:08.778 [2024-07-24 23:02:40.976437] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:09.347 23:02:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:09.347 23:02:41 -- common/autotest_common.sh@852 -- # return 0 00:17:09.347 23:02:41 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:09.347 23:02:41 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:09.347 23:02:41 -- common/autotest_common.sh@10 -- # set +x 00:17:09.347 23:02:41 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:09.347 23:02:41 -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:09.605 [2024-07-24 23:02:41.844079] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:09.605 23:02:41 -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:09.863 23:02:42 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:17:09.863 23:02:42 -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:09.863 23:02:42 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:17:09.863 23:02:42 -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:17:10.122 23:02:42 -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:17:10.380 23:02:42 -- target/nvmf_lvol.sh@29 -- # lvs=9c270831-d545-4bc6-8d24-ecdd43b756e4 00:17:10.380 23:02:42 -- target/nvmf_lvol.sh@32 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 9c270831-d545-4bc6-8d24-ecdd43b756e4 lvol 20 00:17:10.380 23:02:42 -- target/nvmf_lvol.sh@32 -- # lvol=ff8b963d-eec9-4736-b74b-8030e6c3d558 00:17:10.380 23:02:42 -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:10.642 23:02:42 -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ff8b963d-eec9-4736-b74b-8030e6c3d558 00:17:10.899 23:02:43 -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:10.899 [2024-07-24 23:02:43.251020] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:10.899 23:02:43 -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:11.158 23:02:43 -- target/nvmf_lvol.sh@42 -- # perf_pid=3190131 00:17:11.158 23:02:43 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:17:11.159 23:02:43 -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:17:11.159 EAL: No free 2048 kB hugepages reported on node 1 00:17:12.091 23:02:44 -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot ff8b963d-eec9-4736-b74b-8030e6c3d558 MY_SNAPSHOT 00:17:12.350 23:02:44 -- target/nvmf_lvol.sh@47 -- # snapshot=7e9796ad-ea9a-426d-b11a-9bd6d364d285 00:17:12.350 23:02:44 -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 
ff8b963d-eec9-4736-b74b-8030e6c3d558 30 00:17:12.608 23:02:44 -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 7e9796ad-ea9a-426d-b11a-9bd6d364d285 MY_CLONE 00:17:12.866 23:02:45 -- target/nvmf_lvol.sh@49 -- # clone=2d9babf7-4ad3-43ac-9ecc-305b403632cf 00:17:12.866 23:02:45 -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 2d9babf7-4ad3-43ac-9ecc-305b403632cf 00:17:13.125 23:02:45 -- target/nvmf_lvol.sh@53 -- # wait 3190131 00:17:23.169 Initializing NVMe Controllers 00:17:23.169 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:17:23.169 Controller IO queue size 128, less than required. 00:17:23.169 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:23.169 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:17:23.169 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:17:23.169 Initialization complete. Launching workers. 
00:17:23.169 ======================================================== 00:17:23.169 Latency(us) 00:17:23.169 Device Information : IOPS MiB/s Average min max 00:17:23.169 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12845.60 50.18 9966.68 1835.82 78900.56 00:17:23.169 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12555.00 49.04 10195.92 3559.21 45886.09 00:17:23.169 ======================================================== 00:17:23.169 Total : 25400.59 99.22 10079.99 1835.82 78900.56 00:17:23.169 00:17:23.169 23:02:53 -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:23.169 23:02:53 -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ff8b963d-eec9-4736-b74b-8030e6c3d558 00:17:23.169 23:02:54 -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9c270831-d545-4bc6-8d24-ecdd43b756e4 00:17:23.169 23:02:54 -- target/nvmf_lvol.sh@60 -- # rm -f 00:17:23.169 23:02:54 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:17:23.169 23:02:54 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:17:23.169 23:02:54 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:23.169 23:02:54 -- nvmf/common.sh@116 -- # sync 00:17:23.169 23:02:54 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:23.169 23:02:54 -- nvmf/common.sh@119 -- # set +e 00:17:23.169 23:02:54 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:23.169 23:02:54 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:23.169 rmmod nvme_tcp 00:17:23.169 rmmod nvme_fabrics 00:17:23.169 rmmod nvme_keyring 00:17:23.169 23:02:54 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:23.169 23:02:54 -- nvmf/common.sh@123 -- # set -e 00:17:23.169 23:02:54 -- nvmf/common.sh@124 -- # return 0 00:17:23.169 23:02:54 -- nvmf/common.sh@477 -- # '[' -n 
3189642 ']' 00:17:23.169 23:02:54 -- nvmf/common.sh@478 -- # killprocess 3189642 00:17:23.169 23:02:54 -- common/autotest_common.sh@926 -- # '[' -z 3189642 ']' 00:17:23.169 23:02:54 -- common/autotest_common.sh@930 -- # kill -0 3189642 00:17:23.169 23:02:54 -- common/autotest_common.sh@931 -- # uname 00:17:23.169 23:02:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:23.169 23:02:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3189642 00:17:23.169 23:02:54 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:23.169 23:02:54 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:23.170 23:02:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3189642' 00:17:23.170 killing process with pid 3189642 00:17:23.170 23:02:54 -- common/autotest_common.sh@945 -- # kill 3189642 00:17:23.170 23:02:54 -- common/autotest_common.sh@950 -- # wait 3189642 00:17:23.170 23:02:54 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:23.170 23:02:54 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:23.170 23:02:54 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:23.170 23:02:54 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:23.170 23:02:54 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:23.170 23:02:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:23.170 23:02:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:23.170 23:02:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:24.548 23:02:56 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:17:24.548 00:17:24.548 real 0m22.255s 00:17:24.548 user 1m1.544s 00:17:24.548 sys 0m9.480s 00:17:24.548 23:02:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:24.548 23:02:56 -- common/autotest_common.sh@10 -- # set +x 00:17:24.548 ************************************ 00:17:24.548 END TEST nvmf_lvol 00:17:24.548 ************************************ 
00:17:24.548 23:02:56 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp
00:17:24.548 23:02:56 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']'
00:17:24.548 23:02:56 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:17:24.548 23:02:56 -- common/autotest_common.sh@10 -- # set +x
00:17:24.548 ************************************
00:17:24.548 START TEST nvmf_lvs_grow
00:17:24.548 ************************************
00:17:24.548 23:02:56 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp
00:17:24.548 * Looking for test storage...
00:17:24.548 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:17:24.548 23:02:56 -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:17:24.548 23:02:56 -- nvmf/common.sh@7 -- # uname -s
00:17:24.548 23:02:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:17:24.548 23:02:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:17:24.548 23:02:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:17:24.548 23:02:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:17:24.548 23:02:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:17:24.548 23:02:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:17:24.548 23:02:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:17:24.548 23:02:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:17:24.548 23:02:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:17:24.548 23:02:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:17:24.548 23:02:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e
00:17:24.548 23:02:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e
00:17:24.548 23:02:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:17:24.548 23:02:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:17:24.548 23:02:56 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:17:24.548 23:02:56 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:17:24.548 23:02:56 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:17:24.548 23:02:56 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:17:24.548 23:02:56 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:17:24.548 23:02:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:24.548 23:02:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:24.549 23:02:56 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:24.549 23:02:56 -- paths/export.sh@5 -- # export PATH
00:17:24.549 23:02:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:24.549 23:02:56 -- nvmf/common.sh@46 -- # : 0
00:17:24.549 23:02:56 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:17:24.549 23:02:56 -- nvmf/common.sh@48 -- # build_nvmf_app_args
00:17:24.549 23:02:56 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:17:24.549 23:02:56 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:17:24.549 23:02:56 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:17:24.549 23:02:56 -- nvmf/common.sh@32 -- # '[' -n '' ']'
00:17:24.549 23:02:56 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:17:24.549 23:02:56 -- nvmf/common.sh@50 -- # have_pci_nics=0
00:17:24.549 23:02:56 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:17:24.549 23:02:56 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:17:24.549 23:02:56 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit
00:17:24.549 23:02:56 -- nvmf/common.sh@429 -- # '[' -z tcp ']'
00:17:24.549 23:02:56 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:17:24.549 23:02:56 -- nvmf/common.sh@436 -- # prepare_net_devs
00:17:24.549 23:02:56 -- nvmf/common.sh@398 -- # local -g is_hw=no
00:17:24.549 23:02:56 -- nvmf/common.sh@400 -- # remove_spdk_ns
00:17:24.549 23:02:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:17:24.549 23:02:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:17:24.549 23:02:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:17:24.549 23:02:56 -- nvmf/common.sh@402 -- # [[ phy != virt ]]
00:17:24.549 23:02:56 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs
00:17:24.549 23:02:56 -- nvmf/common.sh@284 -- # xtrace_disable
00:17:24.549 23:02:56 -- common/autotest_common.sh@10 -- # set +x
00:17:31.122 23:03:03 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci
00:17:31.122 23:03:03 -- nvmf/common.sh@290 -- # pci_devs=()
00:17:31.122 23:03:03 -- nvmf/common.sh@290 -- # local -a pci_devs
00:17:31.122 23:03:03 -- nvmf/common.sh@291 -- # pci_net_devs=()
00:17:31.122 23:03:03 -- nvmf/common.sh@291 -- # local -a pci_net_devs
00:17:31.122 23:03:03 -- nvmf/common.sh@292 -- # pci_drivers=()
00:17:31.122 23:03:03 -- nvmf/common.sh@292 -- # local -A pci_drivers
00:17:31.122 23:03:03 -- nvmf/common.sh@294 -- # net_devs=()
00:17:31.122 23:03:03 -- nvmf/common.sh@294 -- # local -ga net_devs
00:17:31.122 23:03:03 -- nvmf/common.sh@295 -- # e810=()
00:17:31.122 23:03:03 -- nvmf/common.sh@295 -- # local -ga e810
00:17:31.122 23:03:03 -- nvmf/common.sh@296 -- # x722=()
00:17:31.122 23:03:03 -- nvmf/common.sh@296 -- # local -ga x722
00:17:31.122 23:03:03 -- nvmf/common.sh@297 -- # mlx=()
00:17:31.122 23:03:03 -- nvmf/common.sh@297 -- # local -ga mlx
00:17:31.122 23:03:03 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:17:31.122 23:03:03 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:17:31.122 23:03:03 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:17:31.122 23:03:03 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:17:31.122 23:03:03 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:17:31.122 23:03:03 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:17:31.122 23:03:03 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:17:31.122 23:03:03 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:17:31.122 23:03:03 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:17:31.122 23:03:03 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:17:31.122 23:03:03 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:17:31.122 23:03:03 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}")
00:17:31.122 23:03:03 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]]
00:17:31.122 23:03:03 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]]
00:17:31.122 23:03:03 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]]
00:17:31.122 23:03:03 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}")
00:17:31.122 23:03:03 -- nvmf/common.sh@334 -- # (( 2 == 0 ))
00:17:31.122 23:03:03 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}"
00:17:31.122 23:03:03 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:17:31.122 Found 0000:af:00.0 (0x8086 - 0x159b)
00:17:31.122 23:03:03 -- nvmf/common.sh@341 -- # [[ ice == unknown ]]
00:17:31.122 23:03:03 -- nvmf/common.sh@345 -- # [[ ice == unbound ]]
00:17:31.122 23:03:03 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:17:31.122 23:03:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:17:31.122 23:03:03 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]]
00:17:31.122 23:03:03 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}"
00:17:31.122 23:03:03 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:17:31.122 Found 0000:af:00.1 (0x8086 - 0x159b)
00:17:31.122 23:03:03 -- nvmf/common.sh@341 -- # [[ ice == unknown ]]
00:17:31.122 23:03:03 -- nvmf/common.sh@345 -- # [[ ice == unbound ]]
00:17:31.122 23:03:03 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:17:31.122 23:03:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:17:31.122 23:03:03 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]]
00:17:31.122 23:03:03 -- nvmf/common.sh@365 -- # (( 0 > 0 ))
00:17:31.122 23:03:03 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]]
00:17:31.122 23:03:03 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]]
00:17:31.122 23:03:03 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}"
00:17:31.122 23:03:03 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:17:31.122 23:03:03 -- nvmf/common.sh@383 -- # (( 1 == 0 ))
00:17:31.122 23:03:03 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:17:31.122 23:03:03 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:17:31.122 Found net devices under 0000:af:00.0: cvl_0_0
00:17:31.122 23:03:03 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}")
00:17:31.122 23:03:03 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}"
00:17:31.122 23:03:03 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:17:31.122 23:03:03 -- nvmf/common.sh@383 -- # (( 1 == 0 ))
00:17:31.122 23:03:03 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:17:31.122 23:03:03 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:17:31.122 Found net devices under 0000:af:00.1: cvl_0_1
00:17:31.122 23:03:03 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}")
00:17:31.122 23:03:03 -- nvmf/common.sh@392 -- # (( 2 == 0 ))
00:17:31.122 23:03:03 -- nvmf/common.sh@402 -- # is_hw=yes
00:17:31.122 23:03:03 -- nvmf/common.sh@404 -- # [[ yes == yes ]]
00:17:31.122 23:03:03 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]]
00:17:31.122 23:03:03 -- nvmf/common.sh@406 -- # nvmf_tcp_init
00:17:31.122 23:03:03 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1
00:17:31.122 23:03:03 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:17:31.122 23:03:03 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:17:31.122 23:03:03 -- nvmf/common.sh@233 -- # (( 2 > 1 ))
00:17:31.122 23:03:03 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:17:31.122 23:03:03 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:17:31.122 23:03:03 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP=
00:17:31.122 23:03:03 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:17:31.122 23:03:03 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:17:31.122 23:03:03 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0
00:17:31.122 23:03:03 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1
00:17:31.122 23:03:03 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk
00:17:31.122 23:03:03 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:17:31.122 23:03:03 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:17:31.122 23:03:03 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:17:31.122 23:03:03 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up
00:17:31.122 23:03:03 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:17:31.381 23:03:03 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:17:31.381 23:03:03 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:17:31.381 23:03:03 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2
00:17:31.381 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:17:31.381 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.301 ms
00:17:31.381 
00:17:31.381 --- 10.0.0.2 ping statistics ---
00:17:31.381 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:31.381 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms
00:17:31.381 23:03:03 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:17:31.381 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:17:31.381 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms
00:17:31.381 
00:17:31.381 --- 10.0.0.1 ping statistics ---
00:17:31.381 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:31.381 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms
00:17:31.381 23:03:03 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:17:31.381 23:03:03 -- nvmf/common.sh@410 -- # return 0
00:17:31.381 23:03:03 -- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:17:31.381 23:03:03 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:17:31.381 23:03:03 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:17:31.381 23:03:03 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:17:31.381 23:03:03 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:17:31.381 23:03:03 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:17:31.381 23:03:03 -- nvmf/common.sh@462 -- # modprobe nvme-tcp
00:17:31.381 23:03:03 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1
00:17:31.381 23:03:03 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:17:31.381 23:03:03 -- common/autotest_common.sh@712 -- # xtrace_disable
00:17:31.381 23:03:03 -- common/autotest_common.sh@10 -- # set +x
00:17:31.381 23:03:03 -- nvmf/common.sh@469 -- # nvmfpid=3195831
00:17:31.381 23:03:03 -- nvmf/common.sh@470 -- # waitforlisten 3195831
00:17:31.381 23:03:03 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1
00:17:31.381 23:03:03 -- common/autotest_common.sh@819 -- # '[' -z 3195831 ']'
00:17:31.381 23:03:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:31.381 23:03:03 -- common/autotest_common.sh@824 -- # local max_retries=100
00:17:31.381 23:03:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:17:31.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:17:31.382 23:03:03 -- common/autotest_common.sh@828 -- # xtrace_disable
00:17:31.382 23:03:03 -- common/autotest_common.sh@10 -- # set +x
00:17:31.382 [2024-07-24 23:03:03.744244] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:17:31.382 [2024-07-24 23:03:03.744293] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:17:31.641 EAL: No free 2048 kB hugepages reported on node 1
00:17:31.641 [2024-07-24 23:03:03.821172] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:31.641 [2024-07-24 23:03:03.856797] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:17:31.641 [2024-07-24 23:03:03.856923] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:17:31.641 [2024-07-24 23:03:03.856933] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:17:31.641 [2024-07-24 23:03:03.856942] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:17:31.641 [2024-07-24 23:03:03.856962] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:17:32.209 23:03:04 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:17:32.209 23:03:04 -- common/autotest_common.sh@852 -- # return 0
00:17:32.209 23:03:04 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:17:32.209 23:03:04 -- common/autotest_common.sh@718 -- # xtrace_disable
00:17:32.209 23:03:04 -- common/autotest_common.sh@10 -- # set +x
00:17:32.209 23:03:04 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:17:32.210 23:03:04 -- target/nvmf_lvs_grow.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:17:32.469 [2024-07-24 23:03:04.731802] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:17:32.469 23:03:04 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow
00:17:32.469 23:03:04 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:17:32.469 23:03:04 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:17:32.469 23:03:04 -- common/autotest_common.sh@10 -- # set +x
00:17:32.469 ************************************
00:17:32.469 START TEST lvs_grow_clean
00:17:32.469 ************************************
00:17:32.469 23:03:04 -- common/autotest_common.sh@1104 -- # lvs_grow
00:17:32.469 23:03:04 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol
00:17:32.469 23:03:04 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters
00:17:32.469 23:03:04 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid
00:17:32.469 23:03:04 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200
00:17:32.469 23:03:04 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400
00:17:32.469 23:03:04 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150
00:17:32.469 23:03:04 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:17:32.469 23:03:04 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:17:32.469 23:03:04 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:17:32.728 23:03:04 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev
00:17:32.728 23:03:04 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
00:17:32.728 23:03:05 -- target/nvmf_lvs_grow.sh@28 -- # lvs=dd1f885b-40f4-49ad-98c7-4d597ddee779
00:17:32.728 23:03:05 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dd1f885b-40f4-49ad-98c7-4d597ddee779
00:17:32.728 23:03:05 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters'
00:17:32.988 23:03:05 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49
00:17:32.988 23:03:05 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 ))
00:17:32.988 23:03:05 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u dd1f885b-40f4-49ad-98c7-4d597ddee779 lvol 150
00:17:33.247 23:03:05 -- target/nvmf_lvs_grow.sh@33 -- # lvol=49f8c52c-d634-4c10-8d7a-b33568b45c25
00:17:33.247 23:03:05 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:17:33.247 23:03:05 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev
00:17:33.247 [2024-07-24 23:03:05.602318] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400
00:17:33.248 [2024-07-24 23:03:05.602371] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1
00:17:33.248 true
00:17:33.248 23:03:05 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dd1f885b-40f4-49ad-98c7-4d597ddee779
00:17:33.248 23:03:05 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters'
00:17:33.507 23:03:05 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 ))
00:17:33.507 23:03:05 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
00:17:33.765 23:03:05 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 49f8c52c-d634-4c10-8d7a-b33568b45c25
00:17:33.765 23:03:06 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:17:34.024 [2024-07-24 23:03:06.280365] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:17:34.024 23:03:06 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:17:34.284 23:03:06 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3196821
00:17:34.284 23:03:06 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:17:34.284 23:03:06 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z
00:17:34.284 23:03:06 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3196821 /var/tmp/bdevperf.sock
00:17:34.284 23:03:06 -- common/autotest_common.sh@819 -- # '[' -z 3196821 ']'
00:17:34.284 23:03:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:17:34.284 23:03:06 -- common/autotest_common.sh@824 -- # local max_retries=100
00:17:34.284 23:03:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:17:34.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:17:34.284 23:03:06 -- common/autotest_common.sh@828 -- # xtrace_disable
00:17:34.284 23:03:06 -- common/autotest_common.sh@10 -- # set +x
00:17:34.284 [2024-07-24 23:03:06.506442] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:17:34.284 [2024-07-24 23:03:06.506492] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3196821 ]
00:17:34.284 EAL: No free 2048 kB hugepages reported on node 1
00:17:34.284 [2024-07-24 23:03:06.577480] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:34.284 [2024-07-24 23:03:06.615070] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:17:35.224 23:03:07 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:17:35.224 23:03:07 -- common/autotest_common.sh@852 -- # return 0
00:17:35.224 23:03:07 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
00:17:35.224 Nvme0n1
00:17:35.224 23:03:07 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000
00:17:35.484 [
00:17:35.484 {
00:17:35.484 "name": "Nvme0n1",
00:17:35.484 "aliases": [
00:17:35.484 "49f8c52c-d634-4c10-8d7a-b33568b45c25"
00:17:35.484 ],
00:17:35.484 "product_name": "NVMe disk",
00:17:35.484 "block_size": 4096,
00:17:35.484 "num_blocks": 38912,
00:17:35.484 "uuid": "49f8c52c-d634-4c10-8d7a-b33568b45c25",
00:17:35.484 "assigned_rate_limits": {
00:17:35.484 "rw_ios_per_sec": 0,
00:17:35.484 "rw_mbytes_per_sec": 0,
00:17:35.484 "r_mbytes_per_sec": 0,
00:17:35.484 "w_mbytes_per_sec": 0
00:17:35.484 },
00:17:35.484 "claimed": false,
00:17:35.484 "zoned": false,
00:17:35.484 "supported_io_types": {
00:17:35.484 "read": true,
00:17:35.484 "write": true,
00:17:35.484 "unmap": true,
00:17:35.484 "write_zeroes": true,
00:17:35.484 "flush": true,
00:17:35.484 "reset": true,
00:17:35.484 "compare": true,
00:17:35.484 "compare_and_write": true,
00:17:35.484 "abort": true,
00:17:35.484 "nvme_admin": true,
00:17:35.484 "nvme_io": true
00:17:35.484 },
00:17:35.484 "driver_specific": {
00:17:35.484 "nvme": [
00:17:35.484 {
00:17:35.484 "trid": {
00:17:35.484 "trtype": "TCP",
00:17:35.484 "adrfam": "IPv4",
00:17:35.484 "traddr": "10.0.0.2",
00:17:35.484 "trsvcid": "4420",
00:17:35.484 "subnqn": "nqn.2016-06.io.spdk:cnode0"
00:17:35.484 },
00:17:35.484 "ctrlr_data": {
00:17:35.484 "cntlid": 1,
00:17:35.484 "vendor_id": "0x8086",
00:17:35.484 "model_number": "SPDK bdev Controller",
00:17:35.484 "serial_number": "SPDK0",
00:17:35.484 "firmware_revision": "24.01.1",
00:17:35.484 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:17:35.484 "oacs": {
00:17:35.484 "security": 0,
00:17:35.484 "format": 0,
00:17:35.484 "firmware": 0,
00:17:35.484 "ns_manage": 0
00:17:35.484 },
00:17:35.484 "multi_ctrlr": true,
00:17:35.484 "ana_reporting": false
00:17:35.484 },
00:17:35.484 "vs": {
00:17:35.484 "nvme_version": "1.3"
00:17:35.484 },
00:17:35.484 "ns_data": {
00:17:35.484 "id": 1,
00:17:35.484 "can_share": true
00:17:35.484 }
00:17:35.484 }
00:17:35.484 ],
00:17:35.484 "mp_policy": "active_passive"
00:17:35.484 }
00:17:35.484 }
00:17:35.484 ]
00:17:35.484 23:03:07 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:17:35.484 23:03:07 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3197014
00:17:35.484 23:03:07 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2
00:17:35.484 Running I/O for 10 seconds...
00:17:36.423 Latency(us)
00:17:36.424 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:36.424 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:17:36.424 Nvme0n1 : 1.00 23616.00 92.25 0.00 0.00 0.00 0.00 0.00
00:17:36.424 ===================================================================================================================
00:17:36.424 Total : 23616.00 92.25 0.00 0.00 0.00 0.00 0.00
00:17:36.424 
00:17:37.362 23:03:09 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u dd1f885b-40f4-49ad-98c7-4d597ddee779
00:17:37.362 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:17:37.362 Nvme0n1 : 2.00 23692.00 92.55 0.00 0.00 0.00 0.00 0.00
00:17:37.362 ===================================================================================================================
00:17:37.362 Total : 23692.00 92.55 0.00 0.00 0.00 0.00 0.00
00:17:37.362 
00:17:37.623 true
00:17:37.623 23:03:09 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dd1f885b-40f4-49ad-98c7-4d597ddee779
00:17:37.623 23:03:09 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters'
00:17:37.623 23:03:10 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99
00:17:37.623 23:03:10 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 ))
00:17:37.623 23:03:10 -- target/nvmf_lvs_grow.sh@65 -- # wait 3197014
00:17:38.561 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:17:38.561 Nvme0n1 : 3.00 23573.33 92.08 0.00 0.00 0.00 0.00 0.00
00:17:38.561 ===================================================================================================================
00:17:38.561 Total : 23573.33 92.08 0.00 0.00 0.00 0.00 0.00
00:17:38.561 
00:17:39.499 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:17:39.499 Nvme0n1 : 4.00 23674.00 92.48 0.00 0.00 0.00 0.00 0.00
00:17:39.499 ===================================================================================================================
00:17:39.499 Total : 23674.00 92.48 0.00 0.00 0.00 0.00 0.00
00:17:39.499 
00:17:40.435 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:17:40.436 Nvme0n1 : 5.00 23734.40 92.71 0.00 0.00 0.00 0.00 0.00
00:17:40.436 ===================================================================================================================
00:17:40.436 Total : 23734.40 92.71 0.00 0.00 0.00 0.00 0.00
00:17:40.436 
00:17:41.424 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:17:41.425 Nvme0n1 : 6.00 23773.33 92.86 0.00 0.00 0.00 0.00 0.00
00:17:41.425 ===================================================================================================================
00:17:41.425 Total : 23773.33 92.86 0.00 0.00 0.00 0.00 0.00
00:17:41.425 
00:17:42.363 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:17:42.363 Nvme0n1 : 7.00 23814.86 93.03 0.00 0.00 0.00 0.00 0.00
00:17:42.363 ===================================================================================================================
00:17:42.363 Total : 23814.86 93.03 0.00 0.00 0.00 0.00 0.00
00:17:42.363 
00:17:43.740 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:17:43.740 Nvme0n1 : 8.00 23813.00 93.02 0.00 0.00 0.00 0.00 0.00
00:17:43.740 ===================================================================================================================
00:17:43.740 Total : 23813.00 93.02 0.00 0.00 0.00 0.00 0.00
00:17:43.740 
00:17:44.677 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:17:44.677 Nvme0n1 : 9.00 23841.78 93.13 0.00 0.00 0.00 0.00 0.00
00:17:44.677 ===================================================================================================================
00:17:44.677 Total : 23841.78 93.13 0.00 0.00 0.00 0.00 0.00
00:17:44.677 
00:17:45.614 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:17:45.614 Nvme0n1 : 10.00 23832.00 93.09 0.00 0.00 0.00 0.00 0.00
00:17:45.614 ===================================================================================================================
00:17:45.614 Total : 23832.00 93.09 0.00 0.00 0.00 0.00 0.00
00:17:45.614 
00:17:45.614 
00:17:45.614 Latency(us)
00:17:45.614 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:45.614 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:17:45.614 Nvme0n1 : 10.01 23832.74 93.10 0.00 0.00 5367.15 1428.68 7707.03
00:17:45.614 ===================================================================================================================
00:17:45.614 Total : 23832.74 93.10 0.00 0.00 5367.15 1428.68 7707.03
00:17:45.614 0
00:17:45.614 23:03:17 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3196821
00:17:45.614 23:03:17 -- common/autotest_common.sh@926 -- # '[' -z 3196821 ']'
00:17:45.614 23:03:17 -- common/autotest_common.sh@930 -- # kill -0 3196821
00:17:45.614 23:03:17 -- common/autotest_common.sh@931 -- # uname
00:17:45.614 23:03:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:17:45.614 23:03:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3196821
00:17:45.614 23:03:17 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:17:45.614 23:03:17 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:17:45.614 23:03:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3196821'
00:17:45.614 killing process with pid 3196821
00:17:45.614 23:03:17 -- common/autotest_common.sh@945 -- # kill 3196821
00:17:45.614 Received shutdown signal, test time was about 10.000000 seconds
00:17:45.614 
00:17:45.614 Latency(us)
00:17:45.614 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:45.614 ===================================================================================================================
00:17:45.614 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:17:45.614 23:03:17 -- common/autotest_common.sh@950 -- # wait 3196821
00:17:45.614 23:03:18 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:17:45.873 23:03:18 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dd1f885b-40f4-49ad-98c7-4d597ddee779
00:17:45.873 23:03:18 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters'
00:17:46.132 23:03:18 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61
00:17:46.132 23:03:18 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]]
00:17:46.132 23:03:18 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev
00:17:46.132 [2024-07-24 23:03:18.534233] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs
00:17:46.392 23:03:18 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dd1f885b-40f4-49ad-98c7-4d597ddee779
00:17:46.392 23:03:18 -- common/autotest_common.sh@640 -- # local es=0
00:17:46.392 23:03:18 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dd1f885b-40f4-49ad-98c7-4d597ddee779
00:17:46.392 23:03:18 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:17:46.392 23:03:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in
00:17:46.392 23:03:18 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:17:46.392 23:03:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in
00:17:46.392 23:03:18 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:17:46.392 23:03:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in
00:17:46.392 23:03:18 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:17:46.392 23:03:18 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]]
00:17:46.392 23:03:18 -- common/autotest_common.sh@643 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dd1f885b-40f4-49ad-98c7-4d597ddee779
00:17:46.392 request:
00:17:46.392 {
00:17:46.392 "uuid": "dd1f885b-40f4-49ad-98c7-4d597ddee779",
00:17:46.392 "method": "bdev_lvol_get_lvstores",
00:17:46.392 "req_id": 1
00:17:46.392 }
00:17:46.392 Got JSON-RPC error response
00:17:46.392 response:
00:17:46.392 {
00:17:46.392 "code": -19,
00:17:46.392 "message": "No such device"
00:17:46.392 }
00:17:46.392 23:03:18 -- common/autotest_common.sh@643 -- # es=1
00:17:46.392 23:03:18 -- common/autotest_common.sh@651 -- # (( es > 128 ))
00:17:46.392 23:03:18 -- common/autotest_common.sh@662 -- # [[ -n '' ]]
00:17:46.392 23:03:18 -- common/autotest_common.sh@667 -- # (( !es == 0 ))
00:17:46.392 23:03:18 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:17:46.651 aio_bdev
00:17:46.651 23:03:18 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 49f8c52c-d634-4c10-8d7a-b33568b45c25
00:17:46.651 23:03:18 -- common/autotest_common.sh@887 -- # local bdev_name=49f8c52c-d634-4c10-8d7a-b33568b45c25
00:17:46.651 23:03:18 -- common/autotest_common.sh@888 -- # local bdev_timeout=
00:17:46.651 23:03:18 -- common/autotest_common.sh@889 -- # local i
00:17:46.651 23:03:18 -- common/autotest_common.sh@890 -- # [[ -z '' ]]
00:17:46.651 23:03:18 -- common/autotest_common.sh@890 -- # bdev_timeout=2000
00:17:46.651 23:03:18 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine
00:17:46.910 23:03:19 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 49f8c52c-d634-4c10-8d7a-b33568b45c25 -t 2000
00:17:46.910 [
00:17:46.910 {
00:17:46.910 "name": "49f8c52c-d634-4c10-8d7a-b33568b45c25",
00:17:46.910 "aliases": [
00:17:46.910 "lvs/lvol"
00:17:46.910 ],
00:17:46.910 "product_name": "Logical Volume",
00:17:46.910 "block_size": 4096,
00:17:46.910 "num_blocks": 38912,
00:17:46.910 "uuid": "49f8c52c-d634-4c10-8d7a-b33568b45c25",
00:17:46.910 "assigned_rate_limits": {
00:17:46.910 "rw_ios_per_sec": 0,
00:17:46.910 "rw_mbytes_per_sec": 0,
00:17:46.910 "r_mbytes_per_sec": 0,
00:17:46.910 "w_mbytes_per_sec": 0
00:17:46.910 },
00:17:46.910 "claimed": false,
00:17:46.910 "zoned": false,
00:17:46.910 "supported_io_types": {
00:17:46.910 "read": true,
00:17:46.910 "write": true,
00:17:46.910 "unmap": true,
00:17:46.910 "write_zeroes": true,
00:17:46.910 "flush": false,
00:17:46.910 "reset": true,
00:17:46.910 "compare": false,
00:17:46.910 "compare_and_write": false,
00:17:46.910 "abort": false,
00:17:46.910 "nvme_admin": false,
00:17:46.910 "nvme_io": false
00:17:46.910 },
00:17:46.910 "driver_specific": {
00:17:46.910 "lvol": {
00:17:46.910 "lvol_store_uuid": "dd1f885b-40f4-49ad-98c7-4d597ddee779",
00:17:46.910 "base_bdev": "aio_bdev",
00:17:46.910 "thin_provision": false,
00:17:46.910 "snapshot": false,
00:17:46.910 "clone": false,
00:17:46.910 "esnap_clone": false
00:17:46.910 }
00:17:46.910 }
00:17:46.910 }
00:17:46.910 ]
00:17:46.910 23:03:19 -- common/autotest_common.sh@895 -- # return 0
00:17:46.910 23:03:19 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dd1f885b-40f4-49ad-98c7-4d597ddee779
00:17:46.910 23:03:19 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters'
00:17:47.170 23:03:19 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 ))
00:17:47.170 23:03:19 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dd1f885b-40f4-49ad-98c7-4d597ddee779
00:17:47.170 23:03:19 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters'
00:17:47.170 23:03:19 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 ))
00:17:47.170 23:03:19 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 49f8c52c-d634-4c10-8d7a-b33568b45c25
00:17:47.429 23:03:19 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u dd1f885b-40f4-49ad-98c7-4d597ddee779
00:17:47.690 23:03:19 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev
00:17:47.690 23:03:20 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:17:47.690 
00:17:47.690 real 0m15.345s
00:17:47.690 user 0m14.400s
00:17:47.690 sys 0m1.996s
00:17:47.690 23:03:20 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:17:47.690 23:03:20 -- common/autotest_common.sh@10 -- # set +x 00:17:47.690 ************************************ 00:17:47.690 END TEST lvs_grow_clean 00:17:47.690 ************************************ 00:17:47.949 23:03:20 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:17:47.949 23:03:20 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:47.949 23:03:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:47.949 23:03:20 -- common/autotest_common.sh@10 -- # set +x 00:17:47.949 ************************************ 00:17:47.949 START TEST lvs_grow_dirty 00:17:47.949 ************************************ 00:17:47.949 23:03:20 -- common/autotest_common.sh@1104 -- # lvs_grow dirty 00:17:47.949 23:03:20 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:47.949 23:03:20 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:47.949 23:03:20 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:17:47.949 23:03:20 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:47.949 23:03:20 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:47.949 23:03:20 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:47.949 23:03:20 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:47.949 23:03:20 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:47.949 23:03:20 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:47.949 23:03:20 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:17:47.949 23:03:20 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 
--cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:48.208 23:03:20 -- target/nvmf_lvs_grow.sh@28 -- # lvs=6f077011-c5bd-42ec-9420-f49243f705ea 00:17:48.208 23:03:20 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6f077011-c5bd-42ec-9420-f49243f705ea 00:17:48.208 23:03:20 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:48.467 23:03:20 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:48.467 23:03:20 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:48.467 23:03:20 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 6f077011-c5bd-42ec-9420-f49243f705ea lvol 150 00:17:48.467 23:03:20 -- target/nvmf_lvs_grow.sh@33 -- # lvol=01f12c7f-e67f-448c-9a47-143bdfca0b95 00:17:48.467 23:03:20 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:48.467 23:03:20 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:48.726 [2024-07-24 23:03:21.003821] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:48.726 [2024-07-24 23:03:21.003873] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:48.726 true 00:17:48.726 23:03:21 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6f077011-c5bd-42ec-9420-f49243f705ea 00:17:48.726 23:03:21 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:17:48.986 23:03:21 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:48.986 23:03:21 -- target/nvmf_lvs_grow.sh@41 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:48.986 23:03:21 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 01f12c7f-e67f-448c-9a47-143bdfca0b95 00:17:49.245 23:03:21 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:49.245 23:03:21 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:49.505 23:03:21 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3199458 00:17:49.505 23:03:21 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:49.505 23:03:21 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:49.505 23:03:21 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3199458 /var/tmp/bdevperf.sock 00:17:49.505 23:03:21 -- common/autotest_common.sh@819 -- # '[' -z 3199458 ']' 00:17:49.505 23:03:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:49.505 23:03:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:49.505 23:03:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:49.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:17:49.505 23:03:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:49.505 23:03:21 -- common/autotest_common.sh@10 -- # set +x 00:17:49.505 [2024-07-24 23:03:21.869935] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:17:49.505 [2024-07-24 23:03:21.869989] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3199458 ] 00:17:49.505 EAL: No free 2048 kB hugepages reported on node 1 00:17:49.764 [2024-07-24 23:03:21.942591] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:49.764 [2024-07-24 23:03:21.980363] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:50.332 23:03:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:50.332 23:03:22 -- common/autotest_common.sh@852 -- # return 0 00:17:50.332 23:03:22 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:50.591 Nvme0n1 00:17:50.591 23:03:22 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:17:50.850 [ 00:17:50.850 { 00:17:50.850 "name": "Nvme0n1", 00:17:50.850 "aliases": [ 00:17:50.850 "01f12c7f-e67f-448c-9a47-143bdfca0b95" 00:17:50.850 ], 00:17:50.850 "product_name": "NVMe disk", 00:17:50.850 "block_size": 4096, 00:17:50.850 "num_blocks": 38912, 00:17:50.850 "uuid": "01f12c7f-e67f-448c-9a47-143bdfca0b95", 00:17:50.850 "assigned_rate_limits": { 00:17:50.850 "rw_ios_per_sec": 0, 00:17:50.850 "rw_mbytes_per_sec": 0, 00:17:50.850 "r_mbytes_per_sec": 0, 00:17:50.850 "w_mbytes_per_sec": 0 00:17:50.850 }, 00:17:50.850 "claimed": false, 00:17:50.850 "zoned": false, 
00:17:50.850 "supported_io_types": { 00:17:50.850 "read": true, 00:17:50.850 "write": true, 00:17:50.850 "unmap": true, 00:17:50.850 "write_zeroes": true, 00:17:50.850 "flush": true, 00:17:50.850 "reset": true, 00:17:50.850 "compare": true, 00:17:50.850 "compare_and_write": true, 00:17:50.850 "abort": true, 00:17:50.850 "nvme_admin": true, 00:17:50.850 "nvme_io": true 00:17:50.850 }, 00:17:50.850 "driver_specific": { 00:17:50.850 "nvme": [ 00:17:50.850 { 00:17:50.850 "trid": { 00:17:50.850 "trtype": "TCP", 00:17:50.850 "adrfam": "IPv4", 00:17:50.850 "traddr": "10.0.0.2", 00:17:50.850 "trsvcid": "4420", 00:17:50.850 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:50.850 }, 00:17:50.850 "ctrlr_data": { 00:17:50.850 "cntlid": 1, 00:17:50.850 "vendor_id": "0x8086", 00:17:50.850 "model_number": "SPDK bdev Controller", 00:17:50.850 "serial_number": "SPDK0", 00:17:50.850 "firmware_revision": "24.01.1", 00:17:50.850 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:50.850 "oacs": { 00:17:50.850 "security": 0, 00:17:50.850 "format": 0, 00:17:50.850 "firmware": 0, 00:17:50.850 "ns_manage": 0 00:17:50.850 }, 00:17:50.850 "multi_ctrlr": true, 00:17:50.850 "ana_reporting": false 00:17:50.850 }, 00:17:50.850 "vs": { 00:17:50.850 "nvme_version": "1.3" 00:17:50.850 }, 00:17:50.850 "ns_data": { 00:17:50.850 "id": 1, 00:17:50.850 "can_share": true 00:17:50.850 } 00:17:50.850 } 00:17:50.850 ], 00:17:50.850 "mp_policy": "active_passive" 00:17:50.850 } 00:17:50.850 } 00:17:50.850 ] 00:17:50.850 23:03:23 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3199585 00:17:50.850 23:03:23 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:17:50.850 23:03:23 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:50.850 Running I/O for 10 seconds... 
00:17:51.786 Latency(us) 00:17:51.786 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:51.786 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:51.786 Nvme0n1 : 1.00 24368.00 95.19 0.00 0.00 0.00 0.00 0.00 00:17:51.786 =================================================================================================================== 00:17:51.786 Total : 24368.00 95.19 0.00 0.00 0.00 0.00 0.00 00:17:51.786 00:17:52.723 23:03:25 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 6f077011-c5bd-42ec-9420-f49243f705ea 00:17:52.982 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:52.982 Nvme0n1 : 2.00 24544.50 95.88 0.00 0.00 0.00 0.00 0.00 00:17:52.982 =================================================================================================================== 00:17:52.982 Total : 24544.50 95.88 0.00 0.00 0.00 0.00 0.00 00:17:52.982 00:17:52.982 true 00:17:52.982 23:03:25 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6f077011-c5bd-42ec-9420-f49243f705ea 00:17:52.982 23:03:25 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:53.241 23:03:25 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:17:53.241 23:03:25 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:53.241 23:03:25 -- target/nvmf_lvs_grow.sh@65 -- # wait 3199585 00:17:53.809 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:53.809 Nvme0n1 : 3.00 24634.33 96.23 0.00 0.00 0.00 0.00 0.00 00:17:53.809 =================================================================================================================== 00:17:53.809 Total : 24634.33 96.23 0.00 0.00 0.00 0.00 0.00 00:17:53.809 00:17:54.747 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:54.747 
Nvme0n1 : 4.00 24715.75 96.55 0.00 0.00 0.00 0.00 0.00 00:17:54.747 =================================================================================================================== 00:17:54.747 Total : 24715.75 96.55 0.00 0.00 0.00 0.00 0.00 00:17:54.747 00:17:55.755 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:55.755 Nvme0n1 : 5.00 24756.20 96.70 0.00 0.00 0.00 0.00 0.00 00:17:55.755 =================================================================================================================== 00:17:55.755 Total : 24756.20 96.70 0.00 0.00 0.00 0.00 0.00 00:17:55.755 00:17:57.134 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:57.134 Nvme0n1 : 6.00 24768.83 96.75 0.00 0.00 0.00 0.00 0.00 00:17:57.134 =================================================================================================================== 00:17:57.134 Total : 24768.83 96.75 0.00 0.00 0.00 0.00 0.00 00:17:57.134 00:17:58.070 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:58.070 Nvme0n1 : 7.00 24796.00 96.86 0.00 0.00 0.00 0.00 0.00 00:17:58.070 =================================================================================================================== 00:17:58.070 Total : 24796.00 96.86 0.00 0.00 0.00 0.00 0.00 00:17:58.070 00:17:59.006 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:59.006 Nvme0n1 : 8.00 24834.50 97.01 0.00 0.00 0.00 0.00 0.00 00:17:59.006 =================================================================================================================== 00:17:59.006 Total : 24834.50 97.01 0.00 0.00 0.00 0.00 0.00 00:17:59.006 00:17:59.944 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:59.944 Nvme0n1 : 9.00 24860.67 97.11 0.00 0.00 0.00 0.00 0.00 00:17:59.944 =================================================================================================================== 
00:17:59.944 Total : 24860.67 97.11 0.00 0.00 0.00 0.00 0.00 00:17:59.944 00:18:00.882 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:00.882 Nvme0n1 : 10.00 24882.50 97.20 0.00 0.00 0.00 0.00 0.00 00:18:00.882 =================================================================================================================== 00:18:00.882 Total : 24882.50 97.20 0.00 0.00 0.00 0.00 0.00 00:18:00.882 00:18:00.882 00:18:00.883 Latency(us) 00:18:00.883 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:00.883 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:00.883 Nvme0n1 : 10.00 24883.18 97.20 0.00 0.00 5140.97 3198.16 17091.79 00:18:00.883 =================================================================================================================== 00:18:00.883 Total : 24883.18 97.20 0.00 0.00 5140.97 3198.16 17091.79 00:18:00.883 0 00:18:00.883 23:03:33 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3199458 00:18:00.883 23:03:33 -- common/autotest_common.sh@926 -- # '[' -z 3199458 ']' 00:18:00.883 23:03:33 -- common/autotest_common.sh@930 -- # kill -0 3199458 00:18:00.883 23:03:33 -- common/autotest_common.sh@931 -- # uname 00:18:00.883 23:03:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:00.883 23:03:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3199458 00:18:00.883 23:03:33 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:18:00.883 23:03:33 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:18:00.883 23:03:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3199458' 00:18:00.883 killing process with pid 3199458 00:18:00.883 23:03:33 -- common/autotest_common.sh@945 -- # kill 3199458 00:18:00.883 Received shutdown signal, test time was about 10.000000 seconds 00:18:00.883 00:18:00.883 Latency(us) 00:18:00.883 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min 
max 00:18:00.883 =================================================================================================================== 00:18:00.883 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:00.883 23:03:33 -- common/autotest_common.sh@950 -- # wait 3199458 00:18:01.142 23:03:33 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:01.401 23:03:33 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6f077011-c5bd-42ec-9420-f49243f705ea 00:18:01.401 23:03:33 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:18:01.401 23:03:33 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:18:01.401 23:03:33 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:18:01.401 23:03:33 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 3195831 00:18:01.401 23:03:33 -- target/nvmf_lvs_grow.sh@74 -- # wait 3195831 00:18:01.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 3195831 Killed "${NVMF_APP[@]}" "$@" 00:18:01.401 23:03:33 -- target/nvmf_lvs_grow.sh@74 -- # true 00:18:01.401 23:03:33 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:18:01.401 23:03:33 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:01.401 23:03:33 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:01.401 23:03:33 -- common/autotest_common.sh@10 -- # set +x 00:18:01.401 23:03:33 -- nvmf/common.sh@469 -- # nvmfpid=3201450 00:18:01.401 23:03:33 -- nvmf/common.sh@470 -- # waitforlisten 3201450 00:18:01.401 23:03:33 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:01.401 23:03:33 -- common/autotest_common.sh@819 -- # '[' -z 3201450 ']' 00:18:01.401 23:03:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:01.401 
23:03:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:01.401 23:03:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:01.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:01.661 23:03:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:01.661 23:03:33 -- common/autotest_common.sh@10 -- # set +x 00:18:01.661 [2024-07-24 23:03:33.881414] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:18:01.661 [2024-07-24 23:03:33.881467] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:01.661 EAL: No free 2048 kB hugepages reported on node 1 00:18:01.661 [2024-07-24 23:03:33.959753] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:01.661 [2024-07-24 23:03:33.996642] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:01.661 [2024-07-24 23:03:33.996766] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:01.661 [2024-07-24 23:03:33.996777] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:01.661 [2024-07-24 23:03:33.996786] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:01.661 [2024-07-24 23:03:33.996804] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:02.229 23:03:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:02.488 23:03:34 -- common/autotest_common.sh@852 -- # return 0 00:18:02.488 23:03:34 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:02.488 23:03:34 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:02.488 23:03:34 -- common/autotest_common.sh@10 -- # set +x 00:18:02.488 23:03:34 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:02.488 23:03:34 -- target/nvmf_lvs_grow.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:02.488 [2024-07-24 23:03:34.856177] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:18:02.488 [2024-07-24 23:03:34.856256] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:18:02.488 [2024-07-24 23:03:34.856281] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:18:02.488 23:03:34 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:18:02.488 23:03:34 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev 01f12c7f-e67f-448c-9a47-143bdfca0b95 00:18:02.488 23:03:34 -- common/autotest_common.sh@887 -- # local bdev_name=01f12c7f-e67f-448c-9a47-143bdfca0b95 00:18:02.488 23:03:34 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:02.488 23:03:34 -- common/autotest_common.sh@889 -- # local i 00:18:02.488 23:03:34 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:02.488 23:03:34 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:02.488 23:03:34 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:02.747 23:03:35 -- common/autotest_common.sh@894 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 01f12c7f-e67f-448c-9a47-143bdfca0b95 -t 2000 00:18:03.006 [ 00:18:03.006 { 00:18:03.006 "name": "01f12c7f-e67f-448c-9a47-143bdfca0b95", 00:18:03.006 "aliases": [ 00:18:03.006 "lvs/lvol" 00:18:03.006 ], 00:18:03.006 "product_name": "Logical Volume", 00:18:03.006 "block_size": 4096, 00:18:03.006 "num_blocks": 38912, 00:18:03.006 "uuid": "01f12c7f-e67f-448c-9a47-143bdfca0b95", 00:18:03.006 "assigned_rate_limits": { 00:18:03.006 "rw_ios_per_sec": 0, 00:18:03.006 "rw_mbytes_per_sec": 0, 00:18:03.006 "r_mbytes_per_sec": 0, 00:18:03.006 "w_mbytes_per_sec": 0 00:18:03.006 }, 00:18:03.006 "claimed": false, 00:18:03.006 "zoned": false, 00:18:03.006 "supported_io_types": { 00:18:03.006 "read": true, 00:18:03.006 "write": true, 00:18:03.006 "unmap": true, 00:18:03.006 "write_zeroes": true, 00:18:03.006 "flush": false, 00:18:03.006 "reset": true, 00:18:03.006 "compare": false, 00:18:03.006 "compare_and_write": false, 00:18:03.006 "abort": false, 00:18:03.006 "nvme_admin": false, 00:18:03.006 "nvme_io": false 00:18:03.006 }, 00:18:03.006 "driver_specific": { 00:18:03.006 "lvol": { 00:18:03.006 "lvol_store_uuid": "6f077011-c5bd-42ec-9420-f49243f705ea", 00:18:03.006 "base_bdev": "aio_bdev", 00:18:03.006 "thin_provision": false, 00:18:03.006 "snapshot": false, 00:18:03.006 "clone": false, 00:18:03.006 "esnap_clone": false 00:18:03.006 } 00:18:03.006 } 00:18:03.006 } 00:18:03.006 ] 00:18:03.006 23:03:35 -- common/autotest_common.sh@895 -- # return 0 00:18:03.006 23:03:35 -- target/nvmf_lvs_grow.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6f077011-c5bd-42ec-9420-f49243f705ea 00:18:03.006 23:03:35 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:18:03.006 23:03:35 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:18:03.006 23:03:35 -- target/nvmf_lvs_grow.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6f077011-c5bd-42ec-9420-f49243f705ea 00:18:03.006 23:03:35 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:18:03.264 23:03:35 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:18:03.264 23:03:35 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:03.264 [2024-07-24 23:03:35.652395] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:18:03.264 23:03:35 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6f077011-c5bd-42ec-9420-f49243f705ea 00:18:03.264 23:03:35 -- common/autotest_common.sh@640 -- # local es=0 00:18:03.265 23:03:35 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6f077011-c5bd-42ec-9420-f49243f705ea 00:18:03.265 23:03:35 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:03.265 23:03:35 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:03.265 23:03:35 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:03.265 23:03:35 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:03.265 23:03:35 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:03.265 23:03:35 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:03.265 23:03:35 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:03.265 23:03:35 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:18:03.265 
23:03:35 -- common/autotest_common.sh@643 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6f077011-c5bd-42ec-9420-f49243f705ea 00:18:03.523 request: 00:18:03.523 { 00:18:03.523 "uuid": "6f077011-c5bd-42ec-9420-f49243f705ea", 00:18:03.523 "method": "bdev_lvol_get_lvstores", 00:18:03.523 "req_id": 1 00:18:03.523 } 00:18:03.523 Got JSON-RPC error response 00:18:03.523 response: 00:18:03.523 { 00:18:03.523 "code": -19, 00:18:03.523 "message": "No such device" 00:18:03.523 } 00:18:03.523 23:03:35 -- common/autotest_common.sh@643 -- # es=1 00:18:03.523 23:03:35 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:18:03.523 23:03:35 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:18:03.523 23:03:35 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:18:03.524 23:03:35 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:03.783 aio_bdev 00:18:03.783 23:03:36 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 01f12c7f-e67f-448c-9a47-143bdfca0b95 00:18:03.783 23:03:36 -- common/autotest_common.sh@887 -- # local bdev_name=01f12c7f-e67f-448c-9a47-143bdfca0b95 00:18:03.783 23:03:36 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:03.783 23:03:36 -- common/autotest_common.sh@889 -- # local i 00:18:03.783 23:03:36 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:03.783 23:03:36 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:03.783 23:03:36 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:03.783 23:03:36 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 01f12c7f-e67f-448c-9a47-143bdfca0b95 -t 2000 00:18:04.042 [ 00:18:04.042 { 00:18:04.042 "name": 
"01f12c7f-e67f-448c-9a47-143bdfca0b95", 00:18:04.042 "aliases": [ 00:18:04.042 "lvs/lvol" 00:18:04.042 ], 00:18:04.042 "product_name": "Logical Volume", 00:18:04.042 "block_size": 4096, 00:18:04.042 "num_blocks": 38912, 00:18:04.042 "uuid": "01f12c7f-e67f-448c-9a47-143bdfca0b95", 00:18:04.042 "assigned_rate_limits": { 00:18:04.042 "rw_ios_per_sec": 0, 00:18:04.042 "rw_mbytes_per_sec": 0, 00:18:04.042 "r_mbytes_per_sec": 0, 00:18:04.042 "w_mbytes_per_sec": 0 00:18:04.042 }, 00:18:04.042 "claimed": false, 00:18:04.042 "zoned": false, 00:18:04.042 "supported_io_types": { 00:18:04.042 "read": true, 00:18:04.042 "write": true, 00:18:04.042 "unmap": true, 00:18:04.042 "write_zeroes": true, 00:18:04.042 "flush": false, 00:18:04.042 "reset": true, 00:18:04.042 "compare": false, 00:18:04.042 "compare_and_write": false, 00:18:04.042 "abort": false, 00:18:04.042 "nvme_admin": false, 00:18:04.042 "nvme_io": false 00:18:04.042 }, 00:18:04.042 "driver_specific": { 00:18:04.042 "lvol": { 00:18:04.042 "lvol_store_uuid": "6f077011-c5bd-42ec-9420-f49243f705ea", 00:18:04.042 "base_bdev": "aio_bdev", 00:18:04.042 "thin_provision": false, 00:18:04.042 "snapshot": false, 00:18:04.042 "clone": false, 00:18:04.042 "esnap_clone": false 00:18:04.042 } 00:18:04.042 } 00:18:04.042 } 00:18:04.042 ] 00:18:04.042 23:03:36 -- common/autotest_common.sh@895 -- # return 0 00:18:04.042 23:03:36 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6f077011-c5bd-42ec-9420-f49243f705ea 00:18:04.042 23:03:36 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:18:04.301 23:03:36 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:18:04.301 23:03:36 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6f077011-c5bd-42ec-9420-f49243f705ea 00:18:04.301 23:03:36 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:18:04.301 
23:03:36 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:18:04.301 23:03:36 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 01f12c7f-e67f-448c-9a47-143bdfca0b95 00:18:04.560 23:03:36 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6f077011-c5bd-42ec-9420-f49243f705ea 00:18:04.819 23:03:36 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:04.819 23:03:37 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:04.819 00:18:04.819 real 0m17.035s 00:18:04.819 user 0m42.588s 00:18:04.819 sys 0m4.878s 00:18:04.819 23:03:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:04.819 23:03:37 -- common/autotest_common.sh@10 -- # set +x 00:18:04.819 ************************************ 00:18:04.819 END TEST lvs_grow_dirty 00:18:04.819 ************************************ 00:18:04.819 23:03:37 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:18:04.819 23:03:37 -- common/autotest_common.sh@796 -- # type=--id 00:18:04.819 23:03:37 -- common/autotest_common.sh@797 -- # id=0 00:18:04.819 23:03:37 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:18:04.819 23:03:37 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:04.819 23:03:37 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:18:04.819 23:03:37 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:18:04.819 23:03:37 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:18:04.819 23:03:37 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:04.819 nvmf_trace.0 00:18:05.078 23:03:37 -- common/autotest_common.sh@811 -- # 
return 0 00:18:05.078 23:03:37 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:18:05.078 23:03:37 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:05.078 23:03:37 -- nvmf/common.sh@116 -- # sync 00:18:05.078 23:03:37 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:05.078 23:03:37 -- nvmf/common.sh@119 -- # set +e 00:18:05.078 23:03:37 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:05.078 23:03:37 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:05.078 rmmod nvme_tcp 00:18:05.078 rmmod nvme_fabrics 00:18:05.078 rmmod nvme_keyring 00:18:05.078 23:03:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:05.078 23:03:37 -- nvmf/common.sh@123 -- # set -e 00:18:05.078 23:03:37 -- nvmf/common.sh@124 -- # return 0 00:18:05.078 23:03:37 -- nvmf/common.sh@477 -- # '[' -n 3201450 ']' 00:18:05.078 23:03:37 -- nvmf/common.sh@478 -- # killprocess 3201450 00:18:05.078 23:03:37 -- common/autotest_common.sh@926 -- # '[' -z 3201450 ']' 00:18:05.078 23:03:37 -- common/autotest_common.sh@930 -- # kill -0 3201450 00:18:05.078 23:03:37 -- common/autotest_common.sh@931 -- # uname 00:18:05.078 23:03:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:05.078 23:03:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3201450 00:18:05.078 23:03:37 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:05.078 23:03:37 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:05.078 23:03:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3201450' 00:18:05.078 killing process with pid 3201450 00:18:05.078 23:03:37 -- common/autotest_common.sh@945 -- # kill 3201450 00:18:05.078 23:03:37 -- common/autotest_common.sh@950 -- # wait 3201450 00:18:05.336 23:03:37 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:05.336 23:03:37 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:05.336 23:03:37 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:05.336 23:03:37 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk 
== \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:05.336 23:03:37 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:05.336 23:03:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:05.336 23:03:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:05.336 23:03:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:07.244 23:03:39 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:07.244 00:18:07.244 real 0m42.869s 00:18:07.244 user 1m2.900s 00:18:07.244 sys 0m12.494s 00:18:07.244 23:03:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:07.244 23:03:39 -- common/autotest_common.sh@10 -- # set +x 00:18:07.244 ************************************ 00:18:07.244 END TEST nvmf_lvs_grow 00:18:07.244 ************************************ 00:18:07.504 23:03:39 -- nvmf/nvmf.sh@49 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:18:07.504 23:03:39 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:07.504 23:03:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:07.504 23:03:39 -- common/autotest_common.sh@10 -- # set +x 00:18:07.504 ************************************ 00:18:07.504 START TEST nvmf_bdev_io_wait 00:18:07.504 ************************************ 00:18:07.504 23:03:39 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:18:07.504 * Looking for test storage... 
00:18:07.504 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:07.504 23:03:39 -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:07.504 23:03:39 -- nvmf/common.sh@7 -- # uname -s 00:18:07.504 23:03:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:07.504 23:03:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:07.504 23:03:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:07.504 23:03:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:07.504 23:03:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:07.504 23:03:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:07.504 23:03:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:07.504 23:03:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:07.504 23:03:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:07.504 23:03:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:07.504 23:03:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:07.504 23:03:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:18:07.504 23:03:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:07.504 23:03:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:07.504 23:03:39 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:07.504 23:03:39 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:07.504 23:03:39 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:07.504 23:03:39 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:07.504 23:03:39 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:07.504 23:03:39 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.504 23:03:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.504 23:03:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.504 23:03:39 -- paths/export.sh@5 -- # export PATH 00:18:07.504 23:03:39 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.504 23:03:39 -- nvmf/common.sh@46 -- # : 0 00:18:07.504 23:03:39 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:07.504 23:03:39 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:07.504 23:03:39 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:07.504 23:03:39 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:07.504 23:03:39 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:07.504 23:03:39 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:07.504 23:03:39 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:07.504 23:03:39 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:07.504 23:03:39 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:07.504 23:03:39 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:07.504 23:03:39 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:18:07.504 23:03:39 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:07.504 23:03:39 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:07.504 23:03:39 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:07.504 23:03:39 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:07.504 23:03:39 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:07.504 23:03:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:07.504 23:03:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:07.504 23:03:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:07.504 
23:03:39 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:07.504 23:03:39 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:07.504 23:03:39 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:07.504 23:03:39 -- common/autotest_common.sh@10 -- # set +x 00:18:14.111 23:03:46 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:14.111 23:03:46 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:14.111 23:03:46 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:14.111 23:03:46 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:14.111 23:03:46 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:14.111 23:03:46 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:14.111 23:03:46 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:14.111 23:03:46 -- nvmf/common.sh@294 -- # net_devs=() 00:18:14.111 23:03:46 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:14.111 23:03:46 -- nvmf/common.sh@295 -- # e810=() 00:18:14.111 23:03:46 -- nvmf/common.sh@295 -- # local -ga e810 00:18:14.111 23:03:46 -- nvmf/common.sh@296 -- # x722=() 00:18:14.111 23:03:46 -- nvmf/common.sh@296 -- # local -ga x722 00:18:14.111 23:03:46 -- nvmf/common.sh@297 -- # mlx=() 00:18:14.111 23:03:46 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:14.111 23:03:46 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:14.111 23:03:46 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:14.111 23:03:46 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:14.111 23:03:46 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:14.111 23:03:46 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:14.111 23:03:46 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:14.111 23:03:46 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:14.111 23:03:46 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:14.111 23:03:46 
-- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:14.111 23:03:46 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:14.111 23:03:46 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:14.111 23:03:46 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:14.111 23:03:46 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:14.111 23:03:46 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:14.111 23:03:46 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:14.111 23:03:46 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:14.111 23:03:46 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:14.111 23:03:46 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:14.111 23:03:46 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:18:14.111 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:14.111 23:03:46 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:14.111 23:03:46 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:14.111 23:03:46 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:14.111 23:03:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:14.111 23:03:46 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:14.111 23:03:46 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:14.111 23:03:46 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:14.111 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:14.111 23:03:46 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:14.111 23:03:46 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:14.111 23:03:46 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:14.111 23:03:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:14.111 23:03:46 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:14.111 23:03:46 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:14.111 23:03:46 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:14.111 23:03:46 -- 
nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:18:14.111 23:03:46 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:14.111 23:03:46 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:14.111 23:03:46 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:14.111 23:03:46 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:14.111 23:03:46 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:18:14.111 Found net devices under 0000:af:00.0: cvl_0_0 00:18:14.111 23:03:46 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:14.111 23:03:46 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:14.111 23:03:46 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:14.111 23:03:46 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:14.111 23:03:46 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:14.111 23:03:46 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:14.111 Found net devices under 0000:af:00.1: cvl_0_1 00:18:14.111 23:03:46 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:14.111 23:03:46 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:14.111 23:03:46 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:14.111 23:03:46 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:14.111 23:03:46 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:14.111 23:03:46 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:14.111 23:03:46 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:14.111 23:03:46 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:14.111 23:03:46 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:14.111 23:03:46 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:14.111 23:03:46 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:14.111 23:03:46 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:14.111 23:03:46 -- 
nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:14.111 23:03:46 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:14.111 23:03:46 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:14.111 23:03:46 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:14.111 23:03:46 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:14.111 23:03:46 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:14.111 23:03:46 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:14.111 23:03:46 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:14.111 23:03:46 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:14.111 23:03:46 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:14.111 23:03:46 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:14.111 23:03:46 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:14.111 23:03:46 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:14.111 23:03:46 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:14.111 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:14.111 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:18:14.111 00:18:14.111 --- 10.0.0.2 ping statistics --- 00:18:14.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:14.111 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:18:14.111 23:03:46 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:14.111 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:14.111 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.258 ms 00:18:14.111 00:18:14.111 --- 10.0.0.1 ping statistics --- 00:18:14.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:14.111 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:18:14.111 23:03:46 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:14.111 23:03:46 -- nvmf/common.sh@410 -- # return 0 00:18:14.111 23:03:46 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:14.111 23:03:46 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:14.111 23:03:46 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:14.111 23:03:46 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:14.111 23:03:46 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:14.111 23:03:46 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:14.111 23:03:46 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:14.371 23:03:46 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:18:14.371 23:03:46 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:14.371 23:03:46 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:14.371 23:03:46 -- common/autotest_common.sh@10 -- # set +x 00:18:14.371 23:03:46 -- nvmf/common.sh@469 -- # nvmfpid=3205749 00:18:14.371 23:03:46 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:18:14.371 23:03:46 -- nvmf/common.sh@470 -- # waitforlisten 3205749 00:18:14.371 23:03:46 -- common/autotest_common.sh@819 -- # '[' -z 3205749 ']' 00:18:14.371 23:03:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:14.371 23:03:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:14.371 23:03:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:14.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:14.371 23:03:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:14.371 23:03:46 -- common/autotest_common.sh@10 -- # set +x 00:18:14.371 [2024-07-24 23:03:46.612726] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:18:14.371 [2024-07-24 23:03:46.612773] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:14.371 EAL: No free 2048 kB hugepages reported on node 1 00:18:14.371 [2024-07-24 23:03:46.685180] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:14.371 [2024-07-24 23:03:46.724905] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:14.371 [2024-07-24 23:03:46.725015] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:14.371 [2024-07-24 23:03:46.725025] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:14.371 [2024-07-24 23:03:46.725035] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:14.371 [2024-07-24 23:03:46.725081] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:14.371 [2024-07-24 23:03:46.725179] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:14.371 [2024-07-24 23:03:46.725202] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:14.371 [2024-07-24 23:03:46.725204] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:15.308 23:03:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:15.308 23:03:47 -- common/autotest_common.sh@852 -- # return 0 00:18:15.308 23:03:47 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:15.308 23:03:47 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:15.308 23:03:47 -- common/autotest_common.sh@10 -- # set +x 00:18:15.308 23:03:47 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:15.308 23:03:47 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:18:15.308 23:03:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:15.308 23:03:47 -- common/autotest_common.sh@10 -- # set +x 00:18:15.308 23:03:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:15.308 23:03:47 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:18:15.308 23:03:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:15.308 23:03:47 -- common/autotest_common.sh@10 -- # set +x 00:18:15.308 23:03:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:15.308 23:03:47 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:15.308 23:03:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:15.308 23:03:47 -- common/autotest_common.sh@10 -- # set +x 00:18:15.308 [2024-07-24 23:03:47.531557] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:15.308 23:03:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:15.308 23:03:47 -- target/bdev_io_wait.sh@22 -- # 
rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:15.308 23:03:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:15.308 23:03:47 -- common/autotest_common.sh@10 -- # set +x 00:18:15.308 Malloc0 00:18:15.308 23:03:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:15.308 23:03:47 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:15.308 23:03:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:15.308 23:03:47 -- common/autotest_common.sh@10 -- # set +x 00:18:15.308 23:03:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:15.308 23:03:47 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:15.308 23:03:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:15.308 23:03:47 -- common/autotest_common.sh@10 -- # set +x 00:18:15.308 23:03:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:15.308 23:03:47 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:15.308 23:03:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:15.308 23:03:47 -- common/autotest_common.sh@10 -- # set +x 00:18:15.308 [2024-07-24 23:03:47.595713] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:15.308 23:03:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:15.308 23:03:47 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3206033 00:18:15.308 23:03:47 -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:18:15.308 23:03:47 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:18:15.308 23:03:47 -- target/bdev_io_wait.sh@30 -- # READ_PID=3206035 00:18:15.308 23:03:47 -- nvmf/common.sh@520 -- # config=() 00:18:15.308 23:03:47 -- nvmf/common.sh@520 -- # local 
subsystem config 00:18:15.308 23:03:47 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:15.308 23:03:47 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:15.308 { 00:18:15.308 "params": { 00:18:15.308 "name": "Nvme$subsystem", 00:18:15.308 "trtype": "$TEST_TRANSPORT", 00:18:15.308 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:15.308 "adrfam": "ipv4", 00:18:15.308 "trsvcid": "$NVMF_PORT", 00:18:15.308 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:15.308 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:15.308 "hdgst": ${hdgst:-false}, 00:18:15.308 "ddgst": ${ddgst:-false} 00:18:15.308 }, 00:18:15.308 "method": "bdev_nvme_attach_controller" 00:18:15.308 } 00:18:15.308 EOF 00:18:15.308 )") 00:18:15.309 23:03:47 -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:18:15.309 23:03:47 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:18:15.309 23:03:47 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3206037 00:18:15.309 23:03:47 -- nvmf/common.sh@520 -- # config=() 00:18:15.309 23:03:47 -- nvmf/common.sh@520 -- # local subsystem config 00:18:15.309 23:03:47 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:15.309 23:03:47 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:15.309 { 00:18:15.309 "params": { 00:18:15.309 "name": "Nvme$subsystem", 00:18:15.309 "trtype": "$TEST_TRANSPORT", 00:18:15.309 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:15.309 "adrfam": "ipv4", 00:18:15.309 "trsvcid": "$NVMF_PORT", 00:18:15.309 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:15.309 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:15.309 "hdgst": ${hdgst:-false}, 00:18:15.309 "ddgst": ${ddgst:-false} 00:18:15.309 }, 00:18:15.309 "method": "bdev_nvme_attach_controller" 00:18:15.309 } 00:18:15.309 EOF 00:18:15.309 )") 00:18:15.309 23:03:47 -- target/bdev_io_wait.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:18:15.309 23:03:47 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:18:15.309 23:03:47 -- nvmf/common.sh@542 -- # cat 00:18:15.309 23:03:47 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3206040 00:18:15.309 23:03:47 -- target/bdev_io_wait.sh@35 -- # sync 00:18:15.309 23:03:47 -- nvmf/common.sh@520 -- # config=() 00:18:15.309 23:03:47 -- nvmf/common.sh@520 -- # local subsystem config 00:18:15.309 23:03:47 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:15.309 23:03:47 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:15.309 { 00:18:15.309 "params": { 00:18:15.309 "name": "Nvme$subsystem", 00:18:15.309 "trtype": "$TEST_TRANSPORT", 00:18:15.309 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:15.309 "adrfam": "ipv4", 00:18:15.309 "trsvcid": "$NVMF_PORT", 00:18:15.309 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:15.309 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:15.309 "hdgst": ${hdgst:-false}, 00:18:15.309 "ddgst": ${ddgst:-false} 00:18:15.309 }, 00:18:15.309 "method": "bdev_nvme_attach_controller" 00:18:15.309 } 00:18:15.309 EOF 00:18:15.309 )") 00:18:15.309 23:03:47 -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:18:15.309 23:03:47 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:18:15.309 23:03:47 -- nvmf/common.sh@542 -- # cat 00:18:15.309 23:03:47 -- nvmf/common.sh@520 -- # config=() 00:18:15.309 23:03:47 -- nvmf/common.sh@520 -- # local subsystem config 00:18:15.309 23:03:47 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:15.309 23:03:47 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:15.309 { 00:18:15.309 "params": { 00:18:15.309 "name": "Nvme$subsystem", 00:18:15.309 "trtype": "$TEST_TRANSPORT", 00:18:15.309 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:18:15.309 "adrfam": "ipv4", 00:18:15.309 "trsvcid": "$NVMF_PORT", 00:18:15.309 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:15.309 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:15.309 "hdgst": ${hdgst:-false}, 00:18:15.309 "ddgst": ${ddgst:-false} 00:18:15.309 }, 00:18:15.309 "method": "bdev_nvme_attach_controller" 00:18:15.309 } 00:18:15.309 EOF 00:18:15.309 )") 00:18:15.309 23:03:47 -- nvmf/common.sh@542 -- # cat 00:18:15.309 23:03:47 -- target/bdev_io_wait.sh@37 -- # wait 3206033 00:18:15.309 23:03:47 -- nvmf/common.sh@542 -- # cat 00:18:15.309 23:03:47 -- nvmf/common.sh@544 -- # jq . 00:18:15.309 23:03:47 -- nvmf/common.sh@544 -- # jq . 00:18:15.309 23:03:47 -- nvmf/common.sh@545 -- # IFS=, 00:18:15.309 23:03:47 -- nvmf/common.sh@544 -- # jq . 00:18:15.309 23:03:47 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:15.309 "params": { 00:18:15.309 "name": "Nvme1", 00:18:15.309 "trtype": "tcp", 00:18:15.309 "traddr": "10.0.0.2", 00:18:15.309 "adrfam": "ipv4", 00:18:15.309 "trsvcid": "4420", 00:18:15.309 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:15.309 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:15.309 "hdgst": false, 00:18:15.309 "ddgst": false 00:18:15.309 }, 00:18:15.309 "method": "bdev_nvme_attach_controller" 00:18:15.309 }' 00:18:15.309 23:03:47 -- nvmf/common.sh@544 -- # jq . 
00:18:15.309 23:03:47 -- nvmf/common.sh@545 -- # IFS=, 00:18:15.309 23:03:47 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:15.309 "params": { 00:18:15.309 "name": "Nvme1", 00:18:15.309 "trtype": "tcp", 00:18:15.309 "traddr": "10.0.0.2", 00:18:15.309 "adrfam": "ipv4", 00:18:15.309 "trsvcid": "4420", 00:18:15.309 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:15.309 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:15.309 "hdgst": false, 00:18:15.309 "ddgst": false 00:18:15.309 }, 00:18:15.309 "method": "bdev_nvme_attach_controller" 00:18:15.309 }' 00:18:15.309 23:03:47 -- nvmf/common.sh@545 -- # IFS=, 00:18:15.309 23:03:47 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:15.309 "params": { 00:18:15.309 "name": "Nvme1", 00:18:15.309 "trtype": "tcp", 00:18:15.309 "traddr": "10.0.0.2", 00:18:15.309 "adrfam": "ipv4", 00:18:15.309 "trsvcid": "4420", 00:18:15.309 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:15.309 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:15.309 "hdgst": false, 00:18:15.309 "ddgst": false 00:18:15.309 }, 00:18:15.309 "method": "bdev_nvme_attach_controller" 00:18:15.309 }' 00:18:15.309 23:03:47 -- nvmf/common.sh@545 -- # IFS=, 00:18:15.309 23:03:47 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:15.309 "params": { 00:18:15.309 "name": "Nvme1", 00:18:15.309 "trtype": "tcp", 00:18:15.309 "traddr": "10.0.0.2", 00:18:15.309 "adrfam": "ipv4", 00:18:15.309 "trsvcid": "4420", 00:18:15.309 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:15.309 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:15.309 "hdgst": false, 00:18:15.309 "ddgst": false 00:18:15.309 }, 00:18:15.310 "method": "bdev_nvme_attach_controller" 00:18:15.310 }' 00:18:15.310 [2024-07-24 23:03:47.648493] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:18:15.310 [2024-07-24 23:03:47.648495] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:18:15.310 [2024-07-24 23:03:47.648549] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:18:15.310 [2024-07-24 23:03:47.648550] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:18:15.310 [2024-07-24 23:03:47.648699] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:18:15.310 [2024-07-24 23:03:47.648755] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:18:15.310 [2024-07-24 23:03:47.652327] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:18:15.310 [2024-07-24 23:03:47.652377] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:18:15.310 EAL: No free 2048 kB hugepages reported on node 1 00:18:15.568 EAL: No free 2048 kB hugepages reported on node 1 00:18:15.568 [2024-07-24 23:03:47.838455] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:15.568 [2024-07-24 23:03:47.861929] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:18:15.568 EAL: No free 2048 kB hugepages reported on node 1 00:18:15.568 [2024-07-24 23:03:47.930807] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:15.568 [2024-07-24 23:03:47.954100] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:18:15.568 EAL: No free 2048 kB hugepages reported on node 1 00:18:15.827 [2024-07-24 23:03:48.027563] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:15.827 [2024-07-24 23:03:48.055042] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:18:15.827 [2024-07-24 23:03:48.083650] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:15.827 [2024-07-24 23:03:48.107239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:18:15.827 Running I/O for 1 seconds... 00:18:15.827 Running I/O for 1 seconds... 00:18:16.086 Running I/O for 1 seconds... 00:18:16.086 Running I/O for 1 seconds... 
00:18:17.022 00:18:17.023 Latency(us) 00:18:17.023 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:17.023 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:18:17.023 Nvme1n1 : 1.01 8539.28 33.36 0.00 0.00 14874.11 5059.38 24117.25 00:18:17.023 =================================================================================================================== 00:18:17.023 Total : 8539.28 33.36 0.00 0.00 14874.11 5059.38 24117.25 00:18:17.023 00:18:17.023 Latency(us) 00:18:17.023 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:17.023 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:18:17.023 Nvme1n1 : 1.01 11474.10 44.82 0.00 0.00 11119.74 5714.74 25060.97 00:18:17.023 =================================================================================================================== 00:18:17.023 Total : 11474.10 44.82 0.00 0.00 11119.74 5714.74 25060.97 00:18:17.023 00:18:17.023 Latency(us) 00:18:17.023 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:17.023 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:18:17.023 Nvme1n1 : 1.00 9094.21 35.52 0.00 0.00 14045.07 3748.66 36280.73 00:18:17.023 =================================================================================================================== 00:18:17.023 Total : 9094.21 35.52 0.00 0.00 14045.07 3748.66 36280.73 00:18:17.023 00:18:17.023 Latency(us) 00:18:17.023 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:17.023 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:18:17.023 Nvme1n1 : 1.00 263259.55 1028.36 0.00 0.00 484.89 201.52 606.21 00:18:17.023 =================================================================================================================== 00:18:17.023 Total : 263259.55 1028.36 0.00 0.00 484.89 201.52 606.21 00:18:17.281 23:03:49 -- target/bdev_io_wait.sh@38 
-- # wait 3206035 00:18:17.281 23:03:49 -- target/bdev_io_wait.sh@39 -- # wait 3206037 00:18:17.281 23:03:49 -- target/bdev_io_wait.sh@40 -- # wait 3206040 00:18:17.281 23:03:49 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:17.281 23:03:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:17.281 23:03:49 -- common/autotest_common.sh@10 -- # set +x 00:18:17.281 23:03:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:17.281 23:03:49 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:18:17.281 23:03:49 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:18:17.281 23:03:49 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:17.281 23:03:49 -- nvmf/common.sh@116 -- # sync 00:18:17.281 23:03:49 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:17.281 23:03:49 -- nvmf/common.sh@119 -- # set +e 00:18:17.281 23:03:49 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:17.281 23:03:49 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:17.281 rmmod nvme_tcp 00:18:17.281 rmmod nvme_fabrics 00:18:17.281 rmmod nvme_keyring 00:18:17.281 23:03:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:17.281 23:03:49 -- nvmf/common.sh@123 -- # set -e 00:18:17.281 23:03:49 -- nvmf/common.sh@124 -- # return 0 00:18:17.281 23:03:49 -- nvmf/common.sh@477 -- # '[' -n 3205749 ']' 00:18:17.281 23:03:49 -- nvmf/common.sh@478 -- # killprocess 3205749 00:18:17.281 23:03:49 -- common/autotest_common.sh@926 -- # '[' -z 3205749 ']' 00:18:17.281 23:03:49 -- common/autotest_common.sh@930 -- # kill -0 3205749 00:18:17.281 23:03:49 -- common/autotest_common.sh@931 -- # uname 00:18:17.281 23:03:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:17.281 23:03:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3205749 00:18:17.540 23:03:49 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:17.540 23:03:49 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 
00:18:17.540 23:03:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3205749' 00:18:17.540 killing process with pid 3205749 00:18:17.540 23:03:49 -- common/autotest_common.sh@945 -- # kill 3205749 00:18:17.540 23:03:49 -- common/autotest_common.sh@950 -- # wait 3205749 00:18:17.540 23:03:49 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:17.540 23:03:49 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:17.540 23:03:49 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:17.540 23:03:49 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:17.540 23:03:49 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:17.540 23:03:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:17.540 23:03:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:17.540 23:03:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:20.073 23:03:51 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:20.073 00:18:20.073 real 0m12.299s 00:18:20.073 user 0m19.511s 00:18:20.073 sys 0m7.093s 00:18:20.073 23:03:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:20.073 23:03:51 -- common/autotest_common.sh@10 -- # set +x 00:18:20.073 ************************************ 00:18:20.073 END TEST nvmf_bdev_io_wait 00:18:20.073 ************************************ 00:18:20.073 23:03:52 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:18:20.073 23:03:52 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:20.073 23:03:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:20.073 23:03:52 -- common/autotest_common.sh@10 -- # set +x 00:18:20.073 ************************************ 00:18:20.073 START TEST nvmf_queue_depth 00:18:20.073 ************************************ 00:18:20.073 23:03:52 -- common/autotest_common.sh@1104 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:18:20.074 * Looking for test storage... 00:18:20.074 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:20.074 23:03:52 -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:20.074 23:03:52 -- nvmf/common.sh@7 -- # uname -s 00:18:20.074 23:03:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:20.074 23:03:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:20.074 23:03:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:20.074 23:03:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:20.074 23:03:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:20.074 23:03:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:20.074 23:03:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:20.074 23:03:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:20.074 23:03:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:20.074 23:03:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:20.074 23:03:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:20.074 23:03:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:18:20.074 23:03:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:20.074 23:03:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:20.074 23:03:52 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:20.074 23:03:52 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:20.074 23:03:52 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:20.074 23:03:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:20.074 23:03:52 -- scripts/common.sh@442 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:18:20.074 23:03:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:20.074 23:03:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:20.074 23:03:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:20.074 23:03:52 -- paths/export.sh@5 -- # export PATH 00:18:20.074 23:03:52 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:20.074 23:03:52 -- nvmf/common.sh@46 -- # : 0 00:18:20.074 23:03:52 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:20.074 23:03:52 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:20.074 23:03:52 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:20.074 23:03:52 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:20.074 23:03:52 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:20.074 23:03:52 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:20.074 23:03:52 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:20.074 23:03:52 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:20.074 23:03:52 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:18:20.074 23:03:52 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:18:20.074 23:03:52 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:20.074 23:03:52 -- target/queue_depth.sh@19 -- # nvmftestinit 00:18:20.074 23:03:52 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:20.074 23:03:52 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:20.074 23:03:52 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:20.074 23:03:52 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:20.074 23:03:52 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:20.074 23:03:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:20.074 23:03:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
00:18:20.074 23:03:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:20.074 23:03:52 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:20.074 23:03:52 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:20.074 23:03:52 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:20.074 23:03:52 -- common/autotest_common.sh@10 -- # set +x 00:18:26.655 23:03:58 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:26.655 23:03:58 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:26.655 23:03:58 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:26.655 23:03:58 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:26.655 23:03:58 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:26.655 23:03:58 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:26.655 23:03:58 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:26.655 23:03:58 -- nvmf/common.sh@294 -- # net_devs=() 00:18:26.655 23:03:58 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:26.655 23:03:58 -- nvmf/common.sh@295 -- # e810=() 00:18:26.655 23:03:58 -- nvmf/common.sh@295 -- # local -ga e810 00:18:26.655 23:03:58 -- nvmf/common.sh@296 -- # x722=() 00:18:26.655 23:03:58 -- nvmf/common.sh@296 -- # local -ga x722 00:18:26.655 23:03:58 -- nvmf/common.sh@297 -- # mlx=() 00:18:26.655 23:03:58 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:26.655 23:03:58 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:26.655 23:03:58 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:26.655 23:03:58 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:26.655 23:03:58 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:26.655 23:03:58 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:26.655 23:03:58 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:26.655 23:03:58 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:26.655 23:03:58 -- 
nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:26.655 23:03:58 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:26.656 23:03:58 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:26.656 23:03:58 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:26.656 23:03:58 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:26.656 23:03:58 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:26.656 23:03:58 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:26.656 23:03:58 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:26.656 23:03:58 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:26.656 23:03:58 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:26.656 23:03:58 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:26.656 23:03:58 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:18:26.656 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:26.656 23:03:58 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:26.656 23:03:58 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:26.656 23:03:58 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:26.656 23:03:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:26.656 23:03:58 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:26.656 23:03:58 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:26.656 23:03:58 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:26.656 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:26.656 23:03:58 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:26.656 23:03:58 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:26.656 23:03:58 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:26.656 23:03:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:26.656 23:03:58 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:26.656 23:03:58 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:26.656 
23:03:58 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:26.656 23:03:58 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:18:26.656 23:03:58 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:26.656 23:03:58 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:26.656 23:03:58 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:26.656 23:03:58 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:26.656 23:03:58 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:18:26.656 Found net devices under 0000:af:00.0: cvl_0_0 00:18:26.656 23:03:58 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:26.656 23:03:58 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:26.656 23:03:58 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:26.656 23:03:58 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:26.656 23:03:58 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:26.656 23:03:58 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:26.656 Found net devices under 0000:af:00.1: cvl_0_1 00:18:26.656 23:03:58 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:26.656 23:03:58 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:26.656 23:03:58 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:26.656 23:03:58 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:26.656 23:03:58 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:26.656 23:03:58 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:26.656 23:03:58 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:26.656 23:03:58 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:26.656 23:03:58 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:26.656 23:03:58 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:26.656 23:03:58 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:26.656 23:03:58 -- 
nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:26.656 23:03:58 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:26.656 23:03:58 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:26.656 23:03:58 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:26.656 23:03:58 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:26.656 23:03:58 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:26.656 23:03:58 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:26.656 23:03:58 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:26.656 23:03:58 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:26.656 23:03:58 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:26.656 23:03:58 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:26.656 23:03:58 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:26.656 23:03:58 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:26.656 23:03:58 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:26.656 23:03:58 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:26.656 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:26.656 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:18:26.656 00:18:26.656 --- 10.0.0.2 ping statistics --- 00:18:26.656 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:26.656 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:18:26.656 23:03:58 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:26.656 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:26.656 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:18:26.656 00:18:26.656 --- 10.0.0.1 ping statistics --- 00:18:26.656 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:26.656 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:18:26.656 23:03:58 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:26.656 23:03:58 -- nvmf/common.sh@410 -- # return 0 00:18:26.656 23:03:58 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:26.656 23:03:58 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:26.656 23:03:58 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:26.656 23:03:58 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:26.656 23:03:58 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:26.656 23:03:58 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:26.656 23:03:58 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:26.656 23:03:58 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:18:26.656 23:03:58 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:26.656 23:03:58 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:26.656 23:03:58 -- common/autotest_common.sh@10 -- # set +x 00:18:26.656 23:03:58 -- nvmf/common.sh@469 -- # nvmfpid=3210010 00:18:26.656 23:03:58 -- nvmf/common.sh@470 -- # waitforlisten 3210010 00:18:26.656 23:03:58 -- common/autotest_common.sh@819 -- # '[' -z 3210010 ']' 00:18:26.656 23:03:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:26.656 23:03:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:26.656 23:03:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:26.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:26.656 23:03:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:26.656 23:03:58 -- common/autotest_common.sh@10 -- # set +x 00:18:26.656 23:03:58 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:26.656 [2024-07-24 23:03:58.521703] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:18:26.656 [2024-07-24 23:03:58.521764] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:26.656 EAL: No free 2048 kB hugepages reported on node 1 00:18:26.656 [2024-07-24 23:03:58.596259] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:26.656 [2024-07-24 23:03:58.634016] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:26.656 [2024-07-24 23:03:58.634119] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:26.656 [2024-07-24 23:03:58.634129] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:26.656 [2024-07-24 23:03:58.634138] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:26.656 [2024-07-24 23:03:58.634161] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:26.916 23:03:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:26.916 23:03:59 -- common/autotest_common.sh@852 -- # return 0 00:18:26.916 23:03:59 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:26.916 23:03:59 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:26.916 23:03:59 -- common/autotest_common.sh@10 -- # set +x 00:18:26.916 23:03:59 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:26.916 23:03:59 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:26.916 23:03:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:26.916 23:03:59 -- common/autotest_common.sh@10 -- # set +x 00:18:26.916 [2024-07-24 23:03:59.335844] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:26.916 23:03:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:26.916 23:03:59 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:26.916 23:03:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:26.916 23:03:59 -- common/autotest_common.sh@10 -- # set +x 00:18:27.176 Malloc0 00:18:27.176 23:03:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:27.176 23:03:59 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:27.176 23:03:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:27.176 23:03:59 -- common/autotest_common.sh@10 -- # set +x 00:18:27.176 23:03:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:27.176 23:03:59 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:27.176 23:03:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:27.176 23:03:59 -- common/autotest_common.sh@10 -- # set +x 00:18:27.176 23:03:59 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:27.176 23:03:59 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:27.176 23:03:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:27.176 23:03:59 -- common/autotest_common.sh@10 -- # set +x 00:18:27.176 [2024-07-24 23:03:59.387738] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:27.176 23:03:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:27.176 23:03:59 -- target/queue_depth.sh@30 -- # bdevperf_pid=3210058 00:18:27.176 23:03:59 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:27.176 23:03:59 -- target/queue_depth.sh@33 -- # waitforlisten 3210058 /var/tmp/bdevperf.sock 00:18:27.176 23:03:59 -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:18:27.176 23:03:59 -- common/autotest_common.sh@819 -- # '[' -z 3210058 ']' 00:18:27.176 23:03:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:27.176 23:03:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:27.176 23:03:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:27.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:27.176 23:03:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:27.176 23:03:59 -- common/autotest_common.sh@10 -- # set +x 00:18:27.176 [2024-07-24 23:03:59.423064] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:18:27.176 [2024-07-24 23:03:59.423109] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3210058 ] 00:18:27.176 EAL: No free 2048 kB hugepages reported on node 1 00:18:27.176 [2024-07-24 23:03:59.494576] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:27.176 [2024-07-24 23:03:59.531616] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:28.114 23:04:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:28.114 23:04:00 -- common/autotest_common.sh@852 -- # return 0 00:18:28.114 23:04:00 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:28.114 23:04:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:28.114 23:04:00 -- common/autotest_common.sh@10 -- # set +x 00:18:28.114 NVMe0n1 00:18:28.114 23:04:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:28.114 23:04:00 -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:28.114 Running I/O for 10 seconds... 
00:18:40.327 00:18:40.327 Latency(us) 00:18:40.327 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:40.327 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:18:40.327 Verification LBA range: start 0x0 length 0x4000 00:18:40.327 NVMe0n1 : 10.05 19393.69 75.76 0.00 0.00 52656.71 9699.33 41313.89 00:18:40.327 =================================================================================================================== 00:18:40.327 Total : 19393.69 75.76 0.00 0.00 52656.71 9699.33 41313.89 00:18:40.327 0 00:18:40.327 23:04:10 -- target/queue_depth.sh@39 -- # killprocess 3210058 00:18:40.327 23:04:10 -- common/autotest_common.sh@926 -- # '[' -z 3210058 ']' 00:18:40.327 23:04:10 -- common/autotest_common.sh@930 -- # kill -0 3210058 00:18:40.327 23:04:10 -- common/autotest_common.sh@931 -- # uname 00:18:40.327 23:04:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:40.327 23:04:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3210058 00:18:40.327 23:04:10 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:40.327 23:04:10 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:40.327 23:04:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3210058' 00:18:40.327 killing process with pid 3210058 00:18:40.327 23:04:10 -- common/autotest_common.sh@945 -- # kill 3210058 00:18:40.327 Received shutdown signal, test time was about 10.000000 seconds 00:18:40.327 00:18:40.327 Latency(us) 00:18:40.327 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:40.327 =================================================================================================================== 00:18:40.327 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:40.327 23:04:10 -- common/autotest_common.sh@950 -- # wait 3210058 00:18:40.327 23:04:10 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:18:40.327 23:04:10 -- 
target/queue_depth.sh@43 -- # nvmftestfini 00:18:40.327 23:04:10 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:40.327 23:04:10 -- nvmf/common.sh@116 -- # sync 00:18:40.327 23:04:10 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:40.327 23:04:10 -- nvmf/common.sh@119 -- # set +e 00:18:40.327 23:04:10 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:40.327 23:04:10 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:40.327 rmmod nvme_tcp 00:18:40.327 rmmod nvme_fabrics 00:18:40.327 rmmod nvme_keyring 00:18:40.327 23:04:10 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:40.328 23:04:10 -- nvmf/common.sh@123 -- # set -e 00:18:40.328 23:04:10 -- nvmf/common.sh@124 -- # return 0 00:18:40.328 23:04:10 -- nvmf/common.sh@477 -- # '[' -n 3210010 ']' 00:18:40.328 23:04:10 -- nvmf/common.sh@478 -- # killprocess 3210010 00:18:40.328 23:04:10 -- common/autotest_common.sh@926 -- # '[' -z 3210010 ']' 00:18:40.328 23:04:10 -- common/autotest_common.sh@930 -- # kill -0 3210010 00:18:40.328 23:04:10 -- common/autotest_common.sh@931 -- # uname 00:18:40.328 23:04:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:40.328 23:04:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3210010 00:18:40.328 23:04:10 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:18:40.328 23:04:10 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:18:40.328 23:04:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3210010' 00:18:40.328 killing process with pid 3210010 00:18:40.328 23:04:10 -- common/autotest_common.sh@945 -- # kill 3210010 00:18:40.328 23:04:10 -- common/autotest_common.sh@950 -- # wait 3210010 00:18:40.328 23:04:11 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:40.328 23:04:11 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:40.328 23:04:11 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:40.328 23:04:11 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
00:18:40.328 23:04:11 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:40.328 23:04:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:40.328 23:04:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:40.328 23:04:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:40.953 23:04:13 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:40.953 00:18:40.953 real 0m21.171s 00:18:40.953 user 0m24.453s 00:18:40.953 sys 0m6.910s 00:18:40.953 23:04:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:40.953 23:04:13 -- common/autotest_common.sh@10 -- # set +x 00:18:40.953 ************************************ 00:18:40.953 END TEST nvmf_queue_depth 00:18:40.953 ************************************ 00:18:40.953 23:04:13 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:18:40.953 23:04:13 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:40.953 23:04:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:40.953 23:04:13 -- common/autotest_common.sh@10 -- # set +x 00:18:40.953 ************************************ 00:18:40.953 START TEST nvmf_multipath 00:18:40.953 ************************************ 00:18:40.953 23:04:13 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:18:40.953 * Looking for test storage... 
00:18:40.953 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:40.953 23:04:13 -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:40.953 23:04:13 -- nvmf/common.sh@7 -- # uname -s 00:18:40.953 23:04:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:40.953 23:04:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:40.953 23:04:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:40.953 23:04:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:40.953 23:04:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:40.953 23:04:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:40.953 23:04:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:40.953 23:04:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:40.953 23:04:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:40.953 23:04:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:40.953 23:04:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:40.953 23:04:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:18:40.953 23:04:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:40.953 23:04:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:40.953 23:04:13 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:40.953 23:04:13 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:40.953 23:04:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:40.953 23:04:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:40.953 23:04:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:40.953 23:04:13 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.953 23:04:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.953 23:04:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.953 23:04:13 -- paths/export.sh@5 -- # export PATH 00:18:40.953 23:04:13 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.953 23:04:13 -- nvmf/common.sh@46 -- # : 0 00:18:40.953 23:04:13 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:40.953 23:04:13 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:40.953 23:04:13 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:40.953 23:04:13 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:40.953 23:04:13 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:40.953 23:04:13 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:40.953 23:04:13 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:40.953 23:04:13 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:40.953 23:04:13 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:40.953 23:04:13 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:41.211 23:04:13 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:18:41.211 23:04:13 -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:41.211 23:04:13 -- target/multipath.sh@43 -- # nvmftestinit 00:18:41.211 23:04:13 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:41.211 23:04:13 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:41.212 23:04:13 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:41.212 23:04:13 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:41.212 23:04:13 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:41.212 23:04:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:18:41.212 23:04:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:41.212 23:04:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:41.212 23:04:13 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:41.212 23:04:13 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:41.212 23:04:13 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:41.212 23:04:13 -- common/autotest_common.sh@10 -- # set +x 00:18:47.783 23:04:20 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:47.783 23:04:20 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:47.783 23:04:20 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:47.783 23:04:20 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:47.783 23:04:20 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:47.783 23:04:20 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:47.783 23:04:20 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:47.783 23:04:20 -- nvmf/common.sh@294 -- # net_devs=() 00:18:47.783 23:04:20 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:47.783 23:04:20 -- nvmf/common.sh@295 -- # e810=() 00:18:47.783 23:04:20 -- nvmf/common.sh@295 -- # local -ga e810 00:18:47.783 23:04:20 -- nvmf/common.sh@296 -- # x722=() 00:18:47.783 23:04:20 -- nvmf/common.sh@296 -- # local -ga x722 00:18:47.783 23:04:20 -- nvmf/common.sh@297 -- # mlx=() 00:18:47.783 23:04:20 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:47.783 23:04:20 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:47.783 23:04:20 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:47.783 23:04:20 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:47.783 23:04:20 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:47.783 23:04:20 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:47.783 23:04:20 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:18:47.783 23:04:20 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:47.783 23:04:20 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:47.783 23:04:20 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:47.783 23:04:20 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:47.783 23:04:20 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:47.783 23:04:20 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:47.784 23:04:20 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:47.784 23:04:20 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:47.784 23:04:20 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:47.784 23:04:20 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:47.784 23:04:20 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:47.784 23:04:20 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:47.784 23:04:20 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:18:47.784 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:47.784 23:04:20 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:47.784 23:04:20 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:47.784 23:04:20 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:47.784 23:04:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:47.784 23:04:20 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:47.784 23:04:20 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:47.784 23:04:20 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:47.784 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:47.784 23:04:20 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:47.784 23:04:20 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:47.784 23:04:20 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:47.784 23:04:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:47.784 23:04:20 
-- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:47.784 23:04:20 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:47.784 23:04:20 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:47.784 23:04:20 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:18:47.784 23:04:20 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:47.784 23:04:20 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:47.784 23:04:20 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:47.784 23:04:20 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:47.784 23:04:20 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:18:47.784 Found net devices under 0000:af:00.0: cvl_0_0 00:18:47.784 23:04:20 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:47.784 23:04:20 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:47.784 23:04:20 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:47.784 23:04:20 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:47.784 23:04:20 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:47.784 23:04:20 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:47.784 Found net devices under 0000:af:00.1: cvl_0_1 00:18:47.784 23:04:20 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:47.784 23:04:20 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:47.784 23:04:20 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:47.784 23:04:20 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:47.784 23:04:20 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:47.784 23:04:20 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:47.784 23:04:20 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:47.784 23:04:20 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:47.784 23:04:20 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:47.784 23:04:20 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 
00:18:47.784 23:04:20 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:47.784 23:04:20 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:47.784 23:04:20 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:47.784 23:04:20 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:47.784 23:04:20 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:47.784 23:04:20 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:47.784 23:04:20 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:47.784 23:04:20 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:47.784 23:04:20 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:47.784 23:04:20 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:47.784 23:04:20 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:47.784 23:04:20 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:47.784 23:04:20 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:48.044 23:04:20 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:48.044 23:04:20 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:48.044 23:04:20 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:48.044 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:48.044 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.158 ms 00:18:48.044 00:18:48.044 --- 10.0.0.2 ping statistics --- 00:18:48.044 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:48.044 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:18:48.044 23:04:20 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:48.044 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:48.044 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.234 ms 00:18:48.044 00:18:48.044 --- 10.0.0.1 ping statistics --- 00:18:48.044 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:48.044 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:18:48.044 23:04:20 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:48.044 23:04:20 -- nvmf/common.sh@410 -- # return 0 00:18:48.044 23:04:20 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:48.044 23:04:20 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:48.044 23:04:20 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:48.044 23:04:20 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:48.044 23:04:20 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:48.044 23:04:20 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:48.044 23:04:20 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:48.044 23:04:20 -- target/multipath.sh@45 -- # '[' -z ']' 00:18:48.044 23:04:20 -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:18:48.044 only one NIC for nvmf test 00:18:48.044 23:04:20 -- target/multipath.sh@47 -- # nvmftestfini 00:18:48.044 23:04:20 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:48.044 23:04:20 -- nvmf/common.sh@116 -- # sync 00:18:48.044 23:04:20 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:48.044 23:04:20 -- nvmf/common.sh@119 -- # set +e 00:18:48.044 23:04:20 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:48.044 23:04:20 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:48.044 rmmod nvme_tcp 00:18:48.044 rmmod nvme_fabrics 00:18:48.044 rmmod nvme_keyring 00:18:48.044 23:04:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:48.044 23:04:20 -- nvmf/common.sh@123 -- # set -e 00:18:48.044 23:04:20 -- nvmf/common.sh@124 -- # return 0 00:18:48.044 23:04:20 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:18:48.044 23:04:20 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:48.044 23:04:20 -- 
nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:48.044 23:04:20 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:48.044 23:04:20 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:48.044 23:04:20 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:48.044 23:04:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:48.044 23:04:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:48.044 23:04:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:50.578 23:04:22 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:50.578 23:04:22 -- target/multipath.sh@48 -- # exit 0 00:18:50.578 23:04:22 -- target/multipath.sh@1 -- # nvmftestfini 00:18:50.578 23:04:22 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:50.578 23:04:22 -- nvmf/common.sh@116 -- # sync 00:18:50.578 23:04:22 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:50.578 23:04:22 -- nvmf/common.sh@119 -- # set +e 00:18:50.578 23:04:22 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:50.578 23:04:22 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:50.578 23:04:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:50.578 23:04:22 -- nvmf/common.sh@123 -- # set -e 00:18:50.578 23:04:22 -- nvmf/common.sh@124 -- # return 0 00:18:50.578 23:04:22 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:18:50.578 23:04:22 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:50.578 23:04:22 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:50.578 23:04:22 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:50.578 23:04:22 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:50.578 23:04:22 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:50.578 23:04:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:50.578 23:04:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:50.578 23:04:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:50.578 23:04:22 
-- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:50.578 00:18:50.578 real 0m9.298s 00:18:50.578 user 0m1.946s 00:18:50.578 sys 0m5.367s 00:18:50.578 23:04:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:50.578 23:04:22 -- common/autotest_common.sh@10 -- # set +x 00:18:50.578 ************************************ 00:18:50.578 END TEST nvmf_multipath 00:18:50.578 ************************************ 00:18:50.578 23:04:22 -- nvmf/nvmf.sh@52 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:18:50.578 23:04:22 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:50.578 23:04:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:50.578 23:04:22 -- common/autotest_common.sh@10 -- # set +x 00:18:50.578 ************************************ 00:18:50.578 START TEST nvmf_zcopy 00:18:50.578 ************************************ 00:18:50.578 23:04:22 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:18:50.578 * Looking for test storage... 
00:18:50.578 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:50.578 23:04:22 -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:50.578 23:04:22 -- nvmf/common.sh@7 -- # uname -s 00:18:50.578 23:04:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:50.578 23:04:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:50.578 23:04:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:50.578 23:04:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:50.578 23:04:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:50.578 23:04:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:50.578 23:04:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:50.578 23:04:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:50.578 23:04:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:50.578 23:04:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:50.578 23:04:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:50.578 23:04:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:18:50.578 23:04:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:50.578 23:04:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:50.578 23:04:22 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:50.578 23:04:22 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:50.578 23:04:22 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:50.578 23:04:22 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:50.578 23:04:22 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:50.578 23:04:22 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:50.578 23:04:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:50.578 23:04:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:50.578 23:04:22 -- paths/export.sh@5 -- # export PATH 00:18:50.578 23:04:22 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:50.578 23:04:22 -- nvmf/common.sh@46 -- # : 0 00:18:50.578 23:04:22 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:50.578 23:04:22 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:50.578 23:04:22 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:50.578 23:04:22 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:50.579 23:04:22 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:50.579 23:04:22 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:50.579 23:04:22 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:50.579 23:04:22 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:50.579 23:04:22 -- target/zcopy.sh@12 -- # nvmftestinit 00:18:50.579 23:04:22 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:50.579 23:04:22 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:50.579 23:04:22 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:50.579 23:04:22 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:50.579 23:04:22 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:50.579 23:04:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:50.579 23:04:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:50.579 23:04:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:50.579 23:04:22 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:50.579 23:04:22 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:50.579 23:04:22 -- 
nvmf/common.sh@284 -- # xtrace_disable 00:18:50.579 23:04:22 -- common/autotest_common.sh@10 -- # set +x 00:18:57.154 23:04:29 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:57.154 23:04:29 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:57.154 23:04:29 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:57.154 23:04:29 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:57.154 23:04:29 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:57.154 23:04:29 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:57.154 23:04:29 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:57.154 23:04:29 -- nvmf/common.sh@294 -- # net_devs=() 00:18:57.154 23:04:29 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:57.154 23:04:29 -- nvmf/common.sh@295 -- # e810=() 00:18:57.154 23:04:29 -- nvmf/common.sh@295 -- # local -ga e810 00:18:57.154 23:04:29 -- nvmf/common.sh@296 -- # x722=() 00:18:57.154 23:04:29 -- nvmf/common.sh@296 -- # local -ga x722 00:18:57.154 23:04:29 -- nvmf/common.sh@297 -- # mlx=() 00:18:57.154 23:04:29 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:57.154 23:04:29 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:57.154 23:04:29 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:57.154 23:04:29 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:57.154 23:04:29 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:57.154 23:04:29 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:57.154 23:04:29 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:57.154 23:04:29 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:57.154 23:04:29 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:57.154 23:04:29 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:57.154 23:04:29 -- nvmf/common.sh@316 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:57.154 23:04:29 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:57.154 23:04:29 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:57.154 23:04:29 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:57.154 23:04:29 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:57.154 23:04:29 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:57.154 23:04:29 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:57.154 23:04:29 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:57.154 23:04:29 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:57.154 23:04:29 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:18:57.154 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:57.154 23:04:29 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:57.154 23:04:29 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:57.154 23:04:29 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:57.154 23:04:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:57.154 23:04:29 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:57.154 23:04:29 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:57.154 23:04:29 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:57.154 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:57.154 23:04:29 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:57.154 23:04:29 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:57.154 23:04:29 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:57.154 23:04:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:57.154 23:04:29 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:57.154 23:04:29 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:57.154 23:04:29 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:57.155 23:04:29 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:18:57.155 23:04:29 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 
00:18:57.155 23:04:29 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:57.155 23:04:29 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:57.155 23:04:29 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:57.155 23:04:29 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:18:57.155 Found net devices under 0000:af:00.0: cvl_0_0 00:18:57.155 23:04:29 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:57.155 23:04:29 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:57.155 23:04:29 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:57.155 23:04:29 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:57.155 23:04:29 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:57.155 23:04:29 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:57.155 Found net devices under 0000:af:00.1: cvl_0_1 00:18:57.155 23:04:29 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:57.155 23:04:29 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:57.155 23:04:29 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:57.155 23:04:29 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:57.155 23:04:29 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:57.155 23:04:29 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:57.155 23:04:29 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:57.155 23:04:29 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:57.155 23:04:29 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:57.155 23:04:29 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:57.155 23:04:29 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:57.155 23:04:29 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:57.155 23:04:29 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:57.155 23:04:29 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
00:18:57.155 23:04:29 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:57.155 23:04:29 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:57.155 23:04:29 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:57.155 23:04:29 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:57.155 23:04:29 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:57.155 23:04:29 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:57.155 23:04:29 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:57.155 23:04:29 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:57.155 23:04:29 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:57.155 23:04:29 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:57.155 23:04:29 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:57.155 23:04:29 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:57.155 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:57.155 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.211 ms 00:18:57.155 00:18:57.155 --- 10.0.0.2 ping statistics --- 00:18:57.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:57.155 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:18:57.155 23:04:29 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:57.155 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:57.155 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:18:57.155 00:18:57.155 --- 10.0.0.1 ping statistics --- 00:18:57.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:57.155 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:18:57.155 23:04:29 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:57.155 23:04:29 -- nvmf/common.sh@410 -- # return 0 00:18:57.155 23:04:29 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:57.155 23:04:29 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:57.155 23:04:29 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:57.155 23:04:29 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:57.155 23:04:29 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:57.155 23:04:29 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:57.155 23:04:29 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:57.155 23:04:29 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:18:57.155 23:04:29 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:57.155 23:04:29 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:57.155 23:04:29 -- common/autotest_common.sh@10 -- # set +x 00:18:57.155 23:04:29 -- nvmf/common.sh@469 -- # nvmfpid=3219357 00:18:57.155 23:04:29 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:57.155 23:04:29 -- nvmf/common.sh@470 -- # waitforlisten 3219357 00:18:57.155 23:04:29 -- common/autotest_common.sh@819 -- # '[' -z 3219357 ']' 00:18:57.155 23:04:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:57.155 23:04:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:57.155 23:04:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:57.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:57.155 23:04:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:57.155 23:04:29 -- common/autotest_common.sh@10 -- # set +x 00:18:57.414 [2024-07-24 23:04:29.628732] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:18:57.414 [2024-07-24 23:04:29.628793] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:57.414 EAL: No free 2048 kB hugepages reported on node 1 00:18:57.414 [2024-07-24 23:04:29.705734] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:57.414 [2024-07-24 23:04:29.742728] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:57.414 [2024-07-24 23:04:29.742834] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:57.414 [2024-07-24 23:04:29.742843] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:57.414 [2024-07-24 23:04:29.742852] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
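Condensed from the trace above (nvmf/common.sh@241-267): the target/initiator split is one network namespace holding the target-side port of the NIC, with NVMe/TCP traffic allowed on port 4420 and a ping in each direction as a sanity check. These commands need root and the real cvl_0_0/cvl_0_1 ports, so they are listed here as a reference sketch, not something to re-run blindly:

```shell
# Namespace plumbing as traced in this run; requires root and the two
# physical ports (cvl_0_0 = target side, cvl_0_1 = initiator side).
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move target port into ns
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator IP, host side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP port
ping -c 1 10.0.0.2                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator
```

The target itself is then started inside the namespace (`ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt ...`), which is why `NVMF_APP` is prefixed with `NVMF_TARGET_NS_CMD` in the trace.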
00:18:57.414 [2024-07-24 23:04:29.742871] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:58.352 23:04:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:58.352 23:04:30 -- common/autotest_common.sh@852 -- # return 0 00:18:58.352 23:04:30 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:58.352 23:04:30 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:58.352 23:04:30 -- common/autotest_common.sh@10 -- # set +x 00:18:58.352 23:04:30 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:58.352 23:04:30 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:18:58.352 23:04:30 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:18:58.352 23:04:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:58.352 23:04:30 -- common/autotest_common.sh@10 -- # set +x 00:18:58.352 [2024-07-24 23:04:30.463815] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:58.352 23:04:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:58.352 23:04:30 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:58.352 23:04:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:58.352 23:04:30 -- common/autotest_common.sh@10 -- # set +x 00:18:58.352 23:04:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:58.352 23:04:30 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:58.352 23:04:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:58.352 23:04:30 -- common/autotest_common.sh@10 -- # set +x 00:18:58.352 [2024-07-24 23:04:30.479956] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:58.352 23:04:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:58.352 23:04:30 -- target/zcopy.sh@27 -- # rpc_cmd 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:58.352 23:04:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:58.352 23:04:30 -- common/autotest_common.sh@10 -- # set +x 00:18:58.352 23:04:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:58.352 23:04:30 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:18:58.352 23:04:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:58.352 23:04:30 -- common/autotest_common.sh@10 -- # set +x 00:18:58.352 malloc0 00:18:58.352 23:04:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:58.352 23:04:30 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:58.352 23:04:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:58.352 23:04:30 -- common/autotest_common.sh@10 -- # set +x 00:18:58.352 23:04:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:58.352 23:04:30 -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:18:58.352 23:04:30 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:18:58.352 23:04:30 -- nvmf/common.sh@520 -- # config=() 00:18:58.352 23:04:30 -- nvmf/common.sh@520 -- # local subsystem config 00:18:58.352 23:04:30 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:58.352 23:04:30 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:58.352 { 00:18:58.352 "params": { 00:18:58.352 "name": "Nvme$subsystem", 00:18:58.353 "trtype": "$TEST_TRANSPORT", 00:18:58.353 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:58.353 "adrfam": "ipv4", 00:18:58.353 "trsvcid": "$NVMF_PORT", 00:18:58.353 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:58.353 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:58.353 "hdgst": ${hdgst:-false}, 00:18:58.353 "ddgst": ${ddgst:-false} 00:18:58.353 }, 00:18:58.353 "method": "bdev_nvme_attach_controller" 00:18:58.353 } 00:18:58.353 
EOF 00:18:58.353 )") 00:18:58.353 23:04:30 -- nvmf/common.sh@542 -- # cat 00:18:58.353 23:04:30 -- nvmf/common.sh@544 -- # jq . 00:18:58.353 23:04:30 -- nvmf/common.sh@545 -- # IFS=, 00:18:58.353 23:04:30 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:58.353 "params": { 00:18:58.353 "name": "Nvme1", 00:18:58.353 "trtype": "tcp", 00:18:58.353 "traddr": "10.0.0.2", 00:18:58.353 "adrfam": "ipv4", 00:18:58.353 "trsvcid": "4420", 00:18:58.353 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:58.353 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:58.353 "hdgst": false, 00:18:58.353 "ddgst": false 00:18:58.353 }, 00:18:58.353 "method": "bdev_nvme_attach_controller" 00:18:58.353 }' 00:18:58.353 [2024-07-24 23:04:30.559076] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:18:58.353 [2024-07-24 23:04:30.559125] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3219604 ] 00:18:58.353 EAL: No free 2048 kB hugepages reported on node 1 00:18:58.353 [2024-07-24 23:04:30.630270] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:58.353 [2024-07-24 23:04:30.666625] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:58.612 Running I/O for 10 seconds... 
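The JSON that gen_nvmf_target_json assembles and pipes to bdevperf over /dev/fd/62 is printed verbatim in the trace above. Rebuilt as a standalone snippet with the values from this run (held in a variable so it could be fed to bdevperf's --json loader):

```shell
# The controller config printed at nvmf/common.sh@546 for this run.
cfg='{
  "params": {
    "name": "Nvme1",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host1",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}'
printf '%s\n' "$cfg"
```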
00:19:08.592
00:19:08.592 Latency(us)
00:19:08.592 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:08.592 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:19:08.592 Verification LBA range: start 0x0 length 0x1000
00:19:08.592 Nvme1n1 : 10.01 13589.95 106.17 0.00 0.00 9396.78 1258.29 15623.78
00:19:08.592 ===================================================================================================================
00:19:08.592 Total : 13589.95 106.17 0.00 0.00 9396.78 1258.29 15623.78
00:19:08.852 23:04:41 -- target/zcopy.sh@39 -- # perfpid=3221457 00:19:08.852 23:04:41 -- target/zcopy.sh@41 -- # xtrace_disable 00:19:08.852 23:04:41 -- common/autotest_common.sh@10 -- # set +x 00:19:08.852 23:04:41 -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:19:08.852 23:04:41 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:19:08.852 23:04:41 -- nvmf/common.sh@520 -- # config=() 00:19:08.852 23:04:41 -- nvmf/common.sh@520 -- # local subsystem config 00:19:08.852 23:04:41 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:08.852 23:04:41 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:08.853 { 00:19:08.853 "params": { 00:19:08.853 "name": "Nvme$subsystem", 00:19:08.853 "trtype": "$TEST_TRANSPORT", 00:19:08.853 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:08.853 "adrfam": "ipv4", 00:19:08.853 "trsvcid": "$NVMF_PORT", 00:19:08.853 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:08.853 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:08.853 "hdgst": ${hdgst:-false}, 00:19:08.853 "ddgst": ${ddgst:-false} 00:19:08.853 }, 00:19:08.853 "method": "bdev_nvme_attach_controller" 00:19:08.853 } 00:19:08.853 EOF 00:19:08.853 )") [2024-07-24 23:04:41.150411] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.853 [2024-07-24
23:04:41.150443] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.853 23:04:41 -- nvmf/common.sh@542 -- # cat 00:19:08.853 23:04:41 -- nvmf/common.sh@544 -- # jq . 00:19:08.853 23:04:41 -- nvmf/common.sh@545 -- # IFS=, 00:19:08.853 23:04:41 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:08.853 "params": { 00:19:08.853 "name": "Nvme1", 00:19:08.853 "trtype": "tcp", 00:19:08.853 "traddr": "10.0.0.2", 00:19:08.853 "adrfam": "ipv4", 00:19:08.853 "trsvcid": "4420", 00:19:08.853 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:08.853 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:08.853 "hdgst": false, 00:19:08.853 "ddgst": false 00:19:08.853 }, 00:19:08.853 "method": "bdev_nvme_attach_controller" 00:19:08.853 }' 00:19:08.853 [2024-07-24 23:04:41.162417] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.853 [2024-07-24 23:04:41.162432] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.853 [2024-07-24 23:04:41.174447] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.853 [2024-07-24 23:04:41.174459] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.853 [2024-07-24 23:04:41.186478] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.853 [2024-07-24 23:04:41.186490] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.853 [2024-07-24 23:04:41.189773] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:19:08.853 [2024-07-24 23:04:41.189834] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3221457 ] 00:19:08.853 [2024-07-24 23:04:41.198509] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.853 [2024-07-24 23:04:41.198521] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.853 [2024-07-24 23:04:41.210542] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.853 [2024-07-24 23:04:41.210553] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.853 [2024-07-24 23:04:41.222574] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.853 [2024-07-24 23:04:41.222586] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.853 EAL: No free 2048 kB hugepages reported on node 1 00:19:08.853 [2024-07-24 23:04:41.234607] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.853 [2024-07-24 23:04:41.234619] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.853 [2024-07-24 23:04:41.246640] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.853 [2024-07-24 23:04:41.246651] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.853 [2024-07-24 23:04:41.258671] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.853 [2024-07-24 23:04:41.258682] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.853 [2024-07-24 23:04:41.261017] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:08.853 [2024-07-24 23:04:41.270705] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:19:08.853 [2024-07-24 23:04:41.270721] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.853 [2024-07-24 23:04:41.282748] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.853 [2024-07-24 23:04:41.282773] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.113 [2024-07-24 23:04:41.294782] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.113 [2024-07-24 23:04:41.294797] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.113 [2024-07-24 23:04:41.297230] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:09.113 [2024-07-24 23:04:41.306807] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.113 [2024-07-24 23:04:41.306826] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.113 [2024-07-24 23:04:41.318842] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.113 [2024-07-24 23:04:41.318861] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.113 [2024-07-24 23:04:41.330870] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.113 [2024-07-24 23:04:41.330890] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.113 [2024-07-24 23:04:41.342903] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.113 [2024-07-24 23:04:41.342917] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.113 [2024-07-24 23:04:41.354934] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.113 [2024-07-24 23:04:41.354947] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.113 [2024-07-24 23:04:41.366973] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.113 [2024-07-24 23:04:41.366984] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.113 [2024-07-24 23:04:41.379010] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.113 [2024-07-24 23:04:41.379031] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.113 [2024-07-24 23:04:41.391036] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.113 [2024-07-24 23:04:41.391050] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.113 [2024-07-24 23:04:41.403069] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.113 [2024-07-24 23:04:41.403084] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.113 [2024-07-24 23:04:41.415099] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.113 [2024-07-24 23:04:41.415110] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.113 [2024-07-24 23:04:41.427133] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.113 [2024-07-24 23:04:41.427143] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.113 [2024-07-24 23:04:41.439165] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.113 [2024-07-24 23:04:41.439176] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.113 [2024-07-24 23:04:41.451218] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.113 [2024-07-24 23:04:41.451233] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.113 [2024-07-24 23:04:41.463251] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:19:09.113 [2024-07-24 23:04:41.463266] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.113 [2024-07-24 23:04:41.475281] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.113 [2024-07-24 23:04:41.475292] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.113 [2024-07-24 23:04:41.487317] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.113 [2024-07-24 23:04:41.487334] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.113 Running I/O for 5 seconds... 00:19:09.113 [2024-07-24 23:04:41.503893] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.113 [2024-07-24 23:04:41.503914] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.113 [2024-07-24 23:04:41.518650] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.113 [2024-07-24 23:04:41.518673] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.113 [2024-07-24 23:04:41.532585] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.113 [2024-07-24 23:04:41.532606] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.373 [2024-07-24 23:04:41.546078] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.373 [2024-07-24 23:04:41.546099] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.373 [2024-07-24 23:04:41.559247] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.373 [2024-07-24 23:04:41.559267] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.373 [2024-07-24 23:04:41.572346] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:19:09.373 [2024-07-24 23:04:41.572369] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.373 [2024-07-24 23:04:41.585610] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.373 [2024-07-24 23:04:41.585630] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.373 [2024-07-24 23:04:41.598643] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.373 [2024-07-24 23:04:41.598662] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.373 [2024-07-24 23:04:41.611649] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.373 [2024-07-24 23:04:41.611669] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.373 [2024-07-24 23:04:41.624399] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.373 [2024-07-24 23:04:41.624418] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.373 [2024-07-24 23:04:41.638141] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.373 [2024-07-24 23:04:41.638161] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.373 [2024-07-24 23:04:41.651242] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.373 [2024-07-24 23:04:41.651263] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.373 [2024-07-24 23:04:41.664632] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.373 [2024-07-24 23:04:41.664652] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.373 [2024-07-24 23:04:41.673247] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.373 
[2024-07-24 23:04:41.673267] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.373 [2024-07-24 23:04:41.682252] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.373 [2024-07-24 23:04:41.682272] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.373 [2024-07-24 23:04:41.695875] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.373 [2024-07-24 23:04:41.695895] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.373 [2024-07-24 23:04:41.709219] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.373 [2024-07-24 23:04:41.709239] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.373 [2024-07-24 23:04:41.721791] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.373 [2024-07-24 23:04:41.721810] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.373 [2024-07-24 23:04:41.730305] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.373 [2024-07-24 23:04:41.730325] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.373 [2024-07-24 23:04:41.743907] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.373 [2024-07-24 23:04:41.743927] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.373 [2024-07-24 23:04:41.757286] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.373 [2024-07-24 23:04:41.757306] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.373 [2024-07-24 23:04:41.770453] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.373 [2024-07-24 23:04:41.770474] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.373 [2024-07-24 23:04:41.784510] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.373 [2024-07-24 23:04:41.784529] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.373 [2024-07-24 23:04:41.795882] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.373 [2024-07-24 23:04:41.795902] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.664 [2024-07-24 23:04:41.809435] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.664 [2024-07-24 23:04:41.809456] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.664 [2024-07-24 23:04:41.817816] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.664 [2024-07-24 23:04:41.817837] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.664 [2024-07-24 23:04:41.827087] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.664 [2024-07-24 23:04:41.827106] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.664 [2024-07-24 23:04:41.840760] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.664 [2024-07-24 23:04:41.840781] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.664 [2024-07-24 23:04:41.854954] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.664 [2024-07-24 23:04:41.854974] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.664 [2024-07-24 23:04:41.868379] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.664 [2024-07-24 23:04:41.868400] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:19:09.665 [2024-07-24 23:04:41.876695] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.665 [2024-07-24 23:04:41.876719] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.665 [2024-07-24 23:04:41.890402] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.665 [2024-07-24 23:04:41.890423] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.665 [2024-07-24 23:04:41.903517] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.665 [2024-07-24 23:04:41.903537] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.665 [2024-07-24 23:04:41.917242] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.665 [2024-07-24 23:04:41.917262] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.665 [2024-07-24 23:04:41.930554] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.665 [2024-07-24 23:04:41.930574] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.665 [2024-07-24 23:04:41.944242] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.665 [2024-07-24 23:04:41.944262] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.665 [2024-07-24 23:04:41.952729] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.665 [2024-07-24 23:04:41.952765] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.665 [2024-07-24 23:04:41.966823] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.665 [2024-07-24 23:04:41.966843] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.665 [2024-07-24 23:04:41.980144] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:19:09.665 [2024-07-24 23:04:41.980164] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same error pair (subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use / nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace) repeats with advancing timestamps, 2024-07-24 23:04:41.993816 through 23:04:44.018024, elapsed 00:19:09.665 through 00:19:11.755 ...]
00:19:11.755 [2024-07-24 23:04:44.018004] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:19:11.755 [2024-07-24 23:04:44.018024] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:19:11.755 [2024-07-24 23:04:44.027082] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.755 [2024-07-24 23:04:44.027102] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.755 [2024-07-24 23:04:44.040849] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.755 [2024-07-24 23:04:44.040870] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.755 [2024-07-24 23:04:44.053815] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.755 [2024-07-24 23:04:44.053835] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.755 [2024-07-24 23:04:44.066743] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.755 [2024-07-24 23:04:44.066763] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.756 [2024-07-24 23:04:44.079520] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.756 [2024-07-24 23:04:44.079541] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.756 [2024-07-24 23:04:44.092813] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.756 [2024-07-24 23:04:44.092834] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.756 [2024-07-24 23:04:44.106392] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.756 [2024-07-24 23:04:44.106414] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.756 [2024-07-24 23:04:44.119611] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.756 [2024-07-24 23:04:44.119631] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.756 [2024-07-24 23:04:44.133413] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:19:11.756 [2024-07-24 23:04:44.133433] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.756 [2024-07-24 23:04:44.146488] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.756 [2024-07-24 23:04:44.146508] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.756 [2024-07-24 23:04:44.159645] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.756 [2024-07-24 23:04:44.159665] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.756 [2024-07-24 23:04:44.173599] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.756 [2024-07-24 23:04:44.173620] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:12.015 [2024-07-24 23:04:44.186265] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:12.015 [2024-07-24 23:04:44.186286] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:12.015 [2024-07-24 23:04:44.199438] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:12.015 [2024-07-24 23:04:44.199458] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:12.015 [2024-07-24 23:04:44.212490] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:12.015 [2024-07-24 23:04:44.212510] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:12.015 [2024-07-24 23:04:44.225908] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:12.015 [2024-07-24 23:04:44.225928] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:12.015 [2024-07-24 23:04:44.239047] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:12.015 
[2024-07-24 23:04:44.239068] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:12.015 [2024-07-24 23:04:44.252181] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:12.015 [2024-07-24 23:04:44.252201] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:12.015 [2024-07-24 23:04:44.266251] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:12.015 [2024-07-24 23:04:44.266272] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:12.015 [2024-07-24 23:04:44.278035] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:12.015 [2024-07-24 23:04:44.278056] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:12.015 [2024-07-24 23:04:44.291376] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:12.015 [2024-07-24 23:04:44.291396] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:12.015 [2024-07-24 23:04:44.300050] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:12.015 [2024-07-24 23:04:44.300071] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:12.015 [2024-07-24 23:04:44.314321] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:12.015 [2024-07-24 23:04:44.314343] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:12.015 [2024-07-24 23:04:44.327220] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:12.015 [2024-07-24 23:04:44.327241] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:12.015 [2024-07-24 23:04:44.340578] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:12.015 [2024-07-24 23:04:44.340599] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:12.015 [2024-07-24 23:04:44.354026] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:12.015 [2024-07-24 23:04:44.354047] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:12.015 [2024-07-24 23:04:44.367491] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:12.015 [2024-07-24 23:04:44.367511] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:12.015 [2024-07-24 23:04:44.380643] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:12.015 [2024-07-24 23:04:44.380662] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:12.015 [2024-07-24 23:04:44.393992] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:12.015 [2024-07-24 23:04:44.394012] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:12.015 [2024-07-24 23:04:44.407009] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:12.015 [2024-07-24 23:04:44.407029] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:12.015 [2024-07-24 23:04:44.420650] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:12.015 [2024-07-24 23:04:44.420671] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:12.015 [2024-07-24 23:04:44.433786] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:12.015 [2024-07-24 23:04:44.433806] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:12.275 [2024-07-24 23:04:44.447605] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:12.275 [2024-07-24 23:04:44.447626] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:19:12.275 [2024-07-24 23:04:44.463239] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:12.275 [2024-07-24 23:04:44.463259] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:12.275 [2024-07-24 23:04:44.477658] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:12.275 [2024-07-24 23:04:44.477677] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:12.275 [2024-07-24 23:04:44.489019] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:12.275 [2024-07-24 23:04:44.489039] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:12.275 [2024-07-24 23:04:44.502746] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:12.275 [2024-07-24 23:04:44.502767] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:12.275 [2024-07-24 23:04:44.515965] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:12.275 [2024-07-24 23:04:44.515985] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:12.275 [2024-07-24 23:04:44.529243] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:12.275 [2024-07-24 23:04:44.529262] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:12.275 [2024-07-24 23:04:44.542610] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:12.275 [2024-07-24 23:04:44.542630] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:12.275 [2024-07-24 23:04:44.556428] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:12.275 [2024-07-24 23:04:44.556448] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:12.275 [2024-07-24 23:04:44.567684] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:12.275 [2024-07-24 23:04:44.567704] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:12.275 [2024-07-24 23:04:44.582246] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:12.275 [2024-07-24 23:04:44.582267] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:12.275 [2024-07-24 23:04:44.593001] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:12.275 [2024-07-24 23:04:44.593022] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:12.275 [2024-07-24 23:04:44.601666] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:12.275 [2024-07-24 23:04:44.601686] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:12.275 [2024-07-24 23:04:44.610469] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:12.275 [2024-07-24 23:04:44.610488] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:12.275 [2024-07-24 23:04:44.624312] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:12.275 [2024-07-24 23:04:44.624331] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:12.275 [2024-07-24 23:04:44.637532] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:12.275 [2024-07-24 23:04:44.637552] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:12.275 [2024-07-24 23:04:44.650502] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:12.275 [2024-07-24 23:04:44.650525] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:12.275 [2024-07-24 23:04:44.663617] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:19:12.275 [2024-07-24 23:04:44.663637] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:12.275 [2024-07-24 23:04:44.677445] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:12.275 [2024-07-24 23:04:44.677465] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:12.275 [2024-07-24 23:04:44.689112] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:12.275 [2024-07-24 23:04:44.689132] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:12.275 [2024-07-24 23:04:44.702896] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:12.275 [2024-07-24 23:04:44.702916] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:12.535 [2024-07-24 23:04:44.716063] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:12.535 [2024-07-24 23:04:44.716084] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:12.535 [2024-07-24 23:04:44.729442] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:12.535 [2024-07-24 23:04:44.729463] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:12.535 [2024-07-24 23:04:44.742407] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:12.535 [2024-07-24 23:04:44.742428] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:12.535 [2024-07-24 23:04:44.755726] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:12.535 [2024-07-24 23:04:44.755746] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:12.535 [2024-07-24 23:04:44.764105] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:12.535 
[2024-07-24 23:04:44.764126] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:12.535 [2024-07-24 23:04:44.772909] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:12.535 [2024-07-24 23:04:44.772928] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:12.535 [2024-07-24 23:04:44.786485] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:12.535 [2024-07-24 23:04:44.786505] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:12.535 [2024-07-24 23:04:44.799736] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:12.535 [2024-07-24 23:04:44.799756] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:12.535 [2024-07-24 23:04:44.812460] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:12.535 [2024-07-24 23:04:44.812479] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:12.535 [2024-07-24 23:04:44.825573] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:12.535 [2024-07-24 23:04:44.825593] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:12.535 [2024-07-24 23:04:44.839251] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:12.535 [2024-07-24 23:04:44.839271] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:12.535 [2024-07-24 23:04:44.852260] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:12.535 [2024-07-24 23:04:44.852279] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:12.535 [2024-07-24 23:04:44.866418] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:12.535 [2024-07-24 23:04:44.866438] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:12.535 [2024-07-24 23:04:44.878933] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:12.535 [2024-07-24 23:04:44.878953] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:12.535 [2024-07-24 23:04:44.892079] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:12.535 [2024-07-24 23:04:44.892103] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:12.535 [2024-07-24 23:04:44.904783] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:12.535 [2024-07-24 23:04:44.904803] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:12.535 [2024-07-24 23:04:44.918601] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:12.535 [2024-07-24 23:04:44.918621] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:12.535 [2024-07-24 23:04:44.931744] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:12.535 [2024-07-24 23:04:44.931763] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:12.535 [2024-07-24 23:04:44.944945] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:12.535 [2024-07-24 23:04:44.944965] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:12.535 [2024-07-24 23:04:44.953088] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:12.535 [2024-07-24 23:04:44.953106] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:12.535 [2024-07-24 23:04:44.961709] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:12.535 [2024-07-24 23:04:44.961734] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:19:12.795 [2024-07-24 23:04:44.975497] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:12.795 [2024-07-24 23:04:44.975517] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:12.795 [2024-07-24 23:04:44.988298] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:12.795 [2024-07-24 23:04:44.988317] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:12.795 [2024-07-24 23:04:45.001567] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:12.795 [2024-07-24 23:04:45.001587] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:12.795 [2024-07-24 23:04:45.015729] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:12.795 [2024-07-24 23:04:45.015765] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:12.795 [2024-07-24 23:04:45.029934] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:12.795 [2024-07-24 23:04:45.029954] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:12.795 [2024-07-24 23:04:45.042684] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:12.795 [2024-07-24 23:04:45.042704] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:12.795 [2024-07-24 23:04:45.055837] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:12.795 [2024-07-24 23:04:45.055858] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:12.795 [2024-07-24 23:04:45.068626] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:12.795 [2024-07-24 23:04:45.068646] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:12.795 [2024-07-24 23:04:45.082010] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:12.795 [2024-07-24 23:04:45.082030] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:12.795 [2024-07-24 23:04:45.095560] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:12.795 [2024-07-24 23:04:45.095581] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:12.795 [2024-07-24 23:04:45.108921] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:12.795 [2024-07-24 23:04:45.108941] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:12.795 [2024-07-24 23:04:45.122934] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:12.795 [2024-07-24 23:04:45.122953] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:12.795 [2024-07-24 23:04:45.137119] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:12.795 [2024-07-24 23:04:45.137143] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:12.795 [2024-07-24 23:04:45.148398] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:12.795 [2024-07-24 23:04:45.148419] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:12.795 [2024-07-24 23:04:45.161756] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:12.795 [2024-07-24 23:04:45.161775] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:12.795 [2024-07-24 23:04:45.175950] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:12.795 [2024-07-24 23:04:45.175970] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:12.795 [2024-07-24 23:04:45.187141] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:19:12.795 [2024-07-24 23:04:45.187161] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:12.795 [2024-07-24 23:04:45.201506] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:12.795 [2024-07-24 23:04:45.201525] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:12.795 [2024-07-24 23:04:45.212683] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:12.795 [2024-07-24 23:04:45.212703] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:13.055 [2024-07-24 23:04:45.226894] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:13.055 [2024-07-24 23:04:45.226914] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:13.055 [2024-07-24 23:04:45.238197] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:13.055 [2024-07-24 23:04:45.238217] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:13.055 [2024-07-24 23:04:45.251299] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:13.055 [2024-07-24 23:04:45.251319] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:13.055 [2024-07-24 23:04:45.263879] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:13.055 [2024-07-24 23:04:45.263899] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:13.055 [2024-07-24 23:04:45.277481] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:13.055 [2024-07-24 23:04:45.277501] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:13.055 [2024-07-24 23:04:45.285838] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:13.055 
[2024-07-24 23:04:45.285857] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:13.055 [2024-07-24 23:04:45.299486] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:13.055 [2024-07-24 23:04:45.299506] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:13.055 [2024-07-24 23:04:45.312821] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:13.055 [2024-07-24 23:04:45.312841] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:13.055 [2024-07-24 23:04:45.326755] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:13.055 [2024-07-24 23:04:45.326775] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:13.055 [2024-07-24 23:04:45.337968] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:13.055 [2024-07-24 23:04:45.337988] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:13.055 [2024-07-24 23:04:45.351576] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:13.055 [2024-07-24 23:04:45.351598] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:13.055 [2024-07-24 23:04:45.360293] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:13.055 [2024-07-24 23:04:45.360314] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:13.055 [2024-07-24 23:04:45.373883] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:13.055 [2024-07-24 23:04:45.373908] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:13.055 [2024-07-24 23:04:45.386772] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:13.055 [2024-07-24 23:04:45.386794] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:13.055 [2024-07-24 23:04:45.399789] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:13.055 [2024-07-24 23:04:45.399810] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:13.055 [2024-07-24 23:04:45.412792] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:13.055 [2024-07-24 23:04:45.412813] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:13.055 [2024-07-24 23:04:45.426459] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:13.055 [2024-07-24 23:04:45.426481] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:13.055 [2024-07-24 23:04:45.439612] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:13.055 [2024-07-24 23:04:45.439632] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:13.055 [2024-07-24 23:04:45.453182] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:13.055 [2024-07-24 23:04:45.453203] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:13.055 [2024-07-24 23:04:45.466275] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:13.055 [2024-07-24 23:04:45.466297] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:13.055 [2024-07-24 23:04:45.479378] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:13.055 [2024-07-24 23:04:45.479399] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:13.315 [2024-07-24 23:04:45.492539] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:13.315 [2024-07-24 23:04:45.492559] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:19:13.315 [2024-07-24 23:04:45.505563] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:19:13.315 [2024-07-24 23:04:45.505583] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:19:14.092 Latency(us)
00:19:14.092 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:14.092 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:19:14.092 Nvme1n1 : 5.01 18013.25 140.73 0.00 0.00 7100.56 2411.72 21915.24
00:19:14.092 ===================================================================================================================
00:19:14.092 Total : 18013.25 140.73 
0.00 0.00 7100.56 2411.72 21915.24
00:19:14.092 [2024-07-24 23:04:46.511730] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:19:14.092 [2024-07-24 23:04:46.511765] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:19:14.352 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3221457) - No such process
00:19:14.352 23:04:46 -- target/zcopy.sh@49 -- # wait 3221457
00:19:14.352 23:04:46 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:19:14.352 23:04:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:14.352 23:04:46 -- common/autotest_common.sh@10 -- # set +x 00:19:14.352 23:04:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:14.352 23:04:46 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:19:14.352 23:04:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:14.352 23:04:46 -- common/autotest_common.sh@10 -- # set +x 00:19:14.352 delay0 00:19:14.352 23:04:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:14.352 23:04:46 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:19:14.352 23:04:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:14.352 23:04:46 -- common/autotest_common.sh@10 -- # set +x 00:19:14.352 23:04:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:14.352 23:04:46 -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:19:14.352 EAL: No free 2048 kB hugepages reported on node 1 00:19:14.611 [2024-07-24 23:04:46.852816] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:19:21.179 [2024-07-24 23:04:52.966029] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11847c0 is same with the state(5) to be set 00:19:21.179 Initializing NVMe Controllers 00:19:21.179 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:21.179 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:21.179 Initialization complete. Launching workers. 
00:19:21.179 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 100 00:19:21.179 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 377, failed to submit 43 00:19:21.179 success 179, unsuccess 198, failed 0 00:19:21.179 23:04:52 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:19:21.179 23:04:52 -- target/zcopy.sh@60 -- # nvmftestfini 00:19:21.179 23:04:52 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:21.179 23:04:52 -- nvmf/common.sh@116 -- # sync 00:19:21.179 23:04:52 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:21.179 23:04:52 -- nvmf/common.sh@119 -- # set +e 00:19:21.179 23:04:52 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:21.179 23:04:52 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:21.179 rmmod nvme_tcp 00:19:21.179 rmmod nvme_fabrics 00:19:21.179 rmmod nvme_keyring 00:19:21.179 23:04:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:21.179 23:04:53 -- nvmf/common.sh@123 -- # set -e 00:19:21.179 23:04:53 -- nvmf/common.sh@124 -- # return 0 00:19:21.179 23:04:53 -- nvmf/common.sh@477 -- # '[' -n 3219357 ']' 00:19:21.179 23:04:53 -- nvmf/common.sh@478 -- # killprocess 3219357 00:19:21.179 23:04:53 -- common/autotest_common.sh@926 -- # '[' -z 3219357 ']' 00:19:21.179 23:04:53 -- common/autotest_common.sh@930 -- # kill -0 3219357 00:19:21.179 23:04:53 -- common/autotest_common.sh@931 -- # uname 00:19:21.179 23:04:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:21.179 23:04:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3219357 00:19:21.179 23:04:53 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:19:21.179 23:04:53 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:19:21.179 23:04:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3219357' 00:19:21.179 killing process with pid 3219357 00:19:21.179 23:04:53 -- common/autotest_common.sh@945 -- # kill 3219357 
00:19:21.179 23:04:53 -- common/autotest_common.sh@950 -- # wait 3219357 00:19:21.179 23:04:53 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:21.179 23:04:53 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:21.179 23:04:53 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:21.179 23:04:53 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:21.179 23:04:53 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:21.179 23:04:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:21.179 23:04:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:21.179 23:04:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:23.082 23:04:55 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:23.082 00:19:23.082 real 0m32.761s 00:19:23.082 user 0m42.018s 00:19:23.082 sys 0m13.156s 00:19:23.083 23:04:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:23.083 23:04:55 -- common/autotest_common.sh@10 -- # set +x 00:19:23.083 ************************************ 00:19:23.083 END TEST nvmf_zcopy 00:19:23.083 ************************************ 00:19:23.083 23:04:55 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:19:23.083 23:04:55 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:23.083 23:04:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:23.083 23:04:55 -- common/autotest_common.sh@10 -- # set +x 00:19:23.083 ************************************ 00:19:23.083 START TEST nvmf_nmic 00:19:23.083 ************************************ 00:19:23.083 23:04:55 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:19:23.083 * Looking for test storage... 
00:19:23.083 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:23.083 23:04:55 -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:23.083 23:04:55 -- nvmf/common.sh@7 -- # uname -s 00:19:23.083 23:04:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:23.083 23:04:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:23.083 23:04:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:23.083 23:04:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:23.083 23:04:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:23.083 23:04:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:23.083 23:04:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:23.083 23:04:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:23.083 23:04:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:23.083 23:04:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:23.342 23:04:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:23.342 23:04:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:19:23.342 23:04:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:23.342 23:04:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:23.342 23:04:55 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:23.342 23:04:55 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:23.342 23:04:55 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:23.342 23:04:55 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:23.342 23:04:55 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:23.342 23:04:55 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:23.342 23:04:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:23.342 23:04:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:23.342 23:04:55 -- paths/export.sh@5 -- # export PATH 00:19:23.342 23:04:55 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:23.342 23:04:55 -- nvmf/common.sh@46 -- # : 0 00:19:23.342 23:04:55 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:23.342 23:04:55 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:23.342 23:04:55 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:23.342 23:04:55 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:23.342 23:04:55 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:23.342 23:04:55 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:23.342 23:04:55 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:23.342 23:04:55 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:23.342 23:04:55 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:23.342 23:04:55 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:23.342 23:04:55 -- target/nmic.sh@14 -- # nvmftestinit 00:19:23.342 23:04:55 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:23.342 23:04:55 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:23.342 23:04:55 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:23.343 23:04:55 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:23.343 23:04:55 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:23.343 23:04:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:23.343 23:04:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:23.343 23:04:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:23.343 23:04:55 -- nvmf/common.sh@402 
-- # [[ phy != virt ]] 00:19:23.343 23:04:55 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:23.343 23:04:55 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:23.343 23:04:55 -- common/autotest_common.sh@10 -- # set +x 00:19:29.916 23:05:01 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:29.916 23:05:01 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:29.916 23:05:01 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:29.916 23:05:01 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:29.916 23:05:01 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:29.916 23:05:01 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:29.916 23:05:01 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:29.916 23:05:01 -- nvmf/common.sh@294 -- # net_devs=() 00:19:29.916 23:05:01 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:29.916 23:05:01 -- nvmf/common.sh@295 -- # e810=() 00:19:29.916 23:05:01 -- nvmf/common.sh@295 -- # local -ga e810 00:19:29.916 23:05:01 -- nvmf/common.sh@296 -- # x722=() 00:19:29.916 23:05:01 -- nvmf/common.sh@296 -- # local -ga x722 00:19:29.916 23:05:01 -- nvmf/common.sh@297 -- # mlx=() 00:19:29.916 23:05:01 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:29.916 23:05:01 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:29.916 23:05:01 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:29.916 23:05:01 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:29.916 23:05:01 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:29.916 23:05:01 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:29.916 23:05:01 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:29.917 23:05:01 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:29.917 23:05:01 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:29.917 23:05:01 -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:29.917 23:05:01 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:29.917 23:05:01 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:29.917 23:05:01 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:29.917 23:05:01 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:29.917 23:05:01 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:19:29.917 23:05:01 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:19:29.917 23:05:01 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:19:29.917 23:05:01 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:29.917 23:05:01 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:29.917 23:05:01 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:19:29.917 Found 0000:af:00.0 (0x8086 - 0x159b) 00:19:29.917 23:05:01 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:29.917 23:05:01 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:29.917 23:05:01 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:29.917 23:05:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:29.917 23:05:01 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:29.917 23:05:01 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:29.917 23:05:01 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:19:29.917 Found 0000:af:00.1 (0x8086 - 0x159b) 00:19:29.917 23:05:01 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:29.917 23:05:01 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:29.917 23:05:01 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:29.917 23:05:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:29.917 23:05:01 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:29.917 23:05:01 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:29.917 23:05:01 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:19:29.917 23:05:01 -- nvmf/common.sh@371 -- # [[ tcp == 
rdma ]] 00:19:29.917 23:05:01 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:29.917 23:05:01 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:29.917 23:05:01 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:29.917 23:05:01 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:29.917 23:05:01 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:19:29.917 Found net devices under 0000:af:00.0: cvl_0_0 00:19:29.917 23:05:01 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:29.917 23:05:01 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:29.917 23:05:01 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:29.917 23:05:01 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:29.917 23:05:01 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:29.917 23:05:01 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:19:29.917 Found net devices under 0000:af:00.1: cvl_0_1 00:19:29.917 23:05:01 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:29.917 23:05:01 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:29.917 23:05:01 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:29.917 23:05:01 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:29.917 23:05:01 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:29.917 23:05:01 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:29.917 23:05:01 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:29.917 23:05:01 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:29.917 23:05:01 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:29.917 23:05:01 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:29.917 23:05:01 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:29.917 23:05:01 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:29.917 23:05:01 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 
00:19:29.917 23:05:01 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:29.917 23:05:01 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:29.917 23:05:01 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:19:29.917 23:05:01 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:19:29.917 23:05:01 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:19:29.917 23:05:01 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:29.917 23:05:01 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:29.917 23:05:01 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:29.917 23:05:01 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:29.917 23:05:01 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:29.917 23:05:01 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:29.917 23:05:01 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:29.917 23:05:01 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:19:29.917 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:29.917 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:19:29.917 00:19:29.917 --- 10.0.0.2 ping statistics --- 00:19:29.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:29.917 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:19:29.917 23:05:01 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:29.917 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:29.917 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:19:29.917 00:19:29.917 --- 10.0.0.1 ping statistics --- 00:19:29.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:29.917 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:19:29.917 23:05:01 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:29.917 23:05:01 -- nvmf/common.sh@410 -- # return 0 00:19:29.917 23:05:01 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:29.917 23:05:01 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:29.917 23:05:01 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:29.917 23:05:01 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:29.917 23:05:01 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:29.917 23:05:01 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:29.917 23:05:01 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:29.917 23:05:01 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:19:29.917 23:05:01 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:29.917 23:05:01 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:29.917 23:05:01 -- common/autotest_common.sh@10 -- # set +x 00:19:29.917 23:05:01 -- nvmf/common.sh@469 -- # nvmfpid=3227034 00:19:29.917 23:05:01 -- nvmf/common.sh@470 -- # waitforlisten 3227034 00:19:29.917 23:05:01 -- common/autotest_common.sh@819 -- # '[' -z 3227034 ']' 00:19:29.917 23:05:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:29.917 23:05:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:29.917 23:05:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:29.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:29.917 23:05:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:29.917 23:05:01 -- common/autotest_common.sh@10 -- # set +x 00:19:29.917 23:05:01 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:29.917 [2024-07-24 23:05:02.036239] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:19:29.917 [2024-07-24 23:05:02.036287] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:29.917 EAL: No free 2048 kB hugepages reported on node 1 00:19:29.917 [2024-07-24 23:05:02.110974] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:29.917 [2024-07-24 23:05:02.151164] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:29.917 [2024-07-24 23:05:02.151270] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:29.917 [2024-07-24 23:05:02.151280] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:29.917 [2024-07-24 23:05:02.151289] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:29.917 [2024-07-24 23:05:02.151374] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:29.917 [2024-07-24 23:05:02.151392] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:29.917 [2024-07-24 23:05:02.151477] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:29.917 [2024-07-24 23:05:02.151479] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:30.485 23:05:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:30.485 23:05:02 -- common/autotest_common.sh@852 -- # return 0 00:19:30.485 23:05:02 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:30.485 23:05:02 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:30.485 23:05:02 -- common/autotest_common.sh@10 -- # set +x 00:19:30.485 23:05:02 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:30.485 23:05:02 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:30.485 23:05:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:30.485 23:05:02 -- common/autotest_common.sh@10 -- # set +x 00:19:30.485 [2024-07-24 23:05:02.887083] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:30.485 23:05:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:30.485 23:05:02 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:30.485 23:05:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:30.485 23:05:02 -- common/autotest_common.sh@10 -- # set +x 00:19:30.779 Malloc0 00:19:30.779 23:05:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:30.779 23:05:02 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:30.779 23:05:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:30.779 23:05:02 -- common/autotest_common.sh@10 -- # set +x 00:19:30.779 23:05:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 
]] 00:19:30.779 23:05:02 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:30.779 23:05:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:30.779 23:05:02 -- common/autotest_common.sh@10 -- # set +x 00:19:30.779 23:05:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:30.779 23:05:02 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:30.779 23:05:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:30.779 23:05:02 -- common/autotest_common.sh@10 -- # set +x 00:19:30.779 [2024-07-24 23:05:02.945557] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:30.779 23:05:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:30.779 23:05:02 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:19:30.779 test case1: single bdev can't be used in multiple subsystems 00:19:30.779 23:05:02 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:19:30.779 23:05:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:30.779 23:05:02 -- common/autotest_common.sh@10 -- # set +x 00:19:30.779 23:05:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:30.779 23:05:02 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:19:30.779 23:05:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:30.779 23:05:02 -- common/autotest_common.sh@10 -- # set +x 00:19:30.779 23:05:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:30.779 23:05:02 -- target/nmic.sh@28 -- # nmic_status=0 00:19:30.779 23:05:02 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:19:30.779 23:05:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:30.779 23:05:02 -- common/autotest_common.sh@10 
-- # set +x 00:19:30.779 [2024-07-24 23:05:02.973479] bdev.c:7940:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:19:30.779 [2024-07-24 23:05:02.973499] subsystem.c:1819:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:19:30.780 [2024-07-24 23:05:02.973509] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:30.780 request: 00:19:30.780 { 00:19:30.780 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:19:30.780 "namespace": { 00:19:30.780 "bdev_name": "Malloc0" 00:19:30.780 }, 00:19:30.780 "method": "nvmf_subsystem_add_ns", 00:19:30.780 "req_id": 1 00:19:30.780 } 00:19:30.780 Got JSON-RPC error response 00:19:30.780 response: 00:19:30.780 { 00:19:30.780 "code": -32602, 00:19:30.780 "message": "Invalid parameters" 00:19:30.780 } 00:19:30.780 23:05:02 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:19:30.780 23:05:02 -- target/nmic.sh@29 -- # nmic_status=1 00:19:30.780 23:05:02 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:19:30.780 23:05:02 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:19:30.780 Adding namespace failed - expected result. 
00:19:30.780 23:05:02 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:19:30.780 test case2: host connect to nvmf target in multiple paths 00:19:30.780 23:05:02 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:19:30.780 23:05:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:30.780 23:05:02 -- common/autotest_common.sh@10 -- # set +x 00:19:30.780 [2024-07-24 23:05:02.985645] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:30.780 23:05:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:30.780 23:05:02 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:32.158 23:05:04 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:19:33.544 23:05:05 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:19:33.544 23:05:05 -- common/autotest_common.sh@1177 -- # local i=0 00:19:33.544 23:05:05 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:19:33.544 23:05:05 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:19:33.544 23:05:05 -- common/autotest_common.sh@1184 -- # sleep 2 00:19:35.450 23:05:07 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:19:35.450 23:05:07 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:19:35.450 23:05:07 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:19:35.450 23:05:07 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:19:35.450 23:05:07 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 
00:19:35.450 23:05:07 -- common/autotest_common.sh@1187 -- # return 0 00:19:35.450 23:05:07 -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:19:35.450 [global] 00:19:35.450 thread=1 00:19:35.450 invalidate=1 00:19:35.450 rw=write 00:19:35.450 time_based=1 00:19:35.450 runtime=1 00:19:35.450 ioengine=libaio 00:19:35.450 direct=1 00:19:35.450 bs=4096 00:19:35.450 iodepth=1 00:19:35.450 norandommap=0 00:19:35.450 numjobs=1 00:19:35.450 00:19:35.450 verify_dump=1 00:19:35.450 verify_backlog=512 00:19:35.450 verify_state_save=0 00:19:35.450 do_verify=1 00:19:35.450 verify=crc32c-intel 00:19:35.450 [job0] 00:19:35.450 filename=/dev/nvme0n1 00:19:35.450 Could not set queue depth (nvme0n1) 00:19:35.709 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:35.709 fio-3.35 00:19:35.709 Starting 1 thread 00:19:37.090 00:19:37.090 job0: (groupid=0, jobs=1): err= 0: pid=3228222: Wed Jul 24 23:05:09 2024 00:19:37.090 read: IOPS=1240, BW=4963KiB/s (5082kB/s)(4968KiB/1001msec) 00:19:37.090 slat (nsec): min=8749, max=44268, avg=9267.24, stdev=1169.90 00:19:37.090 clat (usec): min=358, max=1993, avg=486.69, stdev=50.02 00:19:37.090 lat (usec): min=368, max=2002, avg=495.96, stdev=50.00 00:19:37.090 clat percentiles (usec): 00:19:37.090 | 1.00th=[ 383], 5.00th=[ 449], 10.00th=[ 461], 20.00th=[ 469], 00:19:37.090 | 30.00th=[ 486], 40.00th=[ 490], 50.00th=[ 494], 60.00th=[ 494], 00:19:37.090 | 70.00th=[ 498], 80.00th=[ 498], 90.00th=[ 506], 95.00th=[ 510], 00:19:37.090 | 99.00th=[ 537], 99.50th=[ 578], 99.90th=[ 676], 99.95th=[ 1991], 00:19:37.090 | 99.99th=[ 1991] 00:19:37.090 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:19:37.090 slat (nsec): min=11536, max=42532, avg=12571.03, stdev=1648.93 00:19:37.090 clat (usec): min=188, max=530, avg=232.35, stdev=32.58 00:19:37.090 lat (usec): min=200, max=572, 
avg=244.92, stdev=32.82 00:19:37.090 clat percentiles (usec): 00:19:37.090 | 1.00th=[ 192], 5.00th=[ 194], 10.00th=[ 196], 20.00th=[ 202], 00:19:37.090 | 30.00th=[ 212], 40.00th=[ 217], 50.00th=[ 225], 60.00th=[ 231], 00:19:37.090 | 70.00th=[ 247], 80.00th=[ 273], 90.00th=[ 281], 95.00th=[ 285], 00:19:37.090 | 99.00th=[ 293], 99.50th=[ 297], 99.90th=[ 367], 99.95th=[ 529], 00:19:37.090 | 99.99th=[ 529] 00:19:37.090 bw ( KiB/s): min= 7560, max= 7560, per=100.00%, avg=7560.00, stdev= 0.00, samples=1 00:19:37.090 iops : min= 1890, max= 1890, avg=1890.00, stdev= 0.00, samples=1 00:19:37.090 lat (usec) : 250=39.85%, 500=51.66%, 750=8.46% 00:19:37.090 lat (msec) : 2=0.04% 00:19:37.090 cpu : usr=2.20%, sys=5.20%, ctx=2778, majf=0, minf=2 00:19:37.090 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:37.090 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:37.090 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:37.090 issued rwts: total=1242,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:37.090 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:37.090 00:19:37.090 Run status group 0 (all jobs): 00:19:37.090 READ: bw=4963KiB/s (5082kB/s), 4963KiB/s-4963KiB/s (5082kB/s-5082kB/s), io=4968KiB (5087kB), run=1001-1001msec 00:19:37.090 WRITE: bw=6138KiB/s (6285kB/s), 6138KiB/s-6138KiB/s (6285kB/s-6285kB/s), io=6144KiB (6291kB), run=1001-1001msec 00:19:37.090 00:19:37.090 Disk stats (read/write): 00:19:37.090 nvme0n1: ios=1094/1536, merge=0/0, ticks=539/339, in_queue=878, util=91.68% 00:19:37.090 23:05:09 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:37.090 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:19:37.090 23:05:09 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:37.090 23:05:09 -- common/autotest_common.sh@1198 -- # local i=0 00:19:37.090 23:05:09 -- common/autotest_common.sh@1199 -- # lsblk -o 
NAME,SERIAL 00:19:37.091 23:05:09 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:37.091 23:05:09 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:37.091 23:05:09 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:37.091 23:05:09 -- common/autotest_common.sh@1210 -- # return 0 00:19:37.091 23:05:09 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:19:37.091 23:05:09 -- target/nmic.sh@53 -- # nvmftestfini 00:19:37.091 23:05:09 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:37.091 23:05:09 -- nvmf/common.sh@116 -- # sync 00:19:37.091 23:05:09 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:37.091 23:05:09 -- nvmf/common.sh@119 -- # set +e 00:19:37.091 23:05:09 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:37.091 23:05:09 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:37.091 rmmod nvme_tcp 00:19:37.091 rmmod nvme_fabrics 00:19:37.091 rmmod nvme_keyring 00:19:37.091 23:05:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:37.091 23:05:09 -- nvmf/common.sh@123 -- # set -e 00:19:37.091 23:05:09 -- nvmf/common.sh@124 -- # return 0 00:19:37.091 23:05:09 -- nvmf/common.sh@477 -- # '[' -n 3227034 ']' 00:19:37.091 23:05:09 -- nvmf/common.sh@478 -- # killprocess 3227034 00:19:37.091 23:05:09 -- common/autotest_common.sh@926 -- # '[' -z 3227034 ']' 00:19:37.091 23:05:09 -- common/autotest_common.sh@930 -- # kill -0 3227034 00:19:37.350 23:05:09 -- common/autotest_common.sh@931 -- # uname 00:19:37.350 23:05:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:37.350 23:05:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3227034 00:19:37.350 23:05:09 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:37.350 23:05:09 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:37.350 23:05:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3227034' 00:19:37.350 killing process with pid 3227034 
00:19:37.350 23:05:09 -- common/autotest_common.sh@945 -- # kill 3227034 00:19:37.350 23:05:09 -- common/autotest_common.sh@950 -- # wait 3227034 00:19:37.350 23:05:09 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:37.350 23:05:09 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:37.350 23:05:09 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:37.350 23:05:09 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:37.350 23:05:09 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:37.350 23:05:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:37.350 23:05:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:37.350 23:05:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:39.890 23:05:11 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:39.890 00:19:39.890 real 0m16.452s 00:19:39.890 user 0m40.039s 00:19:39.890 sys 0m6.102s 00:19:39.890 23:05:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:39.890 23:05:11 -- common/autotest_common.sh@10 -- # set +x 00:19:39.890 ************************************ 00:19:39.890 END TEST nvmf_nmic 00:19:39.890 ************************************ 00:19:39.890 23:05:11 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:19:39.890 23:05:11 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:39.890 23:05:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:39.890 23:05:11 -- common/autotest_common.sh@10 -- # set +x 00:19:39.890 ************************************ 00:19:39.890 START TEST nvmf_fio_target 00:19:39.890 ************************************ 00:19:39.890 23:05:11 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:19:39.890 * Looking for test storage... 
00:19:39.890 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:39.890 23:05:12 -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:39.890 23:05:12 -- nvmf/common.sh@7 -- # uname -s 00:19:39.890 23:05:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:39.890 23:05:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:39.890 23:05:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:39.890 23:05:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:39.890 23:05:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:39.890 23:05:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:39.890 23:05:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:39.890 23:05:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:39.890 23:05:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:39.890 23:05:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:39.890 23:05:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:39.890 23:05:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:19:39.890 23:05:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:39.890 23:05:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:39.890 23:05:12 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:39.890 23:05:12 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:39.890 23:05:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:39.890 23:05:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:39.890 23:05:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:39.890 23:05:12 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.890 23:05:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.890 23:05:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.890 23:05:12 -- paths/export.sh@5 -- # export PATH 00:19:39.891 23:05:12 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.891 23:05:12 -- nvmf/common.sh@46 -- # : 0 00:19:39.891 23:05:12 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:39.891 23:05:12 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:39.891 23:05:12 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:39.891 23:05:12 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:39.891 23:05:12 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:39.891 23:05:12 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:39.891 23:05:12 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:39.891 23:05:12 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:39.891 23:05:12 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:39.891 23:05:12 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:39.891 23:05:12 -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:39.891 23:05:12 -- target/fio.sh@16 -- # nvmftestinit 00:19:39.891 23:05:12 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:39.891 23:05:12 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:39.891 23:05:12 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:39.891 23:05:12 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:39.891 23:05:12 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:39.891 23:05:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:39.891 23:05:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
00:19:39.891 23:05:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:39.891 23:05:12 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:39.891 23:05:12 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:39.891 23:05:12 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:39.891 23:05:12 -- common/autotest_common.sh@10 -- # set +x 00:19:46.464 23:05:18 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:46.464 23:05:18 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:46.464 23:05:18 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:46.464 23:05:18 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:46.464 23:05:18 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:46.464 23:05:18 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:46.464 23:05:18 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:46.464 23:05:18 -- nvmf/common.sh@294 -- # net_devs=() 00:19:46.464 23:05:18 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:46.464 23:05:18 -- nvmf/common.sh@295 -- # e810=() 00:19:46.464 23:05:18 -- nvmf/common.sh@295 -- # local -ga e810 00:19:46.464 23:05:18 -- nvmf/common.sh@296 -- # x722=() 00:19:46.464 23:05:18 -- nvmf/common.sh@296 -- # local -ga x722 00:19:46.464 23:05:18 -- nvmf/common.sh@297 -- # mlx=() 00:19:46.464 23:05:18 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:46.464 23:05:18 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:46.464 23:05:18 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:46.464 23:05:18 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:46.464 23:05:18 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:46.464 23:05:18 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:46.464 23:05:18 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:46.464 23:05:18 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:46.464 23:05:18 -- 
nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:46.464 23:05:18 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:46.464 23:05:18 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:46.464 23:05:18 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:46.464 23:05:18 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:46.464 23:05:18 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:46.464 23:05:18 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:19:46.464 23:05:18 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:19:46.464 23:05:18 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:19:46.464 23:05:18 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:46.464 23:05:18 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:46.464 23:05:18 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:19:46.464 Found 0000:af:00.0 (0x8086 - 0x159b) 00:19:46.464 23:05:18 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:46.464 23:05:18 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:46.464 23:05:18 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:46.464 23:05:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:46.464 23:05:18 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:46.464 23:05:18 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:46.464 23:05:18 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:19:46.464 Found 0000:af:00.1 (0x8086 - 0x159b) 00:19:46.464 23:05:18 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:46.464 23:05:18 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:46.464 23:05:18 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:46.464 23:05:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:46.464 23:05:18 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:46.464 23:05:18 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:46.464 
23:05:18 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:19:46.464 23:05:18 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:19:46.464 23:05:18 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:46.464 23:05:18 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:46.464 23:05:18 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:46.464 23:05:18 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:46.464 23:05:18 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:19:46.464 Found net devices under 0000:af:00.0: cvl_0_0 00:19:46.464 23:05:18 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:46.464 23:05:18 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:46.464 23:05:18 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:46.464 23:05:18 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:46.464 23:05:18 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:46.464 23:05:18 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:19:46.464 Found net devices under 0000:af:00.1: cvl_0_1 00:19:46.464 23:05:18 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:46.464 23:05:18 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:46.464 23:05:18 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:46.464 23:05:18 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:46.464 23:05:18 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:46.464 23:05:18 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:46.464 23:05:18 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:46.465 23:05:18 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:46.465 23:05:18 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:46.465 23:05:18 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:46.465 23:05:18 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:46.465 23:05:18 -- 
nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:46.465 23:05:18 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:19:46.465 23:05:18 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:46.465 23:05:18 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:46.465 23:05:18 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:19:46.465 23:05:18 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:19:46.465 23:05:18 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:19:46.465 23:05:18 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:46.465 23:05:18 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:46.465 23:05:18 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:46.465 23:05:18 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:46.465 23:05:18 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:46.465 23:05:18 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:46.465 23:05:18 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:46.465 23:05:18 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:19:46.465 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:46.465 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:19:46.465 00:19:46.465 --- 10.0.0.2 ping statistics --- 00:19:46.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:46.465 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:19:46.465 23:05:18 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:46.465 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:46.465 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:19:46.465 00:19:46.465 --- 10.0.0.1 ping statistics --- 00:19:46.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:46.465 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:19:46.465 23:05:18 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:46.465 23:05:18 -- nvmf/common.sh@410 -- # return 0 00:19:46.465 23:05:18 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:46.465 23:05:18 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:46.465 23:05:18 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:46.465 23:05:18 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:46.465 23:05:18 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:46.465 23:05:18 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:46.465 23:05:18 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:46.465 23:05:18 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:19:46.465 23:05:18 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:46.465 23:05:18 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:46.465 23:05:18 -- common/autotest_common.sh@10 -- # set +x 00:19:46.465 23:05:18 -- nvmf/common.sh@469 -- # nvmfpid=3231991 00:19:46.465 23:05:18 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:46.465 23:05:18 -- nvmf/common.sh@470 -- # waitforlisten 3231991 00:19:46.465 23:05:18 -- common/autotest_common.sh@819 -- # '[' -z 3231991 ']' 00:19:46.465 23:05:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:46.465 23:05:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:46.465 23:05:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:46.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:46.465 23:05:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:46.465 23:05:18 -- common/autotest_common.sh@10 -- # set +x 00:19:46.465 [2024-07-24 23:05:18.651012] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:19:46.465 [2024-07-24 23:05:18.651061] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:46.465 EAL: No free 2048 kB hugepages reported on node 1 00:19:46.465 [2024-07-24 23:05:18.726525] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:46.465 [2024-07-24 23:05:18.765589] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:46.465 [2024-07-24 23:05:18.765699] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:46.465 [2024-07-24 23:05:18.765712] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:46.465 [2024-07-24 23:05:18.765730] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:46.465 [2024-07-24 23:05:18.765775] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:46.465 [2024-07-24 23:05:18.765873] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:46.465 [2024-07-24 23:05:18.765965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:46.465 [2024-07-24 23:05:18.765967] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:47.034 23:05:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:47.034 23:05:19 -- common/autotest_common.sh@852 -- # return 0 00:19:47.034 23:05:19 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:47.034 23:05:19 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:47.034 23:05:19 -- common/autotest_common.sh@10 -- # set +x 00:19:47.293 23:05:19 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:47.293 23:05:19 -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:47.293 [2024-07-24 23:05:19.644581] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:47.293 23:05:19 -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:47.552 23:05:19 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:19:47.552 23:05:19 -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:47.811 23:05:20 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:19:47.811 23:05:20 -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:47.811 23:05:20 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:19:47.811 23:05:20 -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:48.071 23:05:20 -- target/fio.sh@25 -- # 
raid_malloc_bdevs+=Malloc3 00:19:48.071 23:05:20 -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:19:48.331 23:05:20 -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:48.590 23:05:20 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:19:48.590 23:05:20 -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:48.590 23:05:20 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:19:48.590 23:05:20 -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:48.849 23:05:21 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:19:48.849 23:05:21 -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:19:49.108 23:05:21 -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:49.108 23:05:21 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:19:49.108 23:05:21 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:49.368 23:05:21 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:19:49.368 23:05:21 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:49.671 23:05:21 -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:49.671 [2024-07-24 23:05:22.005449] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:49.671 23:05:22 -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:19:49.957 23:05:22 -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:19:49.957 23:05:22 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:51.337 23:05:23 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:19:51.337 23:05:23 -- common/autotest_common.sh@1177 -- # local i=0 00:19:51.337 23:05:23 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:19:51.337 23:05:23 -- common/autotest_common.sh@1179 -- # [[ -n 4 ]] 00:19:51.337 23:05:23 -- common/autotest_common.sh@1180 -- # nvme_device_counter=4 00:19:51.337 23:05:23 -- common/autotest_common.sh@1184 -- # sleep 2 00:19:53.243 23:05:25 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:19:53.243 23:05:25 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:19:53.243 23:05:25 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:19:53.503 23:05:25 -- common/autotest_common.sh@1186 -- # nvme_devices=4 00:19:53.503 23:05:25 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:19:53.503 23:05:25 -- common/autotest_common.sh@1187 -- # return 0 00:19:53.503 23:05:25 -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:19:53.503 [global] 00:19:53.503 thread=1 00:19:53.503 invalidate=1 00:19:53.503 rw=write 00:19:53.503 time_based=1 00:19:53.503 runtime=1 00:19:53.503 ioengine=libaio 00:19:53.503 direct=1 00:19:53.503 bs=4096 00:19:53.503 
iodepth=1 00:19:53.503 norandommap=0 00:19:53.503 numjobs=1 00:19:53.503 00:19:53.503 verify_dump=1 00:19:53.503 verify_backlog=512 00:19:53.503 verify_state_save=0 00:19:53.503 do_verify=1 00:19:53.503 verify=crc32c-intel 00:19:53.503 [job0] 00:19:53.503 filename=/dev/nvme0n1 00:19:53.503 [job1] 00:19:53.503 filename=/dev/nvme0n2 00:19:53.503 [job2] 00:19:53.503 filename=/dev/nvme0n3 00:19:53.503 [job3] 00:19:53.503 filename=/dev/nvme0n4 00:19:53.503 Could not set queue depth (nvme0n1) 00:19:53.503 Could not set queue depth (nvme0n2) 00:19:53.503 Could not set queue depth (nvme0n3) 00:19:53.503 Could not set queue depth (nvme0n4) 00:19:53.762 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:53.762 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:53.762 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:53.762 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:53.762 fio-3.35 00:19:53.762 Starting 4 threads 00:19:55.142 00:19:55.142 job0: (groupid=0, jobs=1): err= 0: pid=3233536: Wed Jul 24 23:05:27 2024 00:19:55.142 read: IOPS=21, BW=85.2KiB/s (87.2kB/s)(88.0KiB/1033msec) 00:19:55.142 slat (nsec): min=11792, max=28761, avg=23295.32, stdev=3125.08 00:19:55.142 clat (usec): min=40850, max=41665, avg=41035.22, stdev=189.43 00:19:55.142 lat (usec): min=40872, max=41687, avg=41058.52, stdev=187.75 00:19:55.142 clat percentiles (usec): 00:19:55.142 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:19:55.142 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:55.142 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:19:55.142 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:19:55.142 | 99.99th=[41681] 00:19:55.142 write: IOPS=495, BW=1983KiB/s 
(2030kB/s)(2048KiB/1033msec); 0 zone resets 00:19:55.142 slat (nsec): min=11844, max=68386, avg=13053.69, stdev=2929.49 00:19:55.142 clat (usec): min=202, max=409, avg=236.69, stdev=21.12 00:19:55.142 lat (usec): min=214, max=477, avg=249.75, stdev=22.39 00:19:55.142 clat percentiles (usec): 00:19:55.142 | 1.00th=[ 204], 5.00th=[ 210], 10.00th=[ 215], 20.00th=[ 221], 00:19:55.142 | 30.00th=[ 225], 40.00th=[ 229], 50.00th=[ 233], 60.00th=[ 239], 00:19:55.142 | 70.00th=[ 245], 80.00th=[ 251], 90.00th=[ 262], 95.00th=[ 273], 00:19:55.142 | 99.00th=[ 297], 99.50th=[ 322], 99.90th=[ 408], 99.95th=[ 408], 00:19:55.142 | 99.99th=[ 408] 00:19:55.142 bw ( KiB/s): min= 4096, max= 4096, per=51.95%, avg=4096.00, stdev= 0.00, samples=1 00:19:55.142 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:55.142 lat (usec) : 250=75.09%, 500=20.79% 00:19:55.142 lat (msec) : 50=4.12% 00:19:55.142 cpu : usr=1.16%, sys=0.29%, ctx=535, majf=0, minf=1 00:19:55.142 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:55.142 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:55.142 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:55.142 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:55.142 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:55.142 job1: (groupid=0, jobs=1): err= 0: pid=3233537: Wed Jul 24 23:05:27 2024 00:19:55.142 read: IOPS=21, BW=84.7KiB/s (86.7kB/s)(88.0KiB/1039msec) 00:19:55.142 slat (nsec): min=11688, max=26780, avg=24917.45, stdev=3099.95 00:19:55.142 clat (usec): min=40869, max=41361, avg=40986.69, stdev=103.15 00:19:55.142 lat (usec): min=40895, max=41373, avg=41011.61, stdev=100.90 00:19:55.142 clat percentiles (usec): 00:19:55.142 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:19:55.142 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:55.142 | 70.00th=[41157], 80.00th=[41157], 
90.00th=[41157], 95.00th=[41157], 00:19:55.142 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:19:55.142 | 99.99th=[41157] 00:19:55.142 write: IOPS=492, BW=1971KiB/s (2018kB/s)(2048KiB/1039msec); 0 zone resets 00:19:55.142 slat (usec): min=9, max=1922, avg=18.45, stdev=84.34 00:19:55.142 clat (usec): min=205, max=617, avg=244.07, stdev=29.77 00:19:55.142 lat (usec): min=220, max=2357, avg=262.52, stdev=97.20 00:19:55.142 clat percentiles (usec): 00:19:55.142 | 1.00th=[ 210], 5.00th=[ 221], 10.00th=[ 223], 20.00th=[ 227], 00:19:55.142 | 30.00th=[ 229], 40.00th=[ 235], 50.00th=[ 241], 60.00th=[ 245], 00:19:55.142 | 70.00th=[ 251], 80.00th=[ 258], 90.00th=[ 265], 95.00th=[ 281], 00:19:55.142 | 99.00th=[ 318], 99.50th=[ 437], 99.90th=[ 619], 99.95th=[ 619], 00:19:55.142 | 99.99th=[ 619] 00:19:55.142 bw ( KiB/s): min= 4096, max= 4096, per=51.95%, avg=4096.00, stdev= 0.00, samples=1 00:19:55.142 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:55.142 lat (usec) : 250=66.29%, 500=29.40%, 750=0.19% 00:19:55.142 lat (msec) : 50=4.12% 00:19:55.142 cpu : usr=0.87%, sys=0.67%, ctx=536, majf=0, minf=2 00:19:55.142 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:55.142 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:55.142 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:55.142 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:55.142 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:55.142 job2: (groupid=0, jobs=1): err= 0: pid=3233538: Wed Jul 24 23:05:27 2024 00:19:55.142 read: IOPS=21, BW=85.0KiB/s (87.1kB/s)(88.0KiB/1035msec) 00:19:55.142 slat (nsec): min=11565, max=27037, avg=23200.09, stdev=2842.95 00:19:55.142 clat (usec): min=40848, max=41628, avg=41034.63, stdev=204.82 00:19:55.142 lat (usec): min=40872, max=41650, avg=41057.83, stdev=203.17 00:19:55.142 clat percentiles (usec): 00:19:55.142 | 
1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:19:55.142 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:55.142 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:19:55.142 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:19:55.142 | 99.99th=[41681] 00:19:55.142 write: IOPS=494, BW=1979KiB/s (2026kB/s)(2048KiB/1035msec); 0 zone resets 00:19:55.142 slat (nsec): min=9543, max=34406, avg=13216.30, stdev=2133.76 00:19:55.142 clat (usec): min=205, max=417, avg=239.97, stdev=19.68 00:19:55.142 lat (usec): min=217, max=451, avg=253.18, stdev=20.20 00:19:55.142 clat percentiles (usec): 00:19:55.142 | 1.00th=[ 208], 5.00th=[ 215], 10.00th=[ 219], 20.00th=[ 225], 00:19:55.142 | 30.00th=[ 229], 40.00th=[ 233], 50.00th=[ 239], 60.00th=[ 243], 00:19:55.142 | 70.00th=[ 249], 80.00th=[ 255], 90.00th=[ 265], 95.00th=[ 273], 00:19:55.142 | 99.00th=[ 285], 99.50th=[ 306], 99.90th=[ 416], 99.95th=[ 416], 00:19:55.142 | 99.99th=[ 416] 00:19:55.142 bw ( KiB/s): min= 4096, max= 4096, per=51.95%, avg=4096.00, stdev= 0.00, samples=1 00:19:55.142 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:55.142 lat (usec) : 250=70.60%, 500=25.28% 00:19:55.142 lat (msec) : 50=4.12% 00:19:55.142 cpu : usr=0.97%, sys=0.39%, ctx=534, majf=0, minf=1 00:19:55.142 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:55.142 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:55.142 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:55.142 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:55.142 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:55.142 job3: (groupid=0, jobs=1): err= 0: pid=3233539: Wed Jul 24 23:05:27 2024 00:19:55.142 read: IOPS=20, BW=81.6KiB/s (83.5kB/s)(84.0KiB/1030msec) 00:19:55.142 slat (nsec): min=11544, max=27114, avg=24202.19, stdev=2998.80 00:19:55.142 
clat (usec): min=40675, max=41071, avg=40951.86, stdev=82.13 00:19:55.142 lat (usec): min=40687, max=41096, avg=40976.06, stdev=84.18 00:19:55.142 clat percentiles (usec): 00:19:55.142 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:19:55.142 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:55.142 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:19:55.142 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:19:55.142 | 99.99th=[41157] 00:19:55.142 write: IOPS=497, BW=1988KiB/s (2036kB/s)(2048KiB/1030msec); 0 zone resets 00:19:55.142 slat (nsec): min=10689, max=38089, avg=13922.26, stdev=2143.44 00:19:55.142 clat (usec): min=215, max=1711, avg=312.80, stdev=106.36 00:19:55.142 lat (usec): min=227, max=1727, avg=326.72, stdev=107.04 00:19:55.142 clat percentiles (usec): 00:19:55.142 | 1.00th=[ 217], 5.00th=[ 223], 10.00th=[ 233], 20.00th=[ 249], 00:19:55.142 | 30.00th=[ 265], 40.00th=[ 273], 50.00th=[ 297], 60.00th=[ 314], 00:19:55.142 | 70.00th=[ 334], 80.00th=[ 355], 90.00th=[ 383], 95.00th=[ 474], 00:19:55.142 | 99.00th=[ 594], 99.50th=[ 922], 99.90th=[ 1713], 99.95th=[ 1713], 00:19:55.142 | 99.99th=[ 1713] 00:19:55.142 bw ( KiB/s): min= 4096, max= 4096, per=51.95%, avg=4096.00, stdev= 0.00, samples=1 00:19:55.142 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:55.142 lat (usec) : 250=20.45%, 500=71.86%, 750=3.00%, 1000=0.38% 00:19:55.142 lat (msec) : 2=0.38%, 50=3.94% 00:19:55.142 cpu : usr=0.29%, sys=1.26%, ctx=536, majf=0, minf=1 00:19:55.142 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:55.142 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:55.142 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:55.142 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:55.142 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:55.142 
00:19:55.142 Run status group 0 (all jobs): 00:19:55.142 READ: bw=335KiB/s (343kB/s), 81.6KiB/s-85.2KiB/s (83.5kB/s-87.2kB/s), io=348KiB (356kB), run=1030-1039msec 00:19:55.142 WRITE: bw=7885KiB/s (8074kB/s), 1971KiB/s-1988KiB/s (2018kB/s-2036kB/s), io=8192KiB (8389kB), run=1030-1039msec 00:19:55.142 00:19:55.142 Disk stats (read/write): 00:19:55.142 nvme0n1: ios=67/512, merge=0/0, ticks=735/111, in_queue=846, util=84.97% 00:19:55.142 nvme0n2: ios=67/512, merge=0/0, ticks=818/122, in_queue=940, util=87.31% 00:19:55.142 nvme0n3: ios=73/512, merge=0/0, ticks=764/109, in_queue=873, util=93.09% 00:19:55.142 nvme0n4: ios=39/512, merge=0/0, ticks=1560/149, in_queue=1709, util=94.17% 00:19:55.142 23:05:27 -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:19:55.142 [global] 00:19:55.142 thread=1 00:19:55.142 invalidate=1 00:19:55.142 rw=randwrite 00:19:55.142 time_based=1 00:19:55.142 runtime=1 00:19:55.142 ioengine=libaio 00:19:55.142 direct=1 00:19:55.142 bs=4096 00:19:55.142 iodepth=1 00:19:55.142 norandommap=0 00:19:55.142 numjobs=1 00:19:55.142 00:19:55.142 verify_dump=1 00:19:55.142 verify_backlog=512 00:19:55.142 verify_state_save=0 00:19:55.142 do_verify=1 00:19:55.142 verify=crc32c-intel 00:19:55.142 [job0] 00:19:55.142 filename=/dev/nvme0n1 00:19:55.142 [job1] 00:19:55.142 filename=/dev/nvme0n2 00:19:55.143 [job2] 00:19:55.143 filename=/dev/nvme0n3 00:19:55.143 [job3] 00:19:55.143 filename=/dev/nvme0n4 00:19:55.143 Could not set queue depth (nvme0n1) 00:19:55.143 Could not set queue depth (nvme0n2) 00:19:55.143 Could not set queue depth (nvme0n3) 00:19:55.143 Could not set queue depth (nvme0n4) 00:19:55.401 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:55.401 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:55.401 job2: (g=0): rw=randwrite, 
bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:55.401 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:55.401 fio-3.35 00:19:55.401 Starting 4 threads 00:19:56.779 00:19:56.779 job0: (groupid=0, jobs=1): err= 0: pid=3233968: Wed Jul 24 23:05:28 2024 00:19:56.779 read: IOPS=27, BW=111KiB/s (113kB/s)(112KiB/1011msec) 00:19:56.779 slat (nsec): min=9319, max=25912, avg=20838.79, stdev=6663.95 00:19:56.779 clat (usec): min=460, max=41983, avg=30971.86, stdev=17880.07 00:19:56.779 lat (usec): min=476, max=42008, avg=30992.70, stdev=17886.01 00:19:56.779 clat percentiles (usec): 00:19:56.779 | 1.00th=[ 461], 5.00th=[ 482], 10.00th=[ 494], 20.00th=[ 693], 00:19:56.779 | 30.00th=[40633], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:56.779 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:19:56.779 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:56.779 | 99.99th=[42206] 00:19:56.779 write: IOPS=506, BW=2026KiB/s (2074kB/s)(2048KiB/1011msec); 0 zone resets 00:19:56.779 slat (nsec): min=11240, max=40173, avg=12545.93, stdev=1582.40 00:19:56.779 clat (usec): min=199, max=478, avg=264.56, stdev=26.16 00:19:56.779 lat (usec): min=211, max=518, avg=277.11, stdev=26.67 00:19:56.779 clat percentiles (usec): 00:19:56.779 | 1.00th=[ 206], 5.00th=[ 212], 10.00th=[ 221], 20.00th=[ 245], 00:19:56.779 | 30.00th=[ 265], 40.00th=[ 273], 50.00th=[ 277], 60.00th=[ 277], 00:19:56.779 | 70.00th=[ 277], 80.00th=[ 281], 90.00th=[ 285], 95.00th=[ 285], 00:19:56.779 | 99.00th=[ 302], 99.50th=[ 334], 99.90th=[ 478], 99.95th=[ 478], 00:19:56.779 | 99.99th=[ 478] 00:19:56.779 bw ( KiB/s): min= 4096, max= 4096, per=28.89%, avg=4096.00, stdev= 0.00, samples=1 00:19:56.779 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:56.779 lat (usec) : 250=22.41%, 500=72.96%, 750=0.74% 00:19:56.779 lat (msec) : 50=3.89% 
00:19:56.779 cpu : usr=0.40%, sys=0.59%, ctx=542, majf=0, minf=1 00:19:56.779 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:56.779 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:56.779 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:56.779 issued rwts: total=28,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:56.779 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:56.779 job1: (groupid=0, jobs=1): err= 0: pid=3233969: Wed Jul 24 23:05:28 2024 00:19:56.779 read: IOPS=520, BW=2081KiB/s (2131kB/s)(2104KiB/1011msec) 00:19:56.779 slat (nsec): min=8616, max=25830, avg=9726.79, stdev=2283.86 00:19:56.779 clat (usec): min=252, max=42024, avg=1426.18, stdev=6437.33 00:19:56.779 lat (usec): min=261, max=42049, avg=1435.90, stdev=6439.33 00:19:56.779 clat percentiles (usec): 00:19:56.780 | 1.00th=[ 269], 5.00th=[ 293], 10.00th=[ 314], 20.00th=[ 343], 00:19:56.780 | 30.00th=[ 359], 40.00th=[ 363], 50.00th=[ 371], 60.00th=[ 375], 00:19:56.780 | 70.00th=[ 379], 80.00th=[ 388], 90.00th=[ 433], 95.00th=[ 457], 00:19:56.780 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:19:56.780 | 99.99th=[42206] 00:19:56.780 write: IOPS=1012, BW=4051KiB/s (4149kB/s)(4096KiB/1011msec); 0 zone resets 00:19:56.780 slat (nsec): min=11680, max=37170, avg=12864.53, stdev=1895.74 00:19:56.780 clat (usec): min=175, max=539, avg=232.91, stdev=33.94 00:19:56.780 lat (usec): min=187, max=576, avg=245.77, stdev=34.44 00:19:56.780 clat percentiles (usec): 00:19:56.780 | 1.00th=[ 186], 5.00th=[ 194], 10.00th=[ 198], 20.00th=[ 206], 00:19:56.780 | 30.00th=[ 212], 40.00th=[ 219], 50.00th=[ 229], 60.00th=[ 237], 00:19:56.780 | 70.00th=[ 247], 80.00th=[ 258], 90.00th=[ 273], 95.00th=[ 289], 00:19:56.780 | 99.00th=[ 326], 99.50th=[ 371], 99.90th=[ 445], 99.95th=[ 537], 00:19:56.780 | 99.99th=[ 537] 00:19:56.780 bw ( KiB/s): min= 8192, max= 8192, per=57.77%, avg=8192.00, stdev= 0.00, 
samples=1 00:19:56.780 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:19:56.780 lat (usec) : 250=49.35%, 500=49.35%, 750=0.39% 00:19:56.780 lat (msec) : 50=0.90% 00:19:56.780 cpu : usr=0.89%, sys=1.98%, ctx=1551, majf=0, minf=2 00:19:56.780 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:56.780 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:56.780 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:56.780 issued rwts: total=526,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:56.780 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:56.780 job2: (groupid=0, jobs=1): err= 0: pid=3233970: Wed Jul 24 23:05:28 2024 00:19:56.780 read: IOPS=29, BW=119KiB/s (122kB/s)(120KiB/1005msec) 00:19:56.780 slat (nsec): min=9757, max=26849, avg=19556.80, stdev=6620.37 00:19:56.780 clat (usec): min=358, max=41939, avg=28867.16, stdev=18917.00 00:19:56.780 lat (usec): min=368, max=41949, avg=28886.71, stdev=18919.60 00:19:56.780 clat percentiles (usec): 00:19:56.780 | 1.00th=[ 359], 5.00th=[ 363], 10.00th=[ 388], 20.00th=[ 433], 00:19:56.780 | 30.00th=[ 717], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:56.780 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:19:56.780 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:19:56.780 | 99.99th=[41681] 00:19:56.780 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone resets 00:19:56.780 slat (nsec): min=12635, max=78902, avg=13972.63, stdev=3327.59 00:19:56.780 clat (usec): min=208, max=575, avg=252.50, stdev=32.96 00:19:56.780 lat (usec): min=221, max=654, avg=266.47, stdev=34.43 00:19:56.780 clat percentiles (usec): 00:19:56.780 | 1.00th=[ 215], 5.00th=[ 221], 10.00th=[ 225], 20.00th=[ 233], 00:19:56.780 | 30.00th=[ 237], 40.00th=[ 243], 50.00th=[ 247], 60.00th=[ 251], 00:19:56.780 | 70.00th=[ 260], 80.00th=[ 269], 90.00th=[ 277], 95.00th=[ 302], 
00:19:56.780 | 99.00th=[ 375], 99.50th=[ 429], 99.90th=[ 578], 99.95th=[ 578], 00:19:56.780 | 99.99th=[ 578] 00:19:56.780 bw ( KiB/s): min= 4096, max= 4096, per=28.89%, avg=4096.00, stdev= 0.00, samples=1 00:19:56.780 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:56.780 lat (usec) : 250=52.77%, 500=42.62%, 750=0.74% 00:19:56.780 lat (msec) : 50=3.87% 00:19:56.780 cpu : usr=0.70%, sys=0.80%, ctx=543, majf=0, minf=1 00:19:56.780 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:56.780 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:56.780 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:56.780 issued rwts: total=30,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:56.780 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:56.780 job3: (groupid=0, jobs=1): err= 0: pid=3233971: Wed Jul 24 23:05:28 2024 00:19:56.780 read: IOPS=1021, BW=4087KiB/s (4186kB/s)(4112KiB/1006msec) 00:19:56.780 slat (nsec): min=8693, max=44236, avg=9729.21, stdev=1893.26 00:19:56.780 clat (usec): min=268, max=41593, avg=610.20, stdev=2534.99 00:19:56.780 lat (usec): min=280, max=41604, avg=619.93, stdev=2535.67 00:19:56.780 clat percentiles (usec): 00:19:56.780 | 1.00th=[ 302], 5.00th=[ 343], 10.00th=[ 367], 20.00th=[ 404], 00:19:56.780 | 30.00th=[ 441], 40.00th=[ 465], 50.00th=[ 474], 60.00th=[ 482], 00:19:56.780 | 70.00th=[ 486], 80.00th=[ 490], 90.00th=[ 498], 95.00th=[ 506], 00:19:56.780 | 99.00th=[ 570], 99.50th=[ 594], 99.90th=[41157], 99.95th=[41681], 00:19:56.780 | 99.99th=[41681] 00:19:56.780 write: IOPS=1526, BW=6107KiB/s (6254kB/s)(6144KiB/1006msec); 0 zone resets 00:19:56.780 slat (nsec): min=11676, max=50628, avg=12803.50, stdev=2109.66 00:19:56.780 clat (usec): min=174, max=409, avg=222.56, stdev=24.15 00:19:56.780 lat (usec): min=186, max=447, avg=235.37, stdev=24.64 00:19:56.780 clat percentiles (usec): 00:19:56.780 | 1.00th=[ 180], 5.00th=[ 190], 10.00th=[ 
196], 20.00th=[ 202], 00:19:56.780 | 30.00th=[ 208], 40.00th=[ 215], 50.00th=[ 221], 60.00th=[ 227], 00:19:56.780 | 70.00th=[ 235], 80.00th=[ 243], 90.00th=[ 251], 95.00th=[ 260], 00:19:56.780 | 99.00th=[ 289], 99.50th=[ 297], 99.90th=[ 396], 99.95th=[ 408], 00:19:56.780 | 99.99th=[ 408] 00:19:56.780 bw ( KiB/s): min= 4096, max= 8192, per=43.33%, avg=6144.00, stdev=2896.31, samples=2 00:19:56.780 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:19:56.780 lat (usec) : 250=52.73%, 500=43.92%, 750=3.20% 00:19:56.780 lat (msec) : 50=0.16% 00:19:56.780 cpu : usr=1.89%, sys=4.48%, ctx=2564, majf=0, minf=1 00:19:56.780 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:56.780 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:56.780 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:56.780 issued rwts: total=1028,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:56.780 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:56.780 00:19:56.780 Run status group 0 (all jobs): 00:19:56.780 READ: bw=6378KiB/s (6531kB/s), 111KiB/s-4087KiB/s (113kB/s-4186kB/s), io=6448KiB (6603kB), run=1005-1011msec 00:19:56.780 WRITE: bw=13.8MiB/s (14.5MB/s), 2026KiB/s-6107KiB/s (2074kB/s-6254kB/s), io=14.0MiB (14.7MB), run=1005-1011msec 00:19:56.780 00:19:56.780 Disk stats (read/write): 00:19:56.780 nvme0n1: ios=72/512, merge=0/0, ticks=1020/129, in_queue=1149, util=83.57% 00:19:56.780 nvme0n2: ios=570/1024, merge=0/0, ticks=1201/235, in_queue=1436, util=87.42% 00:19:56.780 nvme0n3: ios=49/512, merge=0/0, ticks=1568/122, in_queue=1690, util=91.40% 00:19:56.780 nvme0n4: ios=1081/1483, merge=0/0, ticks=528/309, in_queue=837, util=95.91% 00:19:56.780 23:05:29 -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:19:56.780 [global] 00:19:56.780 thread=1 00:19:56.780 invalidate=1 00:19:56.780 rw=write 
00:19:56.780 time_based=1 00:19:56.780 runtime=1 00:19:56.780 ioengine=libaio 00:19:56.780 direct=1 00:19:56.780 bs=4096 00:19:56.780 iodepth=128 00:19:56.780 norandommap=0 00:19:56.780 numjobs=1 00:19:56.780 00:19:56.780 verify_dump=1 00:19:56.780 verify_backlog=512 00:19:56.780 verify_state_save=0 00:19:56.780 do_verify=1 00:19:56.780 verify=crc32c-intel 00:19:56.780 [job0] 00:19:56.780 filename=/dev/nvme0n1 00:19:56.780 [job1] 00:19:56.780 filename=/dev/nvme0n2 00:19:56.780 [job2] 00:19:56.780 filename=/dev/nvme0n3 00:19:56.780 [job3] 00:19:56.780 filename=/dev/nvme0n4 00:19:56.780 Could not set queue depth (nvme0n1) 00:19:56.780 Could not set queue depth (nvme0n2) 00:19:56.780 Could not set queue depth (nvme0n3) 00:19:56.780 Could not set queue depth (nvme0n4) 00:19:57.038 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:57.038 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:57.038 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:57.038 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:57.038 fio-3.35 00:19:57.038 Starting 4 threads 00:19:58.417 00:19:58.417 job0: (groupid=0, jobs=1): err= 0: pid=3234385: Wed Jul 24 23:05:30 2024 00:19:58.417 read: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec) 00:19:58.417 slat (usec): min=2, max=15835, avg=87.31, stdev=653.76 00:19:58.417 clat (usec): min=5532, max=32916, avg=12044.50, stdev=4341.26 00:19:58.417 lat (usec): min=5538, max=32942, avg=12131.81, stdev=4373.74 00:19:58.417 clat percentiles (usec): 00:19:58.417 | 1.00th=[ 6849], 5.00th=[ 7832], 10.00th=[ 8848], 20.00th=[ 9503], 00:19:58.417 | 30.00th=[ 9896], 40.00th=[10290], 50.00th=[10683], 60.00th=[11207], 00:19:58.417 | 70.00th=[11600], 80.00th=[14091], 90.00th=[17171], 95.00th=[21627], 00:19:58.417 | 
99.00th=[29754], 99.50th=[30540], 99.90th=[30802], 99.95th=[30802], 00:19:58.417 | 99.99th=[32900] 00:19:58.417 write: IOPS=5590, BW=21.8MiB/s (22.9MB/s)(21.9MiB/1004msec); 0 zone resets 00:19:58.417 slat (usec): min=3, max=14186, avg=89.95, stdev=689.89 00:19:58.418 clat (usec): min=268, max=28032, avg=11673.30, stdev=4425.00 00:19:58.418 lat (usec): min=1989, max=28045, avg=11763.25, stdev=4441.14 00:19:58.418 clat percentiles (usec): 00:19:58.418 | 1.00th=[ 5145], 5.00th=[ 6063], 10.00th=[ 7570], 20.00th=[ 9503], 00:19:58.418 | 30.00th=[10028], 40.00th=[10290], 50.00th=[10421], 60.00th=[11207], 00:19:58.418 | 70.00th=[11731], 80.00th=[13173], 90.00th=[16057], 95.00th=[23725], 00:19:58.418 | 99.00th=[26870], 99.50th=[27919], 99.90th=[27919], 99.95th=[27919], 00:19:58.418 | 99.99th=[27919] 00:19:58.418 bw ( KiB/s): min=18616, max=25264, per=27.64%, avg=21940.00, stdev=4700.85, samples=2 00:19:58.418 iops : min= 4654, max= 6316, avg=5485.00, stdev=1175.21, samples=2 00:19:58.418 lat (usec) : 500=0.01% 00:19:58.418 lat (msec) : 2=0.08%, 4=0.12%, 10=29.19%, 20=63.74%, 50=6.86% 00:19:58.418 cpu : usr=5.68%, sys=9.57%, ctx=316, majf=0, minf=1 00:19:58.418 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:19:58.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:58.418 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:58.418 issued rwts: total=5120,5613,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:58.418 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:58.418 job1: (groupid=0, jobs=1): err= 0: pid=3234386: Wed Jul 24 23:05:30 2024 00:19:58.418 read: IOPS=3538, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1013msec) 00:19:58.418 slat (usec): min=2, max=12942, avg=105.22, stdev=801.29 00:19:58.418 clat (usec): min=6635, max=36531, avg=14834.51, stdev=5242.77 00:19:58.418 lat (usec): min=6646, max=36548, avg=14939.73, stdev=5280.34 00:19:58.418 clat percentiles (usec): 00:19:58.418 | 
1.00th=[ 7701], 5.00th=[ 8455], 10.00th=[ 8717], 20.00th=[10028], 00:19:58.418 | 30.00th=[11469], 40.00th=[12256], 50.00th=[13960], 60.00th=[15926], 00:19:58.418 | 70.00th=[16712], 80.00th=[18220], 90.00th=[22676], 95.00th=[23987], 00:19:58.418 | 99.00th=[28967], 99.50th=[28967], 99.90th=[36439], 99.95th=[36439], 00:19:58.418 | 99.99th=[36439] 00:19:58.418 write: IOPS=3891, BW=15.2MiB/s (15.9MB/s)(15.4MiB/1013msec); 0 zone resets 00:19:58.418 slat (usec): min=3, max=14943, avg=148.66, stdev=910.14 00:19:58.418 clat (usec): min=4334, max=80848, avg=19024.89, stdev=17772.59 00:19:58.418 lat (usec): min=4348, max=80852, avg=19173.55, stdev=17885.93 00:19:58.418 clat percentiles (usec): 00:19:58.418 | 1.00th=[ 5669], 5.00th=[ 6783], 10.00th=[ 7570], 20.00th=[ 8586], 00:19:58.418 | 30.00th=[ 9241], 40.00th=[10552], 50.00th=[12387], 60.00th=[13829], 00:19:58.418 | 70.00th=[15664], 80.00th=[24249], 90.00th=[48497], 95.00th=[68682], 00:19:58.418 | 99.00th=[77071], 99.50th=[78119], 99.90th=[81265], 99.95th=[81265], 00:19:58.418 | 99.99th=[81265] 00:19:58.418 bw ( KiB/s): min=12288, max=18224, per=19.22%, avg=15256.00, stdev=4197.39, samples=2 00:19:58.418 iops : min= 3072, max= 4556, avg=3814.00, stdev=1049.35, samples=2 00:19:58.418 lat (msec) : 10=28.32%, 20=51.73%, 50=14.80%, 100=5.16% 00:19:58.418 cpu : usr=6.42%, sys=5.63%, ctx=250, majf=0, minf=1 00:19:58.418 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:19:58.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:58.418 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:58.418 issued rwts: total=3584,3942,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:58.418 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:58.418 job2: (groupid=0, jobs=1): err= 0: pid=3234387: Wed Jul 24 23:05:30 2024 00:19:58.418 read: IOPS=5829, BW=22.8MiB/s (23.9MB/s)(23.0MiB/1008msec) 00:19:58.418 slat (usec): min=2, max=9433, avg=83.51, stdev=555.62 
00:19:58.418 clat (usec): min=3521, max=22355, avg=11475.31, stdev=2481.76 00:19:58.418 lat (usec): min=5826, max=22367, avg=11558.82, stdev=2490.45 00:19:58.418 clat percentiles (usec): 00:19:58.418 | 1.00th=[ 7242], 5.00th=[ 7832], 10.00th=[ 8586], 20.00th=[ 9372], 00:19:58.418 | 30.00th=[10028], 40.00th=[10683], 50.00th=[11207], 60.00th=[11863], 00:19:58.418 | 70.00th=[12518], 80.00th=[13042], 90.00th=[14484], 95.00th=[16581], 00:19:58.418 | 99.00th=[18482], 99.50th=[19268], 99.90th=[22414], 99.95th=[22414], 00:19:58.418 | 99.99th=[22414] 00:19:58.418 write: IOPS=6095, BW=23.8MiB/s (25.0MB/s)(24.0MiB/1008msec); 0 zone resets 00:19:58.418 slat (usec): min=3, max=10078, avg=72.50, stdev=489.83 00:19:58.418 clat (usec): min=1959, max=22303, avg=9791.77, stdev=2426.18 00:19:58.418 lat (usec): min=1976, max=22331, avg=9864.27, stdev=2407.44 00:19:58.418 clat percentiles (usec): 00:19:58.418 | 1.00th=[ 4424], 5.00th=[ 6128], 10.00th=[ 6849], 20.00th=[ 8029], 00:19:58.418 | 30.00th=[ 8717], 40.00th=[ 9241], 50.00th=[ 9765], 60.00th=[10159], 00:19:58.418 | 70.00th=[10683], 80.00th=[11076], 90.00th=[12649], 95.00th=[14353], 00:19:58.418 | 99.00th=[17695], 99.50th=[17695], 99.90th=[17957], 99.95th=[19792], 00:19:58.418 | 99.99th=[22414] 00:19:58.418 bw ( KiB/s): min=24576, max=24576, per=30.96%, avg=24576.00, stdev= 0.00, samples=2 00:19:58.418 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=2 00:19:58.418 lat (msec) : 2=0.02%, 4=0.33%, 10=41.74%, 20=57.78%, 50=0.12% 00:19:58.418 cpu : usr=8.54%, sys=11.52%, ctx=330, majf=0, minf=1 00:19:58.418 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:19:58.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:58.418 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:58.418 issued rwts: total=5876,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:58.418 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:58.418 job3: 
(groupid=0, jobs=1): err= 0: pid=3234388: Wed Jul 24 23:05:30 2024 00:19:58.418 read: IOPS=4063, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1008msec) 00:19:58.418 slat (usec): min=2, max=13162, avg=94.26, stdev=622.91 00:19:58.418 clat (usec): min=3973, max=31074, avg=12718.89, stdev=4229.94 00:19:58.418 lat (usec): min=3982, max=34280, avg=12813.15, stdev=4259.95 00:19:58.418 clat percentiles (usec): 00:19:58.418 | 1.00th=[ 6063], 5.00th=[ 8160], 10.00th=[ 9110], 20.00th=[10028], 00:19:58.418 | 30.00th=[10552], 40.00th=[10945], 50.00th=[11600], 60.00th=[12125], 00:19:58.418 | 70.00th=[13042], 80.00th=[14353], 90.00th=[18220], 95.00th=[22676], 00:19:58.418 | 99.00th=[26346], 99.50th=[27919], 99.90th=[30802], 99.95th=[30802], 00:19:58.418 | 99.99th=[31065] 00:19:58.418 write: IOPS=4368, BW=17.1MiB/s (17.9MB/s)(17.2MiB/1008msec); 0 zone resets 00:19:58.418 slat (usec): min=2, max=11012, avg=131.07, stdev=759.02 00:19:58.418 clat (usec): min=860, max=83436, avg=17152.80, stdev=17202.28 00:19:58.418 lat (usec): min=877, max=83451, avg=17283.87, stdev=17322.72 00:19:58.418 clat percentiles (usec): 00:19:58.418 | 1.00th=[ 4015], 5.00th=[ 7439], 10.00th=[ 8979], 20.00th=[10814], 00:19:58.418 | 30.00th=[11207], 40.00th=[11600], 50.00th=[11863], 60.00th=[12256], 00:19:58.418 | 70.00th=[13042], 80.00th=[13566], 90.00th=[30540], 95.00th=[74974], 00:19:58.418 | 99.00th=[80217], 99.50th=[80217], 99.90th=[83362], 99.95th=[83362], 00:19:58.418 | 99.99th=[83362] 00:19:58.418 bw ( KiB/s): min=10120, max=24080, per=21.54%, avg=17100.00, stdev=9871.21, samples=2 00:19:58.418 iops : min= 2530, max= 6020, avg=4275.00, stdev=2467.80, samples=2 00:19:58.418 lat (usec) : 1000=0.04% 00:19:58.418 lat (msec) : 2=0.07%, 4=0.38%, 10=16.78%, 20=72.77%, 50=5.85% 00:19:58.418 lat (msec) : 100=4.12% 00:19:58.418 cpu : usr=3.77%, sys=7.35%, ctx=354, majf=0, minf=1 00:19:58.418 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:19:58.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:19:58.418 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:58.418 issued rwts: total=4096,4403,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:58.418 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:58.418 00:19:58.418 Run status group 0 (all jobs): 00:19:58.418 READ: bw=72.0MiB/s (75.5MB/s), 13.8MiB/s-22.8MiB/s (14.5MB/s-23.9MB/s), io=73.0MiB (76.5MB), run=1004-1013msec 00:19:58.418 WRITE: bw=77.5MiB/s (81.3MB/s), 15.2MiB/s-23.8MiB/s (15.9MB/s-25.0MB/s), io=78.5MiB (82.3MB), run=1004-1013msec 00:19:58.418 00:19:58.418 Disk stats (read/write): 00:19:58.418 nvme0n1: ios=4136/4400, merge=0/0, ticks=40718/38944, in_queue=79662, util=91.78% 00:19:58.418 nvme0n2: ios=3028/3072, merge=0/0, ticks=45637/56009, in_queue=101646, util=97.85% 00:19:58.418 nvme0n3: ios=4674/5120, merge=0/0, ticks=52471/47832, in_queue=100303, util=95.96% 00:19:58.418 nvme0n4: ios=3624/3823, merge=0/0, ticks=28312/35751, in_queue=64063, util=99.35% 00:19:58.418 23:05:30 -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:19:58.418 [global] 00:19:58.418 thread=1 00:19:58.418 invalidate=1 00:19:58.418 rw=randwrite 00:19:58.418 time_based=1 00:19:58.418 runtime=1 00:19:58.418 ioengine=libaio 00:19:58.418 direct=1 00:19:58.418 bs=4096 00:19:58.418 iodepth=128 00:19:58.418 norandommap=0 00:19:58.418 numjobs=1 00:19:58.418 00:19:58.418 verify_dump=1 00:19:58.418 verify_backlog=512 00:19:58.418 verify_state_save=0 00:19:58.418 do_verify=1 00:19:58.418 verify=crc32c-intel 00:19:58.418 [job0] 00:19:58.418 filename=/dev/nvme0n1 00:19:58.418 [job1] 00:19:58.418 filename=/dev/nvme0n2 00:19:58.418 [job2] 00:19:58.418 filename=/dev/nvme0n3 00:19:58.418 [job3] 00:19:58.418 filename=/dev/nvme0n4 00:19:58.418 Could not set queue depth (nvme0n1) 00:19:58.418 Could not set queue depth (nvme0n2) 00:19:58.418 Could not set queue depth (nvme0n3) 00:19:58.418 
Could not set queue depth (nvme0n4) 00:19:58.677 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:58.677 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:58.677 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:58.677 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:58.677 fio-3.35 00:19:58.677 Starting 4 threads 00:20:00.054 00:20:00.054 job0: (groupid=0, jobs=1): err= 0: pid=3234817: Wed Jul 24 23:05:32 2024 00:20:00.054 read: IOPS=5481, BW=21.4MiB/s (22.5MB/s)(21.5MiB/1005msec) 00:20:00.054 slat (nsec): min=1688, max=15058k, avg=80120.61, stdev=565438.02 00:20:00.054 clat (usec): min=1382, max=41803, avg=11120.83, stdev=4335.98 00:20:00.054 lat (usec): min=1386, max=41808, avg=11200.95, stdev=4363.68 00:20:00.054 clat percentiles (usec): 00:20:00.054 | 1.00th=[ 4146], 5.00th=[ 5800], 10.00th=[ 6915], 20.00th=[ 7898], 00:20:00.054 | 30.00th=[ 8848], 40.00th=[ 9634], 50.00th=[10421], 60.00th=[11207], 00:20:00.054 | 70.00th=[12387], 80.00th=[13566], 90.00th=[16450], 95.00th=[18220], 00:20:00.054 | 99.00th=[25035], 99.50th=[36439], 99.90th=[37487], 99.95th=[41681], 00:20:00.054 | 99.99th=[41681] 00:20:00.054 write: IOPS=5603, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1005msec); 0 zone resets 00:20:00.054 slat (usec): min=2, max=10687, avg=81.92, stdev=525.71 00:20:00.054 clat (usec): min=631, max=42993, avg=11752.76, stdev=7870.95 00:20:00.054 lat (usec): min=666, max=43955, avg=11834.68, stdev=7916.13 00:20:00.054 clat percentiles (usec): 00:20:00.054 | 1.00th=[ 2278], 5.00th=[ 4047], 10.00th=[ 5211], 20.00th=[ 7046], 00:20:00.054 | 30.00th=[ 8291], 40.00th=[ 8979], 50.00th=[ 9634], 60.00th=[10028], 00:20:00.054 | 70.00th=[11469], 80.00th=[13698], 90.00th=[22676], 95.00th=[31589], 00:20:00.054 | 99.00th=[40109], 
99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:20:00.054 | 99.99th=[43254] 00:20:00.054 bw ( KiB/s): min=20480, max=24576, per=30.74%, avg=22528.00, stdev=2896.31, samples=2 00:20:00.054 iops : min= 5120, max= 6144, avg=5632.00, stdev=724.08, samples=2 00:20:00.054 lat (usec) : 750=0.02%, 1000=0.01% 00:20:00.054 lat (msec) : 2=0.36%, 4=2.40%, 10=50.93%, 20=38.86%, 50=7.43% 00:20:00.054 cpu : usr=6.77%, sys=7.27%, ctx=478, majf=0, minf=1 00:20:00.054 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:20:00.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:00.055 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:00.055 issued rwts: total=5509,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:00.055 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:00.055 job1: (groupid=0, jobs=1): err= 0: pid=3234818: Wed Jul 24 23:05:32 2024 00:20:00.055 read: IOPS=3696, BW=14.4MiB/s (15.1MB/s)(14.5MiB/1004msec) 00:20:00.055 slat (nsec): min=1880, max=25480k, avg=132379.78, stdev=982799.15 00:20:00.055 clat (usec): min=643, max=76794, avg=17846.90, stdev=9740.90 00:20:00.055 lat (usec): min=6189, max=76800, avg=17979.28, stdev=9796.58 00:20:00.055 clat percentiles (usec): 00:20:00.055 | 1.00th=[ 7177], 5.00th=[ 8356], 10.00th=[ 9241], 20.00th=[10028], 00:20:00.055 | 30.00th=[11076], 40.00th=[12256], 50.00th=[13304], 60.00th=[16188], 00:20:00.055 | 70.00th=[21627], 80.00th=[25560], 90.00th=[33817], 95.00th=[38011], 00:20:00.055 | 99.00th=[43254], 99.50th=[51643], 99.90th=[51643], 99.95th=[77071], 00:20:00.055 | 99.99th=[77071] 00:20:00.055 write: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec); 0 zone resets 00:20:00.055 slat (usec): min=2, max=23282, avg=105.73, stdev=805.02 00:20:00.055 clat (usec): min=2662, max=44654, avg=14865.43, stdev=6872.46 00:20:00.055 lat (usec): min=2788, max=44665, avg=14971.16, stdev=6915.23 00:20:00.055 clat percentiles (usec): 00:20:00.055 | 
1.00th=[ 4228], 5.00th=[ 5997], 10.00th=[ 7373], 20.00th=[ 8979], 00:20:00.055 | 30.00th=[ 9765], 40.00th=[11600], 50.00th=[13566], 60.00th=[15533], 00:20:00.055 | 70.00th=[17957], 80.00th=[20841], 90.00th=[23987], 95.00th=[26346], 00:20:00.055 | 99.00th=[34341], 99.50th=[34341], 99.90th=[34341], 99.95th=[41157], 00:20:00.055 | 99.99th=[44827] 00:20:00.055 bw ( KiB/s): min=13040, max=19720, per=22.35%, avg=16380.00, stdev=4723.47, samples=2 00:20:00.055 iops : min= 3260, max= 4930, avg=4095.00, stdev=1180.87, samples=2 00:20:00.055 lat (usec) : 750=0.01% 00:20:00.055 lat (msec) : 4=0.28%, 10=25.30%, 20=45.54%, 50=28.62%, 100=0.26% 00:20:00.055 cpu : usr=2.99%, sys=7.18%, ctx=288, majf=0, minf=1 00:20:00.055 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:20:00.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:00.055 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:00.055 issued rwts: total=3711,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:00.055 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:00.055 job2: (groupid=0, jobs=1): err= 0: pid=3234819: Wed Jul 24 23:05:32 2024 00:20:00.055 read: IOPS=3914, BW=15.3MiB/s (16.0MB/s)(15.4MiB/1006msec) 00:20:00.055 slat (nsec): min=1736, max=44392k, avg=140892.24, stdev=1181578.14 00:20:00.055 clat (usec): min=3646, max=66684, avg=18443.18, stdev=11505.47 00:20:00.055 lat (usec): min=3655, max=66691, avg=18584.07, stdev=11566.55 00:20:00.055 clat percentiles (usec): 00:20:00.055 | 1.00th=[ 5276], 5.00th=[ 8094], 10.00th=[ 9765], 20.00th=[10814], 00:20:00.055 | 30.00th=[11469], 40.00th=[12387], 50.00th=[13698], 60.00th=[15533], 00:20:00.055 | 70.00th=[21103], 80.00th=[24511], 90.00th=[36963], 95.00th=[41681], 00:20:00.055 | 99.00th=[60031], 99.50th=[66847], 99.90th=[66847], 99.95th=[66847], 00:20:00.055 | 99.99th=[66847] 00:20:00.055 write: IOPS=4071, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1006msec); 0 zone resets 00:20:00.055 
slat (usec): min=2, max=14968, avg=98.45, stdev=716.96 00:20:00.055 clat (usec): min=1800, max=49514, avg=13403.94, stdev=5922.87 00:20:00.055 lat (usec): min=1813, max=49524, avg=13502.39, stdev=5960.85 00:20:00.055 clat percentiles (usec): 00:20:00.055 | 1.00th=[ 5145], 5.00th=[ 6652], 10.00th=[ 7635], 20.00th=[ 9634], 00:20:00.055 | 30.00th=[10814], 40.00th=[11469], 50.00th=[11994], 60.00th=[12518], 00:20:00.055 | 70.00th=[13829], 80.00th=[16712], 90.00th=[19792], 95.00th=[27132], 00:20:00.055 | 99.00th=[35390], 99.50th=[39584], 99.90th=[45351], 99.95th=[49546], 00:20:00.055 | 99.99th=[49546] 00:20:00.055 bw ( KiB/s): min=16120, max=16648, per=22.36%, avg=16384.00, stdev=373.35, samples=2 00:20:00.055 iops : min= 4030, max= 4162, avg=4096.00, stdev=93.34, samples=2 00:20:00.055 lat (msec) : 2=0.10%, 4=0.41%, 10=18.80%, 20=60.34%, 50=18.77% 00:20:00.055 lat (msec) : 100=1.58% 00:20:00.055 cpu : usr=3.28%, sys=6.37%, ctx=250, majf=0, minf=1 00:20:00.055 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:20:00.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:00.055 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:00.055 issued rwts: total=3938,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:00.055 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:00.055 job3: (groupid=0, jobs=1): err= 0: pid=3234820: Wed Jul 24 23:05:32 2024 00:20:00.055 read: IOPS=4570, BW=17.9MiB/s (18.7MB/s)(17.9MiB/1004msec) 00:20:00.055 slat (nsec): min=1792, max=13489k, avg=105208.53, stdev=715373.07 00:20:00.055 clat (usec): min=865, max=43693, avg=14155.25, stdev=5710.50 00:20:00.055 lat (usec): min=4459, max=43705, avg=14260.45, stdev=5765.66 00:20:00.055 clat percentiles (usec): 00:20:00.055 | 1.00th=[ 5866], 5.00th=[ 8717], 10.00th=[ 9241], 20.00th=[10421], 00:20:00.055 | 30.00th=[11338], 40.00th=[11994], 50.00th=[12780], 60.00th=[13566], 00:20:00.055 | 70.00th=[14484], 80.00th=[16581], 
90.00th=[20317], 95.00th=[27919], 00:20:00.055 | 99.00th=[34866], 99.50th=[40633], 99.90th=[40633], 99.95th=[42730], 00:20:00.055 | 99.99th=[43779] 00:20:00.055 write: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec); 0 zone resets 00:20:00.055 slat (usec): min=2, max=15649, avg=99.52, stdev=675.55 00:20:00.055 clat (usec): min=1495, max=44124, avg=13474.67, stdev=5730.84 00:20:00.055 lat (usec): min=1510, max=44135, avg=13574.19, stdev=5771.13 00:20:00.055 clat percentiles (usec): 00:20:00.055 | 1.00th=[ 5669], 5.00th=[ 8586], 10.00th=[ 8979], 20.00th=[10028], 00:20:00.055 | 30.00th=[10552], 40.00th=[11731], 50.00th=[12256], 60.00th=[12649], 00:20:00.055 | 70.00th=[13435], 80.00th=[15401], 90.00th=[19268], 95.00th=[23725], 00:20:00.055 | 99.00th=[41157], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:20:00.055 | 99.99th=[44303] 00:20:00.055 bw ( KiB/s): min=16384, max=20480, per=25.15%, avg=18432.00, stdev=2896.31, samples=2 00:20:00.055 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:20:00.055 lat (usec) : 1000=0.01% 00:20:00.055 lat (msec) : 2=0.02%, 4=0.22%, 10=16.67%, 20=73.31%, 50=9.77% 00:20:00.055 cpu : usr=4.89%, sys=5.98%, ctx=365, majf=0, minf=1 00:20:00.055 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:20:00.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:00.055 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:00.055 issued rwts: total=4589,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:00.055 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:00.055 00:20:00.055 Run status group 0 (all jobs): 00:20:00.055 READ: bw=68.9MiB/s (72.3MB/s), 14.4MiB/s-21.4MiB/s (15.1MB/s-22.5MB/s), io=69.3MiB (72.7MB), run=1004-1006msec 00:20:00.055 WRITE: bw=71.6MiB/s (75.0MB/s), 15.9MiB/s-21.9MiB/s (16.7MB/s-23.0MB/s), io=72.0MiB (75.5MB), run=1004-1006msec 00:20:00.055 00:20:00.055 Disk stats (read/write): 00:20:00.055 nvme0n1: 
ios=4411/4608, merge=0/0, ticks=46119/52060, in_queue=98179, util=89.48% 00:20:00.055 nvme0n2: ios=3074/3093, merge=0/0, ticks=29999/25385, in_queue=55384, util=99.69% 00:20:00.055 nvme0n3: ios=3533/3584, merge=0/0, ticks=30421/26023, in_queue=56444, util=89.04% 00:20:00.055 nvme0n4: ios=3630/3707, merge=0/0, ticks=28843/25203, in_queue=54046, util=95.47% 00:20:00.055 23:05:32 -- target/fio.sh@55 -- # sync 00:20:00.055 23:05:32 -- target/fio.sh@59 -- # fio_pid=3235085 00:20:00.055 23:05:32 -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:20:00.055 23:05:32 -- target/fio.sh@61 -- # sleep 3 00:20:00.055 [global] 00:20:00.055 thread=1 00:20:00.055 invalidate=1 00:20:00.055 rw=read 00:20:00.055 time_based=1 00:20:00.055 runtime=10 00:20:00.055 ioengine=libaio 00:20:00.055 direct=1 00:20:00.055 bs=4096 00:20:00.055 iodepth=1 00:20:00.055 norandommap=1 00:20:00.055 numjobs=1 00:20:00.055 00:20:00.055 [job0] 00:20:00.055 filename=/dev/nvme0n1 00:20:00.055 [job1] 00:20:00.055 filename=/dev/nvme0n2 00:20:00.055 [job2] 00:20:00.055 filename=/dev/nvme0n3 00:20:00.055 [job3] 00:20:00.055 filename=/dev/nvme0n4 00:20:00.055 Could not set queue depth (nvme0n1) 00:20:00.055 Could not set queue depth (nvme0n2) 00:20:00.055 Could not set queue depth (nvme0n3) 00:20:00.055 Could not set queue depth (nvme0n4) 00:20:00.314 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:00.314 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:00.314 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:00.314 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:00.314 fio-3.35 00:20:00.314 Starting 4 threads 00:20:03.601 23:05:35 -- target/fio.sh@63 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:20:03.601 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=278528, buflen=4096 00:20:03.601 fio: pid=3235249, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:20:03.601 23:05:35 -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:20:03.601 23:05:35 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:03.601 23:05:35 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:20:03.601 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=389120, buflen=4096 00:20:03.601 fio: pid=3235246, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:20:03.601 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=26509312, buflen=4096 00:20:03.601 fio: pid=3235244, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:20:03.601 23:05:35 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:03.601 23:05:35 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:20:03.601 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=29634560, buflen=4096 00:20:03.601 fio: pid=3235245, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:20:03.860 23:05:36 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:03.860 23:05:36 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:20:03.860 00:20:03.860 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3235244: Wed Jul 24 23:05:36 2024 00:20:03.860 read: IOPS=2174, 
BW=8696KiB/s (8905kB/s)(25.3MiB/2977msec) 00:20:03.860 slat (usec): min=8, max=15276, avg=13.92, stdev=241.84 00:20:03.860 clat (usec): min=343, max=19278, avg=441.30, stdev=239.46 00:20:03.860 lat (usec): min=366, max=34555, avg=455.21, stdev=453.62 00:20:03.860 clat percentiles (usec): 00:20:03.860 | 1.00th=[ 367], 5.00th=[ 375], 10.00th=[ 379], 20.00th=[ 388], 00:20:03.860 | 30.00th=[ 396], 40.00th=[ 412], 50.00th=[ 441], 60.00th=[ 457], 00:20:03.860 | 70.00th=[ 469], 80.00th=[ 478], 90.00th=[ 498], 95.00th=[ 519], 00:20:03.860 | 99.00th=[ 578], 99.50th=[ 635], 99.90th=[ 652], 99.95th=[ 717], 00:20:03.860 | 99.99th=[19268] 00:20:03.860 bw ( KiB/s): min= 7960, max= 9880, per=51.34%, avg=8996.80, stdev=734.10, samples=5 00:20:03.860 iops : min= 1990, max= 2470, avg=2249.20, stdev=183.52, samples=5 00:20:03.860 lat (usec) : 500=90.72%, 750=9.24%, 1000=0.02% 00:20:03.860 lat (msec) : 20=0.02% 00:20:03.860 cpu : usr=1.28%, sys=2.99%, ctx=6476, majf=0, minf=1 00:20:03.860 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:03.860 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:03.860 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:03.860 issued rwts: total=6473,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:03.860 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:03.860 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3235245: Wed Jul 24 23:05:36 2024 00:20:03.860 read: IOPS=2285, BW=9141KiB/s (9360kB/s)(28.3MiB/3166msec) 00:20:03.860 slat (usec): min=5, max=27753, avg=25.77, stdev=571.46 00:20:03.860 clat (usec): min=305, max=9933, avg=405.62, stdev=120.34 00:20:03.860 lat (usec): min=314, max=28283, avg=431.39, stdev=587.46 00:20:03.860 clat percentiles (usec): 00:20:03.860 | 1.00th=[ 355], 5.00th=[ 371], 10.00th=[ 379], 20.00th=[ 383], 00:20:03.860 | 30.00th=[ 392], 40.00th=[ 396], 50.00th=[ 396], 60.00th=[ 400], 
00:20:03.860 | 70.00th=[ 408], 80.00th=[ 412], 90.00th=[ 429], 95.00th=[ 465], 00:20:03.860 | 99.00th=[ 519], 99.50th=[ 627], 99.90th=[ 840], 99.95th=[ 930], 00:20:03.860 | 99.99th=[ 9896] 00:20:03.860 bw ( KiB/s): min= 7727, max= 9864, per=53.49%, avg=9373.17, stdev=819.15, samples=6 00:20:03.860 iops : min= 1931, max= 2466, avg=2343.17, stdev=205.09, samples=6 00:20:03.860 lat (usec) : 500=97.39%, 750=2.22%, 1000=0.33% 00:20:03.860 lat (msec) : 2=0.03%, 10=0.01% 00:20:03.860 cpu : usr=1.64%, sys=3.98%, ctx=7245, majf=0, minf=1 00:20:03.860 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:03.860 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:03.860 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:03.860 issued rwts: total=7236,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:03.860 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:03.860 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3235246: Wed Jul 24 23:05:36 2024 00:20:03.860 read: IOPS=33, BW=134KiB/s (138kB/s)(380KiB/2827msec) 00:20:03.860 slat (nsec): min=8909, max=61011, avg=14779.14, stdev=7151.94 00:20:03.860 clat (usec): min=420, max=41973, avg=29523.33, stdev=18387.65 00:20:03.860 lat (usec): min=433, max=41983, avg=29538.00, stdev=18389.10 00:20:03.860 clat percentiles (usec): 00:20:03.860 | 1.00th=[ 420], 5.00th=[ 453], 10.00th=[ 457], 20.00th=[ 494], 00:20:03.860 | 30.00th=[40633], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:20:03.860 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:20:03.860 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:20:03.860 | 99.99th=[42206] 00:20:03.860 bw ( KiB/s): min= 96, max= 304, per=0.79%, avg=139.20, stdev=92.19, samples=5 00:20:03.860 iops : min= 24, max= 76, avg=34.80, stdev=23.05, samples=5 00:20:03.860 lat (usec) : 500=19.79%, 750=8.33% 00:20:03.860 lat 
(msec) : 50=70.83% 00:20:03.860 cpu : usr=0.00%, sys=0.07%, ctx=97, majf=0, minf=1 00:20:03.860 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:03.860 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:03.860 complete : 0=1.0%, 4=99.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:03.860 issued rwts: total=96,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:03.860 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:03.860 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3235249: Wed Jul 24 23:05:36 2024 00:20:03.860 read: IOPS=26, BW=104KiB/s (106kB/s)(272KiB/2618msec) 00:20:03.860 slat (nsec): min=10693, max=35547, avg=17837.22, stdev=5894.57 00:20:03.860 clat (usec): min=529, max=42057, avg=38169.95, stdev=10256.06 00:20:03.860 lat (usec): min=542, max=42083, avg=38187.88, stdev=10254.76 00:20:03.860 clat percentiles (usec): 00:20:03.860 | 1.00th=[ 529], 5.00th=[ 709], 10.00th=[41157], 20.00th=[41157], 00:20:03.860 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:20:03.860 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:20:03.860 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:20:03.860 | 99.99th=[42206] 00:20:03.860 bw ( KiB/s): min= 96, max= 128, per=0.59%, avg=104.00, stdev=13.86, samples=5 00:20:03.860 iops : min= 24, max= 32, avg=26.00, stdev= 3.46, samples=5 00:20:03.860 lat (usec) : 750=5.80% 00:20:03.860 lat (msec) : 10=1.45%, 50=91.30% 00:20:03.860 cpu : usr=0.00%, sys=0.08%, ctx=71, majf=0, minf=2 00:20:03.860 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:03.860 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:03.860 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:03.860 issued rwts: total=69,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:03.860 latency : target=0, window=0, 
percentile=100.00%, depth=1 00:20:03.860 00:20:03.860 Run status group 0 (all jobs): 00:20:03.860 READ: bw=17.1MiB/s (17.9MB/s), 104KiB/s-9141KiB/s (106kB/s-9360kB/s), io=54.2MiB (56.8MB), run=2618-3166msec 00:20:03.860 00:20:03.860 Disk stats (read/write): 00:20:03.860 nvme0n1: ios=6202/0, merge=0/0, ticks=2656/0, in_queue=2656, util=93.59% 00:20:03.860 nvme0n2: ios=7199/0, merge=0/0, ticks=3777/0, in_queue=3777, util=96.84% 00:20:03.860 nvme0n3: ios=89/0, merge=0/0, ticks=2559/0, in_queue=2559, util=95.94% 00:20:03.860 nvme0n4: ios=101/0, merge=0/0, ticks=3251/0, in_queue=3251, util=99.18% 00:20:03.860 23:05:36 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:03.860 23:05:36 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:20:04.120 23:05:36 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:04.120 23:05:36 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:20:04.379 23:05:36 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:04.379 23:05:36 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:20:04.379 23:05:36 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:04.379 23:05:36 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:20:04.639 23:05:36 -- target/fio.sh@69 -- # fio_status=0 00:20:04.639 23:05:36 -- target/fio.sh@70 -- # wait 3235085 00:20:04.639 23:05:36 -- target/fio.sh@70 -- # fio_status=4 00:20:04.639 23:05:36 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:04.639 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:04.639 
23:05:37 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:04.639 23:05:37 -- common/autotest_common.sh@1198 -- # local i=0 00:20:04.639 23:05:37 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:20:04.639 23:05:37 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:04.898 23:05:37 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:20:04.898 23:05:37 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:04.898 23:05:37 -- common/autotest_common.sh@1210 -- # return 0 00:20:04.898 23:05:37 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:20:04.898 23:05:37 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:20:04.898 nvmf hotplug test: fio failed as expected 00:20:04.898 23:05:37 -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:04.898 23:05:37 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:20:04.898 23:05:37 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:20:04.898 23:05:37 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:20:04.898 23:05:37 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:20:04.898 23:05:37 -- target/fio.sh@91 -- # nvmftestfini 00:20:04.898 23:05:37 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:04.898 23:05:37 -- nvmf/common.sh@116 -- # sync 00:20:04.898 23:05:37 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:04.899 23:05:37 -- nvmf/common.sh@119 -- # set +e 00:20:04.899 23:05:37 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:04.899 23:05:37 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:04.899 rmmod nvme_tcp 00:20:04.899 rmmod nvme_fabrics 00:20:05.158 rmmod nvme_keyring 00:20:05.158 23:05:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:05.158 23:05:37 -- nvmf/common.sh@123 -- # set -e 00:20:05.158 23:05:37 -- nvmf/common.sh@124 -- # return 0 00:20:05.158 
23:05:37 -- nvmf/common.sh@477 -- # '[' -n 3231991 ']' 00:20:05.158 23:05:37 -- nvmf/common.sh@478 -- # killprocess 3231991 00:20:05.158 23:05:37 -- common/autotest_common.sh@926 -- # '[' -z 3231991 ']' 00:20:05.158 23:05:37 -- common/autotest_common.sh@930 -- # kill -0 3231991 00:20:05.158 23:05:37 -- common/autotest_common.sh@931 -- # uname 00:20:05.158 23:05:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:05.158 23:05:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3231991 00:20:05.158 23:05:37 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:05.158 23:05:37 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:05.158 23:05:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3231991' 00:20:05.158 killing process with pid 3231991 00:20:05.158 23:05:37 -- common/autotest_common.sh@945 -- # kill 3231991 00:20:05.158 23:05:37 -- common/autotest_common.sh@950 -- # wait 3231991 00:20:05.417 23:05:37 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:05.417 23:05:37 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:05.417 23:05:37 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:05.417 23:05:37 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:05.417 23:05:37 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:05.417 23:05:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:05.417 23:05:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:05.417 23:05:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:07.393 23:05:39 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:20:07.393 00:20:07.393 real 0m27.768s 00:20:07.393 user 2m3.534s 00:20:07.393 sys 0m9.703s 00:20:07.393 23:05:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:07.393 23:05:39 -- common/autotest_common.sh@10 -- # set +x 00:20:07.393 ************************************ 00:20:07.393 END TEST nvmf_fio_target 
00:20:07.393 ************************************ 00:20:07.393 23:05:39 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:20:07.393 23:05:39 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:20:07.393 23:05:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:07.393 23:05:39 -- common/autotest_common.sh@10 -- # set +x 00:20:07.393 ************************************ 00:20:07.393 START TEST nvmf_bdevio 00:20:07.393 ************************************ 00:20:07.393 23:05:39 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:20:07.393 * Looking for test storage... 00:20:07.393 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:07.393 23:05:39 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:07.393 23:05:39 -- nvmf/common.sh@7 -- # uname -s 00:20:07.393 23:05:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:07.393 23:05:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:07.393 23:05:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:07.393 23:05:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:07.393 23:05:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:07.393 23:05:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:07.393 23:05:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:07.394 23:05:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:07.394 23:05:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:07.394 23:05:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:07.653 23:05:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:20:07.653 23:05:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:20:07.653 
23:05:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:07.653 23:05:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:07.653 23:05:39 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:07.653 23:05:39 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:07.653 23:05:39 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:07.653 23:05:39 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:07.653 23:05:39 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:07.653 23:05:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.653 23:05:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.653 23:05:39 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.653 23:05:39 -- paths/export.sh@5 -- # export PATH 00:20:07.653 23:05:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.653 23:05:39 -- nvmf/common.sh@46 -- # : 0 00:20:07.653 23:05:39 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:07.653 23:05:39 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:07.653 23:05:39 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:07.653 23:05:39 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:07.653 23:05:39 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:07.653 23:05:39 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:07.653 23:05:39 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:07.653 23:05:39 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:07.653 23:05:39 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:07.653 23:05:39 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:07.653 23:05:39 -- target/bdevio.sh@14 -- # 
nvmftestinit 00:20:07.653 23:05:39 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:07.653 23:05:39 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:07.653 23:05:39 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:07.653 23:05:39 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:07.653 23:05:39 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:07.653 23:05:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:07.653 23:05:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:07.653 23:05:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:07.653 23:05:39 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:20:07.653 23:05:39 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:20:07.653 23:05:39 -- nvmf/common.sh@284 -- # xtrace_disable 00:20:07.653 23:05:39 -- common/autotest_common.sh@10 -- # set +x 00:20:14.230 23:05:46 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:14.230 23:05:46 -- nvmf/common.sh@290 -- # pci_devs=() 00:20:14.230 23:05:46 -- nvmf/common.sh@290 -- # local -a pci_devs 00:20:14.230 23:05:46 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:20:14.230 23:05:46 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:20:14.230 23:05:46 -- nvmf/common.sh@292 -- # pci_drivers=() 00:20:14.230 23:05:46 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:20:14.230 23:05:46 -- nvmf/common.sh@294 -- # net_devs=() 00:20:14.230 23:05:46 -- nvmf/common.sh@294 -- # local -ga net_devs 00:20:14.230 23:05:46 -- nvmf/common.sh@295 -- # e810=() 00:20:14.230 23:05:46 -- nvmf/common.sh@295 -- # local -ga e810 00:20:14.230 23:05:46 -- nvmf/common.sh@296 -- # x722=() 00:20:14.230 23:05:46 -- nvmf/common.sh@296 -- # local -ga x722 00:20:14.230 23:05:46 -- nvmf/common.sh@297 -- # mlx=() 00:20:14.230 23:05:46 -- nvmf/common.sh@297 -- # local -ga mlx 00:20:14.230 23:05:46 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:14.230 23:05:46 -- 
nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:14.230 23:05:46 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:14.230 23:05:46 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:14.230 23:05:46 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:14.230 23:05:46 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:14.230 23:05:46 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:14.230 23:05:46 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:14.230 23:05:46 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:14.230 23:05:46 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:14.230 23:05:46 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:14.230 23:05:46 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:20:14.230 23:05:46 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:20:14.230 23:05:46 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:20:14.230 23:05:46 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:20:14.230 23:05:46 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:20:14.230 23:05:46 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:20:14.230 23:05:46 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:14.230 23:05:46 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:14.230 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:14.230 23:05:46 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:14.230 23:05:46 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:14.230 23:05:46 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:14.230 23:05:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:14.230 23:05:46 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:14.230 23:05:46 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:14.230 23:05:46 -- 
nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:14.230 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:14.230 23:05:46 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:14.230 23:05:46 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:14.230 23:05:46 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:14.230 23:05:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:14.230 23:05:46 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:14.230 23:05:46 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:20:14.230 23:05:46 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:20:14.230 23:05:46 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:20:14.230 23:05:46 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:14.230 23:05:46 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:14.230 23:05:46 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:14.230 23:05:46 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:14.230 23:05:46 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:14.230 Found net devices under 0000:af:00.0: cvl_0_0 00:20:14.230 23:05:46 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:14.230 23:05:46 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:14.230 23:05:46 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:14.230 23:05:46 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:14.230 23:05:46 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:14.230 23:05:46 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:14.230 Found net devices under 0000:af:00.1: cvl_0_1 00:20:14.230 23:05:46 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:14.230 23:05:46 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:20:14.230 23:05:46 -- nvmf/common.sh@402 -- # is_hw=yes 00:20:14.230 23:05:46 -- nvmf/common.sh@404 -- # [[ yes == yes 
]] 00:20:14.230 23:05:46 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:20:14.230 23:05:46 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:20:14.230 23:05:46 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:14.230 23:05:46 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:14.230 23:05:46 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:14.230 23:05:46 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:20:14.230 23:05:46 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:14.230 23:05:46 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:14.230 23:05:46 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:20:14.230 23:05:46 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:14.230 23:05:46 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:14.230 23:05:46 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:20:14.230 23:05:46 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:20:14.230 23:05:46 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:20:14.230 23:05:46 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:14.230 23:05:46 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:14.230 23:05:46 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:14.230 23:05:46 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:20:14.230 23:05:46 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:14.230 23:05:46 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:14.230 23:05:46 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:14.230 23:05:46 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:20:14.230 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:14.230 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.299 ms 00:20:14.230 00:20:14.230 --- 10.0.0.2 ping statistics --- 00:20:14.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:14.231 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:20:14.231 23:05:46 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:14.231 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:14.231 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:20:14.231 00:20:14.231 --- 10.0.0.1 ping statistics --- 00:20:14.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:14.231 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:20:14.231 23:05:46 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:14.231 23:05:46 -- nvmf/common.sh@410 -- # return 0 00:20:14.231 23:05:46 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:14.231 23:05:46 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:14.231 23:05:46 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:14.231 23:05:46 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:14.231 23:05:46 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:14.231 23:05:46 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:14.231 23:05:46 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:14.231 23:05:46 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:20:14.231 23:05:46 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:14.231 23:05:46 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:14.231 23:05:46 -- common/autotest_common.sh@10 -- # set +x 00:20:14.231 23:05:46 -- nvmf/common.sh@469 -- # nvmfpid=3239760 00:20:14.231 23:05:46 -- nvmf/common.sh@470 -- # waitforlisten 3239760 00:20:14.231 23:05:46 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:20:14.231 23:05:46 -- common/autotest_common.sh@819 
-- # '[' -z 3239760 ']' 00:20:14.231 23:05:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:14.231 23:05:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:14.231 23:05:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:14.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:14.231 23:05:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:14.231 23:05:46 -- common/autotest_common.sh@10 -- # set +x 00:20:14.231 [2024-07-24 23:05:46.479780] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:20:14.231 [2024-07-24 23:05:46.479826] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:14.231 EAL: No free 2048 kB hugepages reported on node 1 00:20:14.231 [2024-07-24 23:05:46.555463] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:14.231 [2024-07-24 23:05:46.591110] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:14.231 [2024-07-24 23:05:46.591220] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:14.231 [2024-07-24 23:05:46.591230] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:14.231 [2024-07-24 23:05:46.591239] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:14.231 [2024-07-24 23:05:46.591358] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:20:14.231 [2024-07-24 23:05:46.591452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:20:14.231 [2024-07-24 23:05:46.591540] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:14.231 [2024-07-24 23:05:46.591541] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:20:15.169 23:05:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:15.169 23:05:47 -- common/autotest_common.sh@852 -- # return 0 00:20:15.169 23:05:47 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:15.169 23:05:47 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:15.169 23:05:47 -- common/autotest_common.sh@10 -- # set +x 00:20:15.169 23:05:47 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:15.169 23:05:47 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:15.169 23:05:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:15.169 23:05:47 -- common/autotest_common.sh@10 -- # set +x 00:20:15.169 [2024-07-24 23:05:47.333107] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:15.169 23:05:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:15.169 23:05:47 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:15.169 23:05:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:15.169 23:05:47 -- common/autotest_common.sh@10 -- # set +x 00:20:15.169 Malloc0 00:20:15.169 23:05:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:15.169 23:05:47 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:15.169 23:05:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:15.169 23:05:47 -- common/autotest_common.sh@10 -- # set +x 00:20:15.169 23:05:47 -- common/autotest_common.sh@579 -- # [[ 0 
== 0 ]] 00:20:15.169 23:05:47 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:15.169 23:05:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:15.169 23:05:47 -- common/autotest_common.sh@10 -- # set +x 00:20:15.169 23:05:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:15.169 23:05:47 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:15.169 23:05:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:15.169 23:05:47 -- common/autotest_common.sh@10 -- # set +x 00:20:15.169 [2024-07-24 23:05:47.387312] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:15.169 23:05:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:15.169 23:05:47 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:20:15.169 23:05:47 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:20:15.169 23:05:47 -- nvmf/common.sh@520 -- # config=() 00:20:15.169 23:05:47 -- nvmf/common.sh@520 -- # local subsystem config 00:20:15.169 23:05:47 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:20:15.169 23:05:47 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:20:15.169 { 00:20:15.169 "params": { 00:20:15.169 "name": "Nvme$subsystem", 00:20:15.169 "trtype": "$TEST_TRANSPORT", 00:20:15.169 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:15.169 "adrfam": "ipv4", 00:20:15.169 "trsvcid": "$NVMF_PORT", 00:20:15.169 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:15.169 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:15.169 "hdgst": ${hdgst:-false}, 00:20:15.169 "ddgst": ${ddgst:-false} 00:20:15.169 }, 00:20:15.169 "method": "bdev_nvme_attach_controller" 00:20:15.169 } 00:20:15.169 EOF 00:20:15.169 )") 00:20:15.169 23:05:47 -- nvmf/common.sh@542 -- # cat 00:20:15.169 23:05:47 -- nvmf/common.sh@544 -- # jq . 
00:20:15.169 23:05:47 -- nvmf/common.sh@545 -- # IFS=, 00:20:15.169 23:05:47 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:20:15.169 "params": { 00:20:15.169 "name": "Nvme1", 00:20:15.169 "trtype": "tcp", 00:20:15.169 "traddr": "10.0.0.2", 00:20:15.169 "adrfam": "ipv4", 00:20:15.169 "trsvcid": "4420", 00:20:15.169 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:15.169 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:15.169 "hdgst": false, 00:20:15.169 "ddgst": false 00:20:15.169 }, 00:20:15.169 "method": "bdev_nvme_attach_controller" 00:20:15.169 }' 00:20:15.169 [2024-07-24 23:05:47.435487] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:20:15.169 [2024-07-24 23:05:47.435537] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3239818 ] 00:20:15.169 EAL: No free 2048 kB hugepages reported on node 1 00:20:15.169 [2024-07-24 23:05:47.507543] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:15.169 [2024-07-24 23:05:47.545013] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:15.169 [2024-07-24 23:05:47.545110] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:15.169 [2024-07-24 23:05:47.545112] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:15.429 [2024-07-24 23:05:47.734077] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:20:15.429 [2024-07-24 23:05:47.734116] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:20:15.429 I/O targets: 00:20:15.429 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:20:15.429 00:20:15.429 00:20:15.429 CUnit - A unit testing framework for C - Version 2.1-3 00:20:15.429 http://cunit.sourceforge.net/ 00:20:15.429 00:20:15.429 00:20:15.429 Suite: bdevio tests on: Nvme1n1 00:20:15.429 Test: blockdev write read block ...passed 00:20:15.429 Test: blockdev write zeroes read block ...passed 00:20:15.429 Test: blockdev write zeroes read no split ...passed 00:20:15.688 Test: blockdev write zeroes read split ...passed 00:20:15.688 Test: blockdev write zeroes read split partial ...passed 00:20:15.688 Test: blockdev reset ...[2024-07-24 23:05:47.949392] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:15.688 [2024-07-24 23:05:47.949443] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf66e80 (9): Bad file descriptor 00:20:15.688 [2024-07-24 23:05:48.045518] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:20:15.688 passed 00:20:15.688 Test: blockdev write read 8 blocks ...passed 00:20:15.688 Test: blockdev write read size > 128k ...passed 00:20:15.688 Test: blockdev write read invalid size ...passed 00:20:15.688 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:15.688 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:15.688 Test: blockdev write read max offset ...passed 00:20:15.948 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:15.948 Test: blockdev writev readv 8 blocks ...passed 00:20:15.948 Test: blockdev writev readv 30 x 1block ...passed 00:20:15.948 Test: blockdev writev readv block ...passed 00:20:15.948 Test: blockdev writev readv size > 128k ...passed 00:20:15.948 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:15.948 Test: blockdev comparev and writev ...[2024-07-24 23:05:48.224319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:15.948 [2024-07-24 23:05:48.224350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:15.948 [2024-07-24 23:05:48.224366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:15.948 [2024-07-24 23:05:48.224377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:15.948 [2024-07-24 23:05:48.224699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:15.948 [2024-07-24 23:05:48.224712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:15.948 [2024-07-24 23:05:48.224730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:15.948 [2024-07-24 23:05:48.224740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:15.948 [2024-07-24 23:05:48.225043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:15.948 [2024-07-24 23:05:48.225055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:15.948 [2024-07-24 23:05:48.225070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:15.948 [2024-07-24 23:05:48.225080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:15.948 [2024-07-24 23:05:48.225394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:15.948 [2024-07-24 23:05:48.225407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:15.948 [2024-07-24 23:05:48.225421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:15.948 [2024-07-24 23:05:48.225431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:15.948 passed 00:20:15.948 Test: blockdev nvme passthru rw ...passed 00:20:15.948 Test: blockdev nvme passthru vendor specific ...[2024-07-24 23:05:48.308233] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:15.948 [2024-07-24 23:05:48.308250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:15.948 [2024-07-24 23:05:48.308446] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:15.948 [2024-07-24 23:05:48.308458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:15.948 [2024-07-24 23:05:48.308647] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:15.948 [2024-07-24 23:05:48.308659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:15.948 [2024-07-24 23:05:48.308859] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:15.948 [2024-07-24 23:05:48.308872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:15.948 passed 00:20:15.948 Test: blockdev nvme admin passthru ...passed 00:20:15.948 Test: blockdev copy ...passed 00:20:15.948 00:20:15.948 Run Summary: Type Total Ran Passed Failed Inactive 00:20:15.948 suites 1 1 n/a 0 0 00:20:15.948 tests 23 23 23 0 0 00:20:15.948 asserts 152 152 152 0 n/a 00:20:15.948 00:20:15.948 Elapsed time = 1.254 seconds 00:20:16.208 23:05:48 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:16.208 23:05:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:16.208 23:05:48 -- common/autotest_common.sh@10 -- # set +x 00:20:16.208 23:05:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:16.208 23:05:48 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:20:16.208 23:05:48 -- target/bdevio.sh@30 -- # nvmftestfini 00:20:16.208 23:05:48 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:16.208 23:05:48 -- nvmf/common.sh@116 -- # sync 00:20:16.208 
23:05:48 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:16.208 23:05:48 -- nvmf/common.sh@119 -- # set +e 00:20:16.208 23:05:48 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:16.208 23:05:48 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:16.208 rmmod nvme_tcp 00:20:16.208 rmmod nvme_fabrics 00:20:16.208 rmmod nvme_keyring 00:20:16.208 23:05:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:16.208 23:05:48 -- nvmf/common.sh@123 -- # set -e 00:20:16.208 23:05:48 -- nvmf/common.sh@124 -- # return 0 00:20:16.208 23:05:48 -- nvmf/common.sh@477 -- # '[' -n 3239760 ']' 00:20:16.208 23:05:48 -- nvmf/common.sh@478 -- # killprocess 3239760 00:20:16.208 23:05:48 -- common/autotest_common.sh@926 -- # '[' -z 3239760 ']' 00:20:16.208 23:05:48 -- common/autotest_common.sh@930 -- # kill -0 3239760 00:20:16.208 23:05:48 -- common/autotest_common.sh@931 -- # uname 00:20:16.208 23:05:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:16.208 23:05:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3239760 00:20:16.537 23:05:48 -- common/autotest_common.sh@932 -- # process_name=reactor_3 00:20:16.537 23:05:48 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:20:16.537 23:05:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3239760' 00:20:16.537 killing process with pid 3239760 00:20:16.537 23:05:48 -- common/autotest_common.sh@945 -- # kill 3239760 00:20:16.537 23:05:48 -- common/autotest_common.sh@950 -- # wait 3239760 00:20:16.537 23:05:48 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:16.537 23:05:48 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:16.537 23:05:48 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:16.537 23:05:48 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:16.537 23:05:48 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:16.537 23:05:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:16.537 23:05:48 -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:16.537 23:05:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:19.075 23:05:50 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:20:19.075 00:20:19.075 real 0m11.200s 00:20:19.075 user 0m12.743s 00:20:19.075 sys 0m5.741s 00:20:19.075 23:05:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:19.075 23:05:50 -- common/autotest_common.sh@10 -- # set +x 00:20:19.075 ************************************ 00:20:19.075 END TEST nvmf_bdevio 00:20:19.075 ************************************ 00:20:19.075 23:05:50 -- nvmf/nvmf.sh@57 -- # '[' tcp = tcp ']' 00:20:19.075 23:05:50 -- nvmf/nvmf.sh@58 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:20:19.075 23:05:50 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:20:19.075 23:05:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:19.075 23:05:50 -- common/autotest_common.sh@10 -- # set +x 00:20:19.075 ************************************ 00:20:19.075 START TEST nvmf_bdevio_no_huge 00:20:19.075 ************************************ 00:20:19.075 23:05:50 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:20:19.075 * Looking for test storage... 
00:20:19.075 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:19.075 23:05:51 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:19.075 23:05:51 -- nvmf/common.sh@7 -- # uname -s 00:20:19.075 23:05:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:19.075 23:05:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:19.075 23:05:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:19.075 23:05:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:19.075 23:05:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:19.075 23:05:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:19.075 23:05:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:19.075 23:05:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:19.075 23:05:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:19.075 23:05:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:19.075 23:05:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:20:19.075 23:05:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:20:19.075 23:05:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:19.075 23:05:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:19.075 23:05:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:19.075 23:05:51 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:19.075 23:05:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:19.075 23:05:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:19.075 23:05:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:19.075 23:05:51 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.076 23:05:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.076 23:05:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.076 23:05:51 -- paths/export.sh@5 -- # export PATH 00:20:19.076 23:05:51 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.076 23:05:51 -- nvmf/common.sh@46 -- # : 0 00:20:19.076 23:05:51 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:19.076 23:05:51 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:19.076 23:05:51 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:19.076 23:05:51 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:19.076 23:05:51 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:19.076 23:05:51 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:19.076 23:05:51 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:19.076 23:05:51 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:19.076 23:05:51 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:19.076 23:05:51 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:19.076 23:05:51 -- target/bdevio.sh@14 -- # nvmftestinit 00:20:19.076 23:05:51 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:19.076 23:05:51 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:19.076 23:05:51 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:19.076 23:05:51 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:19.076 23:05:51 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:19.076 23:05:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:19.076 23:05:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:19.076 23:05:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:19.076 23:05:51 -- 
nvmf/common.sh@402 -- # [[ phy != virt ]] 00:20:19.076 23:05:51 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:20:19.076 23:05:51 -- nvmf/common.sh@284 -- # xtrace_disable 00:20:19.076 23:05:51 -- common/autotest_common.sh@10 -- # set +x 00:20:25.650 23:05:57 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:25.650 23:05:57 -- nvmf/common.sh@290 -- # pci_devs=() 00:20:25.650 23:05:57 -- nvmf/common.sh@290 -- # local -a pci_devs 00:20:25.650 23:05:57 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:20:25.650 23:05:57 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:20:25.650 23:05:57 -- nvmf/common.sh@292 -- # pci_drivers=() 00:20:25.650 23:05:57 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:20:25.650 23:05:57 -- nvmf/common.sh@294 -- # net_devs=() 00:20:25.650 23:05:57 -- nvmf/common.sh@294 -- # local -ga net_devs 00:20:25.650 23:05:57 -- nvmf/common.sh@295 -- # e810=() 00:20:25.650 23:05:57 -- nvmf/common.sh@295 -- # local -ga e810 00:20:25.650 23:05:57 -- nvmf/common.sh@296 -- # x722=() 00:20:25.650 23:05:57 -- nvmf/common.sh@296 -- # local -ga x722 00:20:25.650 23:05:57 -- nvmf/common.sh@297 -- # mlx=() 00:20:25.650 23:05:57 -- nvmf/common.sh@297 -- # local -ga mlx 00:20:25.650 23:05:57 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:25.650 23:05:57 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:25.650 23:05:57 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:25.650 23:05:57 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:25.650 23:05:57 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:25.650 23:05:57 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:25.650 23:05:57 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:25.650 23:05:57 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:25.650 23:05:57 -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:25.650 23:05:57 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:25.650 23:05:57 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:25.650 23:05:57 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:20:25.650 23:05:57 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:20:25.650 23:05:57 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:20:25.650 23:05:57 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:20:25.650 23:05:57 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:20:25.650 23:05:57 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:20:25.650 23:05:57 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:25.650 23:05:57 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:25.650 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:25.650 23:05:57 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:25.650 23:05:57 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:25.650 23:05:57 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:25.650 23:05:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:25.650 23:05:57 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:25.650 23:05:57 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:25.650 23:05:57 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:25.651 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:25.651 23:05:57 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:25.651 23:05:57 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:25.651 23:05:57 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:25.651 23:05:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:25.651 23:05:57 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:25.651 23:05:57 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:20:25.651 23:05:57 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:20:25.651 23:05:57 -- 
nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:20:25.651 23:05:57 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:25.651 23:05:57 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:25.651 23:05:57 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:25.651 23:05:57 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:25.651 23:05:57 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:25.651 Found net devices under 0000:af:00.0: cvl_0_0 00:20:25.651 23:05:57 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:25.651 23:05:57 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:25.651 23:05:57 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:25.651 23:05:57 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:25.651 23:05:57 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:25.651 23:05:57 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:25.651 Found net devices under 0000:af:00.1: cvl_0_1 00:20:25.651 23:05:57 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:25.651 23:05:57 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:20:25.651 23:05:57 -- nvmf/common.sh@402 -- # is_hw=yes 00:20:25.651 23:05:57 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:20:25.651 23:05:57 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:20:25.651 23:05:57 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:20:25.651 23:05:57 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:25.651 23:05:57 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:25.651 23:05:57 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:25.651 23:05:57 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:20:25.651 23:05:57 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:25.651 23:05:57 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:25.651 23:05:57 -- 
nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:20:25.651 23:05:57 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:25.651 23:05:57 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:25.651 23:05:57 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:20:25.651 23:05:57 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:20:25.651 23:05:57 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:20:25.651 23:05:57 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:25.651 23:05:57 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:25.651 23:05:57 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:25.651 23:05:57 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:20:25.651 23:05:57 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:25.651 23:05:57 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:25.651 23:05:57 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:25.651 23:05:57 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:20:25.651 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:25.651 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:20:25.651 00:20:25.651 --- 10.0.0.2 ping statistics --- 00:20:25.651 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:25.651 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:20:25.651 23:05:57 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:25.651 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:25.651 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.256 ms 00:20:25.651 00:20:25.651 --- 10.0.0.1 ping statistics --- 00:20:25.651 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:25.651 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:20:25.651 23:05:57 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:25.651 23:05:57 -- nvmf/common.sh@410 -- # return 0 00:20:25.651 23:05:57 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:25.651 23:05:57 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:25.651 23:05:57 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:25.651 23:05:57 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:25.651 23:05:57 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:25.651 23:05:57 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:25.651 23:05:57 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:25.651 23:05:57 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:20:25.651 23:05:57 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:25.651 23:05:57 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:25.651 23:05:57 -- common/autotest_common.sh@10 -- # set +x 00:20:25.651 23:05:57 -- nvmf/common.sh@469 -- # nvmfpid=3243752 00:20:25.651 23:05:57 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:20:25.651 23:05:57 -- nvmf/common.sh@470 -- # waitforlisten 3243752 00:20:25.651 23:05:57 -- common/autotest_common.sh@819 -- # '[' -z 3243752 ']' 00:20:25.651 23:05:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:25.651 23:05:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:25.651 23:05:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:20:25.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:25.651 23:05:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:25.651 23:05:57 -- common/autotest_common.sh@10 -- # set +x 00:20:25.651 [2024-07-24 23:05:57.486531] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:20:25.651 [2024-07-24 23:05:57.486581] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:20:25.651 [2024-07-24 23:05:57.565444] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:25.651 [2024-07-24 23:05:57.642643] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:25.651 [2024-07-24 23:05:57.642754] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:25.651 [2024-07-24 23:05:57.642764] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:25.651 [2024-07-24 23:05:57.642774] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:25.651 [2024-07-24 23:05:57.642884] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:20:25.651 [2024-07-24 23:05:57.642998] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:20:25.651 [2024-07-24 23:05:57.643109] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:20:25.651 [2024-07-24 23:05:57.643108] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:25.910 23:05:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:25.910 23:05:58 -- common/autotest_common.sh@852 -- # return 0 00:20:25.910 23:05:58 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:25.910 23:05:58 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:25.910 23:05:58 -- common/autotest_common.sh@10 -- # set +x 00:20:25.910 23:05:58 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:25.910 23:05:58 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:25.910 23:05:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:25.910 23:05:58 -- common/autotest_common.sh@10 -- # set +x 00:20:25.910 [2024-07-24 23:05:58.332139] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:26.170 23:05:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:26.170 23:05:58 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:26.170 23:05:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:26.170 23:05:58 -- common/autotest_common.sh@10 -- # set +x 00:20:26.170 Malloc0 00:20:26.170 23:05:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:26.170 23:05:58 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:26.170 23:05:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:26.170 23:05:58 -- common/autotest_common.sh@10 -- # set +x 00:20:26.170 23:05:58 -- common/autotest_common.sh@579 -- # [[ 0 
== 0 ]] 00:20:26.170 23:05:58 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:26.170 23:05:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:26.170 23:05:58 -- common/autotest_common.sh@10 -- # set +x 00:20:26.170 23:05:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:26.170 23:05:58 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:26.170 23:05:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:26.170 23:05:58 -- common/autotest_common.sh@10 -- # set +x 00:20:26.170 [2024-07-24 23:05:58.376800] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:26.170 23:05:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:26.170 23:05:58 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:20:26.171 23:05:58 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:20:26.171 23:05:58 -- nvmf/common.sh@520 -- # config=() 00:20:26.171 23:05:58 -- nvmf/common.sh@520 -- # local subsystem config 00:20:26.171 23:05:58 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:20:26.171 23:05:58 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:20:26.171 { 00:20:26.171 "params": { 00:20:26.171 "name": "Nvme$subsystem", 00:20:26.171 "trtype": "$TEST_TRANSPORT", 00:20:26.171 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:26.171 "adrfam": "ipv4", 00:20:26.171 "trsvcid": "$NVMF_PORT", 00:20:26.171 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:26.171 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:26.171 "hdgst": ${hdgst:-false}, 00:20:26.171 "ddgst": ${ddgst:-false} 00:20:26.171 }, 00:20:26.171 "method": "bdev_nvme_attach_controller" 00:20:26.171 } 00:20:26.171 EOF 00:20:26.171 )") 00:20:26.171 23:05:58 -- nvmf/common.sh@542 -- # cat 00:20:26.171 23:05:58 -- nvmf/common.sh@544 -- # jq 
. 00:20:26.171 23:05:58 -- nvmf/common.sh@545 -- # IFS=, 00:20:26.171 23:05:58 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:20:26.171 "params": { 00:20:26.171 "name": "Nvme1", 00:20:26.171 "trtype": "tcp", 00:20:26.171 "traddr": "10.0.0.2", 00:20:26.171 "adrfam": "ipv4", 00:20:26.171 "trsvcid": "4420", 00:20:26.171 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:26.171 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:26.171 "hdgst": false, 00:20:26.171 "ddgst": false 00:20:26.171 }, 00:20:26.171 "method": "bdev_nvme_attach_controller" 00:20:26.171 }' 00:20:26.171 [2024-07-24 23:05:58.428026] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:20:26.171 [2024-07-24 23:05:58.428070] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3243802 ] 00:20:26.171 [2024-07-24 23:05:58.505311] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:26.171 [2024-07-24 23:05:58.584749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:26.171 [2024-07-24 23:05:58.584845] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:26.171 [2024-07-24 23:05:58.584847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:26.743 [2024-07-24 23:05:58.896940] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:20:26.743 [2024-07-24 23:05:58.896975] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:20:26.743 I/O targets: 00:20:26.743 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:20:26.743 00:20:26.743 00:20:26.743 CUnit - A unit testing framework for C - Version 2.1-3 00:20:26.743 http://cunit.sourceforge.net/ 00:20:26.743 00:20:26.743 00:20:26.743 Suite: bdevio tests on: Nvme1n1 00:20:26.743 Test: blockdev write read block ...passed 00:20:26.743 Test: blockdev write zeroes read block ...passed 00:20:26.743 Test: blockdev write zeroes read no split ...passed 00:20:26.743 Test: blockdev write zeroes read split ...passed 00:20:26.743 Test: blockdev write zeroes read split partial ...passed 00:20:26.743 Test: blockdev reset ...[2024-07-24 23:05:59.117000] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:26.743 [2024-07-24 23:05:59.117051] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2026cd0 (9): Bad file descriptor 00:20:26.743 [2024-07-24 23:05:59.135092] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:20:26.743 passed 00:20:27.002 Test: blockdev write read 8 blocks ...passed 00:20:27.002 Test: blockdev write read size > 128k ...passed 00:20:27.002 Test: blockdev write read invalid size ...passed 00:20:27.002 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:27.002 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:27.002 Test: blockdev write read max offset ...passed 00:20:27.002 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:27.002 Test: blockdev writev readv 8 blocks ...passed 00:20:27.002 Test: blockdev writev readv 30 x 1block ...passed 00:20:27.002 Test: blockdev writev readv block ...passed 00:20:27.002 Test: blockdev writev readv size > 128k ...passed 00:20:27.002 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:27.262 Test: blockdev comparev and writev ...[2024-07-24 23:05:59.433341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:27.262 [2024-07-24 23:05:59.433372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:27.262 [2024-07-24 23:05:59.433390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:27.262 [2024-07-24 23:05:59.433402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:27.262 [2024-07-24 23:05:59.433731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:27.262 [2024-07-24 23:05:59.433745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:27.262 [2024-07-24 23:05:59.433759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:27.262 [2024-07-24 23:05:59.433769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:27.262 [2024-07-24 23:05:59.434092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:27.262 [2024-07-24 23:05:59.434104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:27.262 [2024-07-24 23:05:59.434117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:27.262 [2024-07-24 23:05:59.434127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:27.262 [2024-07-24 23:05:59.434447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:27.262 [2024-07-24 23:05:59.434460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:27.262 [2024-07-24 23:05:59.434474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:27.262 [2024-07-24 23:05:59.434484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:27.262 passed 00:20:27.262 Test: blockdev nvme passthru rw ...passed 00:20:27.262 Test: blockdev nvme passthru vendor specific ...[2024-07-24 23:05:59.516268] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:27.262 [2024-07-24 23:05:59.516286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:27.262 [2024-07-24 23:05:59.516479] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:27.262 [2024-07-24 23:05:59.516491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:27.262 [2024-07-24 23:05:59.516689] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:27.262 [2024-07-24 23:05:59.516703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:27.262 [2024-07-24 23:05:59.516901] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:27.262 [2024-07-24 23:05:59.516914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:27.262 passed 00:20:27.262 Test: blockdev nvme admin passthru ...passed 00:20:27.262 Test: blockdev copy ...passed 00:20:27.262 00:20:27.262 Run Summary: Type Total Ran Passed Failed Inactive 00:20:27.262 suites 1 1 n/a 0 0 00:20:27.262 tests 23 23 23 0 0 00:20:27.262 asserts 152 152 152 0 n/a 00:20:27.262 00:20:27.262 Elapsed time = 1.347 seconds 00:20:27.522 23:05:59 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:27.522 23:05:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:27.522 23:05:59 -- common/autotest_common.sh@10 -- # set +x 00:20:27.522 23:05:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:27.522 23:05:59 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:20:27.522 23:05:59 -- target/bdevio.sh@30 -- # nvmftestfini 00:20:27.522 23:05:59 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:27.522 23:05:59 -- nvmf/common.sh@116 -- # sync 00:20:27.522 
23:05:59 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:27.522 23:05:59 -- nvmf/common.sh@119 -- # set +e 00:20:27.522 23:05:59 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:27.522 23:05:59 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:27.522 rmmod nvme_tcp 00:20:27.522 rmmod nvme_fabrics 00:20:27.522 rmmod nvme_keyring 00:20:27.782 23:05:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:27.782 23:05:59 -- nvmf/common.sh@123 -- # set -e 00:20:27.782 23:05:59 -- nvmf/common.sh@124 -- # return 0 00:20:27.782 23:05:59 -- nvmf/common.sh@477 -- # '[' -n 3243752 ']' 00:20:27.782 23:05:59 -- nvmf/common.sh@478 -- # killprocess 3243752 00:20:27.782 23:05:59 -- common/autotest_common.sh@926 -- # '[' -z 3243752 ']' 00:20:27.782 23:05:59 -- common/autotest_common.sh@930 -- # kill -0 3243752 00:20:27.782 23:05:59 -- common/autotest_common.sh@931 -- # uname 00:20:27.782 23:05:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:27.782 23:05:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3243752 00:20:27.782 23:06:00 -- common/autotest_common.sh@932 -- # process_name=reactor_3 00:20:27.782 23:06:00 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:20:27.782 23:06:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3243752' 00:20:27.782 killing process with pid 3243752 00:20:27.782 23:06:00 -- common/autotest_common.sh@945 -- # kill 3243752 00:20:27.782 23:06:00 -- common/autotest_common.sh@950 -- # wait 3243752 00:20:28.041 23:06:00 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:28.041 23:06:00 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:28.041 23:06:00 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:28.041 23:06:00 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:28.041 23:06:00 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:28.041 23:06:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:28.041 23:06:00 -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:28.041 23:06:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:30.576 23:06:02 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:20:30.576 00:20:30.576 real 0m11.472s 00:20:30.576 user 0m14.898s 00:20:30.576 sys 0m5.978s 00:20:30.576 23:06:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:30.576 23:06:02 -- common/autotest_common.sh@10 -- # set +x 00:20:30.576 ************************************ 00:20:30.576 END TEST nvmf_bdevio_no_huge 00:20:30.576 ************************************ 00:20:30.576 23:06:02 -- nvmf/nvmf.sh@59 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:30.576 23:06:02 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:20:30.576 23:06:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:30.576 23:06:02 -- common/autotest_common.sh@10 -- # set +x 00:20:30.576 ************************************ 00:20:30.576 START TEST nvmf_tls 00:20:30.576 ************************************ 00:20:30.576 23:06:02 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:30.576 * Looking for test storage... 
00:20:30.576 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:30.576 23:06:02 -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:30.576 23:06:02 -- nvmf/common.sh@7 -- # uname -s 00:20:30.576 23:06:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:30.576 23:06:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:30.576 23:06:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:30.576 23:06:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:30.576 23:06:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:30.576 23:06:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:30.576 23:06:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:30.576 23:06:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:30.576 23:06:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:30.576 23:06:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:30.576 23:06:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:20:30.576 23:06:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:20:30.576 23:06:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:30.576 23:06:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:30.576 23:06:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:30.576 23:06:02 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:30.576 23:06:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:30.576 23:06:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:30.576 23:06:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:30.576 23:06:02 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.576 23:06:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.576 23:06:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.576 23:06:02 -- paths/export.sh@5 -- # export PATH 00:20:30.576 23:06:02 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.576 23:06:02 -- nvmf/common.sh@46 -- # : 0 00:20:30.576 23:06:02 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:30.576 23:06:02 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:30.576 23:06:02 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:30.576 23:06:02 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:30.576 23:06:02 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:30.576 23:06:02 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:30.576 23:06:02 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:30.576 23:06:02 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:30.576 23:06:02 -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:30.576 23:06:02 -- target/tls.sh@71 -- # nvmftestinit 00:20:30.576 23:06:02 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:30.576 23:06:02 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:30.576 23:06:02 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:30.576 23:06:02 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:30.576 23:06:02 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:30.576 23:06:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:30.576 23:06:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:30.576 23:06:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:30.576 23:06:02 -- nvmf/common.sh@402 -- # [[ phy != virt 
]] 00:20:30.576 23:06:02 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:20:30.576 23:06:02 -- nvmf/common.sh@284 -- # xtrace_disable 00:20:30.576 23:06:02 -- common/autotest_common.sh@10 -- # set +x 00:20:37.145 23:06:09 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:37.145 23:06:09 -- nvmf/common.sh@290 -- # pci_devs=() 00:20:37.145 23:06:09 -- nvmf/common.sh@290 -- # local -a pci_devs 00:20:37.145 23:06:09 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:20:37.145 23:06:09 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:20:37.145 23:06:09 -- nvmf/common.sh@292 -- # pci_drivers=() 00:20:37.145 23:06:09 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:20:37.145 23:06:09 -- nvmf/common.sh@294 -- # net_devs=() 00:20:37.145 23:06:09 -- nvmf/common.sh@294 -- # local -ga net_devs 00:20:37.145 23:06:09 -- nvmf/common.sh@295 -- # e810=() 00:20:37.145 23:06:09 -- nvmf/common.sh@295 -- # local -ga e810 00:20:37.145 23:06:09 -- nvmf/common.sh@296 -- # x722=() 00:20:37.145 23:06:09 -- nvmf/common.sh@296 -- # local -ga x722 00:20:37.145 23:06:09 -- nvmf/common.sh@297 -- # mlx=() 00:20:37.145 23:06:09 -- nvmf/common.sh@297 -- # local -ga mlx 00:20:37.145 23:06:09 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:37.145 23:06:09 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:37.145 23:06:09 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:37.145 23:06:09 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:37.145 23:06:09 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:37.145 23:06:09 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:37.145 23:06:09 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:37.145 23:06:09 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:37.145 23:06:09 -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:37.145 23:06:09 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:37.145 23:06:09 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:37.145 23:06:09 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:20:37.145 23:06:09 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:20:37.145 23:06:09 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:20:37.145 23:06:09 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:20:37.145 23:06:09 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:20:37.146 23:06:09 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:20:37.146 23:06:09 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:37.146 23:06:09 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:37.146 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:37.146 23:06:09 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:37.146 23:06:09 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:37.146 23:06:09 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:37.146 23:06:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:37.146 23:06:09 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:37.146 23:06:09 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:37.146 23:06:09 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:37.146 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:37.146 23:06:09 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:37.146 23:06:09 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:37.146 23:06:09 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:37.146 23:06:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:37.146 23:06:09 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:37.146 23:06:09 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:20:37.146 23:06:09 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:20:37.146 23:06:09 -- nvmf/common.sh@371 -- # [[ tcp == 
rdma ]] 00:20:37.146 23:06:09 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:37.146 23:06:09 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:37.146 23:06:09 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:37.146 23:06:09 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:37.146 23:06:09 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:37.146 Found net devices under 0000:af:00.0: cvl_0_0 00:20:37.146 23:06:09 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:37.146 23:06:09 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:37.146 23:06:09 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:37.146 23:06:09 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:37.146 23:06:09 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:37.146 23:06:09 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:37.146 Found net devices under 0000:af:00.1: cvl_0_1 00:20:37.146 23:06:09 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:37.146 23:06:09 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:20:37.146 23:06:09 -- nvmf/common.sh@402 -- # is_hw=yes 00:20:37.146 23:06:09 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:20:37.146 23:06:09 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:20:37.146 23:06:09 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:20:37.146 23:06:09 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:37.146 23:06:09 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:37.146 23:06:09 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:37.146 23:06:09 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:20:37.146 23:06:09 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:37.146 23:06:09 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:37.146 23:06:09 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 
00:20:37.146 23:06:09 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:37.146 23:06:09 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:37.146 23:06:09 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:20:37.146 23:06:09 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:20:37.146 23:06:09 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:20:37.146 23:06:09 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:37.146 23:06:09 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:37.146 23:06:09 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:37.146 23:06:09 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:20:37.146 23:06:09 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:37.146 23:06:09 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:37.146 23:06:09 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:37.146 23:06:09 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:20:37.146 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:37.146 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.261 ms 00:20:37.146 00:20:37.146 --- 10.0.0.2 ping statistics --- 00:20:37.146 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:37.146 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:20:37.146 23:06:09 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:37.146 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:37.146 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.232 ms 00:20:37.146 00:20:37.146 --- 10.0.0.1 ping statistics --- 00:20:37.146 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:37.146 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:20:37.146 23:06:09 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:37.146 23:06:09 -- nvmf/common.sh@410 -- # return 0 00:20:37.146 23:06:09 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:37.146 23:06:09 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:37.146 23:06:09 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:37.146 23:06:09 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:37.146 23:06:09 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:37.146 23:06:09 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:37.146 23:06:09 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:37.146 23:06:09 -- target/tls.sh@72 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:20:37.146 23:06:09 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:37.146 23:06:09 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:37.146 23:06:09 -- common/autotest_common.sh@10 -- # set +x 00:20:37.146 23:06:09 -- nvmf/common.sh@469 -- # nvmfpid=3248322 00:20:37.146 23:06:09 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:20:37.146 23:06:09 -- nvmf/common.sh@470 -- # waitforlisten 3248322 00:20:37.146 23:06:09 -- common/autotest_common.sh@819 -- # '[' -z 3248322 ']' 00:20:37.146 23:06:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:37.146 23:06:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:37.146 23:06:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:20:37.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:37.146 23:06:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:37.146 23:06:09 -- common/autotest_common.sh@10 -- # set +x 00:20:37.146 [2024-07-24 23:06:09.444198] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:20:37.146 [2024-07-24 23:06:09.444248] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:37.146 EAL: No free 2048 kB hugepages reported on node 1 00:20:37.146 [2024-07-24 23:06:09.524011] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:37.146 [2024-07-24 23:06:09.559804] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:37.146 [2024-07-24 23:06:09.559917] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:37.146 [2024-07-24 23:06:09.559927] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:37.146 [2024-07-24 23:06:09.559935] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:37.146 [2024-07-24 23:06:09.559957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:38.083 23:06:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:38.083 23:06:10 -- common/autotest_common.sh@852 -- # return 0 00:20:38.083 23:06:10 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:38.083 23:06:10 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:38.083 23:06:10 -- common/autotest_common.sh@10 -- # set +x 00:20:38.083 23:06:10 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:38.083 23:06:10 -- target/tls.sh@74 -- # '[' tcp '!=' tcp ']' 00:20:38.083 23:06:10 -- target/tls.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:20:38.083 true 00:20:38.083 23:06:10 -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:38.083 23:06:10 -- target/tls.sh@82 -- # jq -r .tls_version 00:20:38.342 23:06:10 -- target/tls.sh@82 -- # version=0 00:20:38.342 23:06:10 -- target/tls.sh@83 -- # [[ 0 != \0 ]] 00:20:38.342 23:06:10 -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:38.601 23:06:10 -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:38.601 23:06:10 -- target/tls.sh@90 -- # jq -r .tls_version 00:20:38.601 23:06:10 -- target/tls.sh@90 -- # version=13 00:20:38.601 23:06:10 -- target/tls.sh@91 -- # [[ 13 != \1\3 ]] 00:20:38.601 23:06:10 -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:20:38.860 23:06:11 -- target/tls.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:38.860 23:06:11 -- target/tls.sh@98 -- # jq -r .tls_version 
00:20:38.860 23:06:11 -- target/tls.sh@98 -- # version=7 00:20:38.860 23:06:11 -- target/tls.sh@99 -- # [[ 7 != \7 ]] 00:20:38.860 23:06:11 -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:38.860 23:06:11 -- target/tls.sh@105 -- # jq -r .enable_ktls 00:20:39.120 23:06:11 -- target/tls.sh@105 -- # ktls=false 00:20:39.120 23:06:11 -- target/tls.sh@106 -- # [[ false != \f\a\l\s\e ]] 00:20:39.120 23:06:11 -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:20:39.379 23:06:11 -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:39.379 23:06:11 -- target/tls.sh@113 -- # jq -r .enable_ktls 00:20:39.379 23:06:11 -- target/tls.sh@113 -- # ktls=true 00:20:39.379 23:06:11 -- target/tls.sh@114 -- # [[ true != \t\r\u\e ]] 00:20:39.379 23:06:11 -- target/tls.sh@120 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:20:39.638 23:06:11 -- target/tls.sh@121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:39.638 23:06:11 -- target/tls.sh@121 -- # jq -r .enable_ktls 00:20:39.898 23:06:12 -- target/tls.sh@121 -- # ktls=false 00:20:39.898 23:06:12 -- target/tls.sh@122 -- # [[ false != \f\a\l\s\e ]] 00:20:39.898 23:06:12 -- target/tls.sh@127 -- # format_interchange_psk 00112233445566778899aabbccddeeff 00:20:39.898 23:06:12 -- target/tls.sh@49 -- # local key hash crc 00:20:39.898 23:06:12 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff 00:20:39.898 23:06:12 -- target/tls.sh@51 -- # hash=01 00:20:39.898 23:06:12 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff 00:20:39.898 23:06:12 -- target/tls.sh@52 -- # tail -c8 00:20:39.898 23:06:12 -- target/tls.sh@52 -- # gzip -1 -c 00:20:39.898 23:06:12 -- 
target/tls.sh@52 -- # head -c 4 00:20:39.898 23:06:12 -- target/tls.sh@52 -- # crc='p$H�' 00:20:39.898 23:06:12 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:20:39.898 23:06:12 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeffp$H�' 00:20:39.898 23:06:12 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:39.898 23:06:12 -- target/tls.sh@127 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:39.898 23:06:12 -- target/tls.sh@128 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 00:20:39.898 23:06:12 -- target/tls.sh@49 -- # local key hash crc 00:20:39.898 23:06:12 -- target/tls.sh@51 -- # key=ffeeddccbbaa99887766554433221100 00:20:39.898 23:06:12 -- target/tls.sh@51 -- # hash=01 00:20:39.898 23:06:12 -- target/tls.sh@52 -- # echo -n ffeeddccbbaa99887766554433221100 00:20:39.898 23:06:12 -- target/tls.sh@52 -- # head -c 4 00:20:39.898 23:06:12 -- target/tls.sh@52 -- # gzip -1 -c 00:20:39.898 23:06:12 -- target/tls.sh@52 -- # tail -c8 00:20:39.898 23:06:12 -- target/tls.sh@52 -- # crc=$'_\006o\330' 00:20:39.898 23:06:12 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:20:39.898 23:06:12 -- target/tls.sh@54 -- # echo -n $'ffeeddccbbaa99887766554433221100_\006o\330' 00:20:39.898 23:06:12 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:39.898 23:06:12 -- target/tls.sh@128 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:39.898 23:06:12 -- target/tls.sh@130 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:39.898 23:06:12 -- target/tls.sh@131 -- # key_2_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:20:39.898 23:06:12 -- target/tls.sh@133 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:39.898 23:06:12 -- target/tls.sh@134 -- # echo -n 
NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:39.898 23:06:12 -- target/tls.sh@136 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:39.898 23:06:12 -- target/tls.sh@137 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:20:39.898 23:06:12 -- target/tls.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:39.898 23:06:12 -- target/tls.sh@140 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:20:40.158 23:06:12 -- target/tls.sh@142 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:40.158 23:06:12 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:40.158 23:06:12 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:40.417 [2024-07-24 23:06:12.695573] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:40.417 23:06:12 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:40.676 23:06:12 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:40.676 [2024-07-24 23:06:13.040455] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:40.676 [2024-07-24 23:06:13.040678] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:40.676 23:06:13 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:40.935 malloc0 00:20:40.935 23:06:13 -- 
target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:41.194 23:06:13 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:41.194 23:06:13 -- target/tls.sh@146 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:41.194 EAL: No free 2048 kB hugepages reported on node 1 00:20:53.416 Initializing NVMe Controllers 00:20:53.416 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:53.416 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:53.416 Initialization complete. Launching workers. 
00:20:53.416 ======================================================== 00:20:53.417 Latency(us) 00:20:53.417 Device Information : IOPS MiB/s Average min max 00:20:53.417 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16989.37 66.36 3767.45 816.81 5236.52 00:20:53.417 ======================================================== 00:20:53.417 Total : 16989.37 66.36 3767.45 816.81 5236.52 00:20:53.417 00:20:53.417 23:06:23 -- target/tls.sh@152 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:53.417 23:06:23 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:53.417 23:06:23 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:53.417 23:06:23 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:53.417 23:06:23 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:20:53.417 23:06:23 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:53.417 23:06:23 -- target/tls.sh@28 -- # bdevperf_pid=3250794 00:20:53.417 23:06:23 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:53.417 23:06:23 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:53.417 23:06:23 -- target/tls.sh@31 -- # waitforlisten 3250794 /var/tmp/bdevperf.sock 00:20:53.417 23:06:23 -- common/autotest_common.sh@819 -- # '[' -z 3250794 ']' 00:20:53.417 23:06:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:53.417 23:06:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:53.417 23:06:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:20:53.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:53.417 23:06:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:53.417 23:06:23 -- common/autotest_common.sh@10 -- # set +x 00:20:53.417 [2024-07-24 23:06:23.675209] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:20:53.417 [2024-07-24 23:06:23.675264] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3250794 ] 00:20:53.417 EAL: No free 2048 kB hugepages reported on node 1 00:20:53.417 [2024-07-24 23:06:23.745918] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:53.417 [2024-07-24 23:06:23.781466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:53.417 23:06:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:53.417 23:06:24 -- common/autotest_common.sh@852 -- # return 0 00:20:53.417 23:06:24 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:53.417 [2024-07-24 23:06:24.606905] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:53.417 TLSTESTn1 00:20:53.417 23:06:24 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:53.417 Running I/O for 10 seconds... 
00:21:03.443 00:21:03.443 Latency(us) 00:21:03.443 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:03.443 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:03.443 Verification LBA range: start 0x0 length 0x2000 00:21:03.443 TLSTESTn1 : 10.02 3794.47 14.82 0.00 0.00 33704.04 5111.81 62495.13 00:21:03.443 =================================================================================================================== 00:21:03.443 Total : 3794.47 14.82 0.00 0.00 33704.04 5111.81 62495.13 00:21:03.443 0 00:21:03.443 23:06:34 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:03.443 23:06:34 -- target/tls.sh@45 -- # killprocess 3250794 00:21:03.443 23:06:34 -- common/autotest_common.sh@926 -- # '[' -z 3250794 ']' 00:21:03.443 23:06:34 -- common/autotest_common.sh@930 -- # kill -0 3250794 00:21:03.443 23:06:34 -- common/autotest_common.sh@931 -- # uname 00:21:03.443 23:06:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:03.443 23:06:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3250794 00:21:03.443 23:06:34 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:21:03.443 23:06:34 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:21:03.443 23:06:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3250794' 00:21:03.443 killing process with pid 3250794 00:21:03.443 23:06:34 -- common/autotest_common.sh@945 -- # kill 3250794 00:21:03.443 Received shutdown signal, test time was about 10.000000 seconds 00:21:03.443 00:21:03.443 Latency(us) 00:21:03.443 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:03.443 =================================================================================================================== 00:21:03.443 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:03.443 23:06:34 -- common/autotest_common.sh@950 -- # wait 3250794 00:21:03.443 23:06:35 -- 
target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:21:03.443 23:06:35 -- common/autotest_common.sh@640 -- # local es=0 00:21:03.443 23:06:35 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:21:03.443 23:06:35 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:21:03.443 23:06:35 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:03.444 23:06:35 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:21:03.444 23:06:35 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:03.444 23:06:35 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:21:03.444 23:06:35 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:03.444 23:06:35 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:03.444 23:06:35 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:03.444 23:06:35 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt' 00:21:03.444 23:06:35 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:03.444 23:06:35 -- target/tls.sh@28 -- # bdevperf_pid=3252825 00:21:03.444 23:06:35 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:03.444 23:06:35 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:03.444 23:06:35 -- target/tls.sh@31 -- # waitforlisten 3252825 /var/tmp/bdevperf.sock 00:21:03.444 23:06:35 -- common/autotest_common.sh@819 -- # '[' -z 3252825 ']' 00:21:03.444 23:06:35 -- 
common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:03.444 23:06:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:03.444 23:06:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:03.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:03.444 23:06:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:03.444 23:06:35 -- common/autotest_common.sh@10 -- # set +x 00:21:03.444 [2024-07-24 23:06:35.108416] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:21:03.444 [2024-07-24 23:06:35.108474] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3252825 ] 00:21:03.444 EAL: No free 2048 kB hugepages reported on node 1 00:21:03.444 [2024-07-24 23:06:35.179408] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:03.444 [2024-07-24 23:06:35.214580] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:03.703 23:06:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:03.703 23:06:35 -- common/autotest_common.sh@852 -- # return 0 00:21:03.703 23:06:35 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:21:03.703 [2024-07-24 23:06:36.028343] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:03.703 [2024-07-24 23:06:36.036877] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 
428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:03.703 [2024-07-24 23:06:36.037693] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8b7a0 (107): Transport endpoint is not connected 00:21:03.703 [2024-07-24 23:06:36.038686] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8b7a0 (9): Bad file descriptor 00:21:03.703 [2024-07-24 23:06:36.039688] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:03.703 [2024-07-24 23:06:36.039701] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:03.703 [2024-07-24 23:06:36.039711] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:03.703 request: 00:21:03.703 { 00:21:03.703 "name": "TLSTEST", 00:21:03.703 "trtype": "tcp", 00:21:03.703 "traddr": "10.0.0.2", 00:21:03.703 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:03.703 "adrfam": "ipv4", 00:21:03.703 "trsvcid": "4420", 00:21:03.703 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:03.703 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt", 00:21:03.703 "method": "bdev_nvme_attach_controller", 00:21:03.703 "req_id": 1 00:21:03.703 } 00:21:03.703 Got JSON-RPC error response 00:21:03.703 response: 00:21:03.703 { 00:21:03.703 "code": -32602, 00:21:03.703 "message": "Invalid parameters" 00:21:03.703 } 00:21:03.703 23:06:36 -- target/tls.sh@36 -- # killprocess 3252825 00:21:03.703 23:06:36 -- common/autotest_common.sh@926 -- # '[' -z 3252825 ']' 00:21:03.703 23:06:36 -- common/autotest_common.sh@930 -- # kill -0 3252825 00:21:03.703 23:06:36 -- common/autotest_common.sh@931 -- # uname 00:21:03.703 23:06:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:03.703 23:06:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3252825 00:21:03.703 23:06:36 -- 
common/autotest_common.sh@932 -- # process_name=reactor_2 00:21:03.703 23:06:36 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:21:03.703 23:06:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3252825' 00:21:03.703 killing process with pid 3252825 00:21:03.703 23:06:36 -- common/autotest_common.sh@945 -- # kill 3252825 00:21:03.703 Received shutdown signal, test time was about 10.000000 seconds 00:21:03.703 00:21:03.703 Latency(us) 00:21:03.703 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:03.703 =================================================================================================================== 00:21:03.703 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:03.703 23:06:36 -- common/autotest_common.sh@950 -- # wait 3252825 00:21:03.962 23:06:36 -- target/tls.sh@37 -- # return 1 00:21:03.962 23:06:36 -- common/autotest_common.sh@643 -- # es=1 00:21:03.962 23:06:36 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:21:03.962 23:06:36 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:21:03.962 23:06:36 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:21:03.962 23:06:36 -- target/tls.sh@158 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:21:03.962 23:06:36 -- common/autotest_common.sh@640 -- # local es=0 00:21:03.962 23:06:36 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:21:03.962 23:06:36 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:21:03.962 23:06:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:03.962 23:06:36 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:21:03.963 23:06:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" 
in 00:21:03.963 23:06:36 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:21:03.963 23:06:36 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:03.963 23:06:36 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:03.963 23:06:36 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:21:03.963 23:06:36 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:21:03.963 23:06:36 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:03.963 23:06:36 -- target/tls.sh@28 -- # bdevperf_pid=3252948 00:21:03.963 23:06:36 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:03.963 23:06:36 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:03.963 23:06:36 -- target/tls.sh@31 -- # waitforlisten 3252948 /var/tmp/bdevperf.sock 00:21:03.963 23:06:36 -- common/autotest_common.sh@819 -- # '[' -z 3252948 ']' 00:21:03.963 23:06:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:03.963 23:06:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:03.963 23:06:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:03.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:03.963 23:06:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:03.963 23:06:36 -- common/autotest_common.sh@10 -- # set +x 00:21:03.963 [2024-07-24 23:06:36.325020] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:21:03.963 [2024-07-24 23:06:36.325073] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3252948 ] 00:21:03.963 EAL: No free 2048 kB hugepages reported on node 1 00:21:04.221 [2024-07-24 23:06:36.395279] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:04.221 [2024-07-24 23:06:36.430742] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:04.788 23:06:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:04.788 23:06:37 -- common/autotest_common.sh@852 -- # return 0 00:21:04.788 23:06:37 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:21:05.047 [2024-07-24 23:06:37.259742] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:05.047 [2024-07-24 23:06:37.269905] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:05.047 [2024-07-24 23:06:37.269931] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:05.047 [2024-07-24 23:06:37.269958] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:05.047 [2024-07-24 23:06:37.271105] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19787a0 (107): Transport endpoint is not connected 00:21:05.047 [2024-07-24 
23:06:37.272098] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19787a0 (9): Bad file descriptor 00:21:05.047 [2024-07-24 23:06:37.273099] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:05.047 [2024-07-24 23:06:37.273112] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:05.047 [2024-07-24 23:06:37.273122] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:05.047 request: 00:21:05.047 { 00:21:05.047 "name": "TLSTEST", 00:21:05.047 "trtype": "tcp", 00:21:05.047 "traddr": "10.0.0.2", 00:21:05.047 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:05.047 "adrfam": "ipv4", 00:21:05.047 "trsvcid": "4420", 00:21:05.047 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:05.047 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt", 00:21:05.047 "method": "bdev_nvme_attach_controller", 00:21:05.047 "req_id": 1 00:21:05.047 } 00:21:05.047 Got JSON-RPC error response 00:21:05.047 response: 00:21:05.047 { 00:21:05.047 "code": -32602, 00:21:05.047 "message": "Invalid parameters" 00:21:05.047 } 00:21:05.047 23:06:37 -- target/tls.sh@36 -- # killprocess 3252948 00:21:05.047 23:06:37 -- common/autotest_common.sh@926 -- # '[' -z 3252948 ']' 00:21:05.047 23:06:37 -- common/autotest_common.sh@930 -- # kill -0 3252948 00:21:05.047 23:06:37 -- common/autotest_common.sh@931 -- # uname 00:21:05.047 23:06:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:05.047 23:06:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3252948 00:21:05.047 23:06:37 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:21:05.047 23:06:37 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:21:05.047 23:06:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3252948' 00:21:05.047 killing process with pid 3252948 00:21:05.047 
23:06:37 -- common/autotest_common.sh@945 -- # kill 3252948 00:21:05.047 Received shutdown signal, test time was about 10.000000 seconds 00:21:05.047 00:21:05.047 Latency(us) 00:21:05.047 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:05.047 =================================================================================================================== 00:21:05.047 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:05.047 23:06:37 -- common/autotest_common.sh@950 -- # wait 3252948 00:21:05.307 23:06:37 -- target/tls.sh@37 -- # return 1 00:21:05.307 23:06:37 -- common/autotest_common.sh@643 -- # es=1 00:21:05.307 23:06:37 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:21:05.307 23:06:37 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:21:05.307 23:06:37 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:21:05.307 23:06:37 -- target/tls.sh@161 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:21:05.307 23:06:37 -- common/autotest_common.sh@640 -- # local es=0 00:21:05.307 23:06:37 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:21:05.307 23:06:37 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:21:05.307 23:06:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:05.307 23:06:37 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:21:05.307 23:06:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:05.307 23:06:37 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:21:05.307 23:06:37 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:05.307 23:06:37 -- 
target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:21:05.307 23:06:37 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:05.307 23:06:37 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:21:05.307 23:06:37 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:05.307 23:06:37 -- target/tls.sh@28 -- # bdevperf_pid=3253203 00:21:05.307 23:06:37 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:05.307 23:06:37 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:05.307 23:06:37 -- target/tls.sh@31 -- # waitforlisten 3253203 /var/tmp/bdevperf.sock 00:21:05.307 23:06:37 -- common/autotest_common.sh@819 -- # '[' -z 3253203 ']' 00:21:05.307 23:06:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:05.307 23:06:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:05.307 23:06:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:05.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:05.307 23:06:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:05.307 23:06:37 -- common/autotest_common.sh@10 -- # set +x 00:21:05.307 [2024-07-24 23:06:37.559343] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:21:05.307 [2024-07-24 23:06:37.559396] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3253203 ] 00:21:05.307 EAL: No free 2048 kB hugepages reported on node 1 00:21:05.307 [2024-07-24 23:06:37.626982] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:05.307 [2024-07-24 23:06:37.659128] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:06.244 23:06:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:06.244 23:06:38 -- common/autotest_common.sh@852 -- # return 0 00:21:06.244 23:06:38 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:21:06.244 [2024-07-24 23:06:38.504496] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:06.244 [2024-07-24 23:06:38.513029] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:06.244 [2024-07-24 23:06:38.513055] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:06.244 [2024-07-24 23:06:38.513082] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:06.244 [2024-07-24 23:06:38.513827] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16777a0 (107): Transport endpoint is not connected 00:21:06.244 [2024-07-24 
23:06:38.514820] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16777a0 (9): Bad file descriptor 00:21:06.244 [2024-07-24 23:06:38.515821] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:21:06.244 [2024-07-24 23:06:38.515833] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:06.244 [2024-07-24 23:06:38.515843] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:21:06.244 request: 00:21:06.244 { 00:21:06.244 "name": "TLSTEST", 00:21:06.244 "trtype": "tcp", 00:21:06.244 "traddr": "10.0.0.2", 00:21:06.244 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:06.244 "adrfam": "ipv4", 00:21:06.244 "trsvcid": "4420", 00:21:06.244 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:06.244 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt", 00:21:06.244 "method": "bdev_nvme_attach_controller", 00:21:06.244 "req_id": 1 00:21:06.244 } 00:21:06.244 Got JSON-RPC error response 00:21:06.244 response: 00:21:06.244 { 00:21:06.244 "code": -32602, 00:21:06.244 "message": "Invalid parameters" 00:21:06.244 } 00:21:06.244 23:06:38 -- target/tls.sh@36 -- # killprocess 3253203 00:21:06.244 23:06:38 -- common/autotest_common.sh@926 -- # '[' -z 3253203 ']' 00:21:06.244 23:06:38 -- common/autotest_common.sh@930 -- # kill -0 3253203 00:21:06.244 23:06:38 -- common/autotest_common.sh@931 -- # uname 00:21:06.244 23:06:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:06.244 23:06:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3253203 00:21:06.244 23:06:38 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:21:06.244 23:06:38 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:21:06.244 23:06:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3253203' 00:21:06.244 killing process with pid 3253203 00:21:06.244 
23:06:38 -- common/autotest_common.sh@945 -- # kill 3253203 00:21:06.244 Received shutdown signal, test time was about 10.000000 seconds 00:21:06.244 00:21:06.244 Latency(us) 00:21:06.244 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:06.244 =================================================================================================================== 00:21:06.244 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:06.244 23:06:38 -- common/autotest_common.sh@950 -- # wait 3253203 00:21:06.504 23:06:38 -- target/tls.sh@37 -- # return 1 00:21:06.504 23:06:38 -- common/autotest_common.sh@643 -- # es=1 00:21:06.504 23:06:38 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:21:06.504 23:06:38 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:21:06.504 23:06:38 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:21:06.504 23:06:38 -- target/tls.sh@164 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:06.504 23:06:38 -- common/autotest_common.sh@640 -- # local es=0 00:21:06.504 23:06:38 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:06.504 23:06:38 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:21:06.504 23:06:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:06.504 23:06:38 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:21:06.504 23:06:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:06.504 23:06:38 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:06.504 23:06:38 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:06.504 23:06:38 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:06.504 23:06:38 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:06.504 23:06:38 -- target/tls.sh@23 -- # psk= 00:21:06.504 23:06:38 -- 
target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:06.504 23:06:38 -- target/tls.sh@28 -- # bdevperf_pid=3253474 00:21:06.504 23:06:38 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:06.504 23:06:38 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:06.504 23:06:38 -- target/tls.sh@31 -- # waitforlisten 3253474 /var/tmp/bdevperf.sock 00:21:06.504 23:06:38 -- common/autotest_common.sh@819 -- # '[' -z 3253474 ']' 00:21:06.504 23:06:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:06.504 23:06:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:06.504 23:06:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:06.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:06.504 23:06:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:06.504 23:06:38 -- common/autotest_common.sh@10 -- # set +x 00:21:06.504 [2024-07-24 23:06:38.802085] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:21:06.504 [2024-07-24 23:06:38.802137] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3253474 ] 00:21:06.504 EAL: No free 2048 kB hugepages reported on node 1 00:21:06.504 [2024-07-24 23:06:38.870352] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:06.504 [2024-07-24 23:06:38.902238] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:07.443 23:06:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:07.443 23:06:39 -- common/autotest_common.sh@852 -- # return 0 00:21:07.443 23:06:39 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:07.443 [2024-07-24 23:06:39.746186] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:07.443 [2024-07-24 23:06:39.747463] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb1ef30 (9): Bad file descriptor 00:21:07.443 [2024-07-24 23:06:39.748463] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:07.443 [2024-07-24 23:06:39.748477] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:07.443 [2024-07-24 23:06:39.748488] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:21:07.443 request: 00:21:07.443 { 00:21:07.443 "name": "TLSTEST", 00:21:07.443 "trtype": "tcp", 00:21:07.443 "traddr": "10.0.0.2", 00:21:07.443 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:07.443 "adrfam": "ipv4", 00:21:07.443 "trsvcid": "4420", 00:21:07.443 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:07.443 "method": "bdev_nvme_attach_controller", 00:21:07.443 "req_id": 1 00:21:07.443 } 00:21:07.443 Got JSON-RPC error response 00:21:07.443 response: 00:21:07.443 { 00:21:07.443 "code": -32602, 00:21:07.443 "message": "Invalid parameters" 00:21:07.443 } 00:21:07.443 23:06:39 -- target/tls.sh@36 -- # killprocess 3253474 00:21:07.443 23:06:39 -- common/autotest_common.sh@926 -- # '[' -z 3253474 ']' 00:21:07.443 23:06:39 -- common/autotest_common.sh@930 -- # kill -0 3253474 00:21:07.443 23:06:39 -- common/autotest_common.sh@931 -- # uname 00:21:07.443 23:06:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:07.443 23:06:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3253474 00:21:07.443 23:06:39 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:21:07.443 23:06:39 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:21:07.443 23:06:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3253474' 00:21:07.443 killing process with pid 3253474 00:21:07.443 23:06:39 -- common/autotest_common.sh@945 -- # kill 3253474 00:21:07.443 Received shutdown signal, test time was about 10.000000 seconds 00:21:07.443 00:21:07.443 Latency(us) 00:21:07.443 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:07.443 =================================================================================================================== 00:21:07.443 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:07.443 23:06:39 -- common/autotest_common.sh@950 -- # wait 3253474 00:21:07.702 23:06:39 -- target/tls.sh@37 -- # return 1 00:21:07.702 23:06:39 -- 
common/autotest_common.sh@643 -- # es=1 00:21:07.702 23:06:39 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:21:07.702 23:06:39 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:21:07.702 23:06:39 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:21:07.702 23:06:39 -- target/tls.sh@167 -- # killprocess 3248322 00:21:07.702 23:06:39 -- common/autotest_common.sh@926 -- # '[' -z 3248322 ']' 00:21:07.702 23:06:39 -- common/autotest_common.sh@930 -- # kill -0 3248322 00:21:07.702 23:06:39 -- common/autotest_common.sh@931 -- # uname 00:21:07.702 23:06:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:07.702 23:06:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3248322 00:21:07.702 23:06:40 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:21:07.702 23:06:40 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:21:07.702 23:06:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3248322' 00:21:07.702 killing process with pid 3248322 00:21:07.702 23:06:40 -- common/autotest_common.sh@945 -- # kill 3248322 00:21:07.702 23:06:40 -- common/autotest_common.sh@950 -- # wait 3248322 00:21:07.962 23:06:40 -- target/tls.sh@168 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 02 00:21:07.962 23:06:40 -- target/tls.sh@49 -- # local key hash crc 00:21:07.962 23:06:40 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:21:07.962 23:06:40 -- target/tls.sh@51 -- # hash=02 00:21:07.962 23:06:40 -- target/tls.sh@52 -- # tail -c8 00:21:07.962 23:06:40 -- target/tls.sh@52 -- # gzip -1 -c 00:21:07.962 23:06:40 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff0011223344556677 00:21:07.962 23:06:40 -- target/tls.sh@52 -- # head -c 4 00:21:07.962 23:06:40 -- target/tls.sh@52 -- # crc='�e�'\''' 00:21:07.962 23:06:40 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:21:07.962 23:06:40 -- target/tls.sh@54 -- # echo -n 
'00112233445566778899aabbccddeeff0011223344556677�e�'\''' 00:21:07.962 23:06:40 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:07.962 23:06:40 -- target/tls.sh@168 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:07.962 23:06:40 -- target/tls.sh@169 -- # key_long_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:07.962 23:06:40 -- target/tls.sh@170 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:07.962 23:06:40 -- target/tls.sh@171 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:07.962 23:06:40 -- target/tls.sh@172 -- # nvmfappstart -m 0x2 00:21:07.962 23:06:40 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:07.962 23:06:40 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:07.962 23:06:40 -- common/autotest_common.sh@10 -- # set +x 00:21:07.962 23:06:40 -- nvmf/common.sh@469 -- # nvmfpid=3253769 00:21:07.962 23:06:40 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:07.963 23:06:40 -- nvmf/common.sh@470 -- # waitforlisten 3253769 00:21:07.963 23:06:40 -- common/autotest_common.sh@819 -- # '[' -z 3253769 ']' 00:21:07.963 23:06:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:07.963 23:06:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:07.963 23:06:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:07.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
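The `format_interchange_psk` pipeline traced above (tls.sh@49-54) can be re-derived as a standalone sketch. The one assumption made explicit here is the gzip trailer layout: the last 8 bytes of a gzip stream are the CRC32 of the input (little-endian) followed by the input size, so `tail -c8 | head -c4` extracts the raw CRC32 bytes that get appended to the key before base64 encoding.

```shell
# Standalone re-derivation of the interchange PSK built by the trace above.
# Assumption: gzip's 8-byte trailer is CRC32 (little-endian) + input size,
# so `tail -c8 | head -c4` yields the CRC32 of the raw key bytes.
key=00112233445566778899aabbccddeeff0011223344556677
hash=02
# CRC32 of the key, as 4 raw little-endian bytes (binary-safe in a shell var)
crc=$(printf '%s' "$key" | gzip -1 -c | tail -c8 | head -c4)
# Interchange form: version:hash:base64(key || crc32):
psk="NVMeTLSkey-1:${hash}:$(printf '%s' "$key$crc" | base64):"
echo "$psk"
```

Running this reproduces the `key_long` value captured in the log, which is a useful cross-check that the pipeline really is "append CRC32, then base64".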
00:21:07.963 23:06:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:07.963 23:06:40 -- common/autotest_common.sh@10 -- # set +x 00:21:07.963 [2024-07-24 23:06:40.310566] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:21:07.963 [2024-07-24 23:06:40.310616] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:07.963 EAL: No free 2048 kB hugepages reported on node 1 00:21:07.963 [2024-07-24 23:06:40.384871] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:08.222 [2024-07-24 23:06:40.422278] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:08.222 [2024-07-24 23:06:40.422385] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:08.222 [2024-07-24 23:06:40.422397] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:08.222 [2024-07-24 23:06:40.422406] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:08.222 [2024-07-24 23:06:40.422432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:08.791 23:06:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:08.791 23:06:41 -- common/autotest_common.sh@852 -- # return 0 00:21:08.791 23:06:41 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:08.791 23:06:41 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:08.791 23:06:41 -- common/autotest_common.sh@10 -- # set +x 00:21:08.791 23:06:41 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:08.791 23:06:41 -- target/tls.sh@174 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:08.791 23:06:41 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:08.791 23:06:41 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:09.050 [2024-07-24 23:06:41.287528] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:09.051 23:06:41 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:09.051 23:06:41 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:09.310 [2024-07-24 23:06:41.600330] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:09.310 [2024-07-24 23:06:41.600532] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:09.310 23:06:41 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:09.569 malloc0 00:21:09.569 23:06:41 -- target/tls.sh@65 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:09.569 23:06:41 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:09.829 23:06:42 -- target/tls.sh@176 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:09.829 23:06:42 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:09.829 23:06:42 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:09.829 23:06:42 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:09.829 23:06:42 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt' 00:21:09.829 23:06:42 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:09.829 23:06:42 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:09.829 23:06:42 -- target/tls.sh@28 -- # bdevperf_pid=3254063 00:21:09.829 23:06:42 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:09.829 23:06:42 -- target/tls.sh@31 -- # waitforlisten 3254063 /var/tmp/bdevperf.sock 00:21:09.829 23:06:42 -- common/autotest_common.sh@819 -- # '[' -z 3254063 ']' 00:21:09.829 23:06:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:09.829 23:06:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:09.829 23:06:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:21:09.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:09.829 23:06:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:09.829 23:06:42 -- common/autotest_common.sh@10 -- # set +x 00:21:09.829 [2024-07-24 23:06:42.107763] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:21:09.829 [2024-07-24 23:06:42.107818] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3254063 ] 00:21:09.829 EAL: No free 2048 kB hugepages reported on node 1 00:21:09.829 [2024-07-24 23:06:42.174484] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:09.829 [2024-07-24 23:06:42.210827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:10.768 23:06:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:10.768 23:06:42 -- common/autotest_common.sh@852 -- # return 0 00:21:10.768 23:06:42 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:10.768 [2024-07-24 23:06:43.043896] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:10.768 TLSTESTn1 00:21:10.768 23:06:43 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:11.028 Running I/O for 10 seconds... 
00:21:21.014 00:21:21.014 Latency(us) 00:21:21.014 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:21.014 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:21.014 Verification LBA range: start 0x0 length 0x2000 00:21:21.014 TLSTESTn1 : 10.02 3811.29 14.89 0.00 0.00 33553.37 3460.30 57461.96 00:21:21.014 =================================================================================================================== 00:21:21.014 Total : 3811.29 14.89 0.00 0.00 33553.37 3460.30 57461.96 00:21:21.014 0 00:21:21.014 23:06:53 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:21.014 23:06:53 -- target/tls.sh@45 -- # killprocess 3254063 00:21:21.014 23:06:53 -- common/autotest_common.sh@926 -- # '[' -z 3254063 ']' 00:21:21.014 23:06:53 -- common/autotest_common.sh@930 -- # kill -0 3254063 00:21:21.014 23:06:53 -- common/autotest_common.sh@931 -- # uname 00:21:21.014 23:06:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:21.014 23:06:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3254063 00:21:21.014 23:06:53 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:21:21.014 23:06:53 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:21:21.014 23:06:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3254063' 00:21:21.014 killing process with pid 3254063 00:21:21.014 23:06:53 -- common/autotest_common.sh@945 -- # kill 3254063 00:21:21.014 Received shutdown signal, test time was about 10.000000 seconds 00:21:21.014 00:21:21.014 Latency(us) 00:21:21.014 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:21.014 =================================================================================================================== 00:21:21.014 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:21.014 23:06:53 -- common/autotest_common.sh@950 -- # wait 3254063 00:21:21.274 23:06:53 -- 
target/tls.sh@179 -- # chmod 0666 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:21.274 23:06:53 -- target/tls.sh@180 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:21.274 23:06:53 -- common/autotest_common.sh@640 -- # local es=0 00:21:21.274 23:06:53 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:21.274 23:06:53 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:21:21.274 23:06:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:21.274 23:06:53 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:21:21.274 23:06:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:21.274 23:06:53 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:21.274 23:06:53 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:21.274 23:06:53 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:21.274 23:06:53 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:21.274 23:06:53 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt' 00:21:21.274 23:06:53 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:21.274 23:06:53 -- target/tls.sh@28 -- # bdevperf_pid=3255941 00:21:21.274 23:06:53 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:21.274 23:06:53 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:21.274 23:06:53 -- target/tls.sh@31 
-- # waitforlisten 3255941 /var/tmp/bdevperf.sock 00:21:21.274 23:06:53 -- common/autotest_common.sh@819 -- # '[' -z 3255941 ']' 00:21:21.274 23:06:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:21.274 23:06:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:21.274 23:06:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:21.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:21.274 23:06:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:21.274 23:06:53 -- common/autotest_common.sh@10 -- # set +x 00:21:21.274 [2024-07-24 23:06:53.562878] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:21:21.274 [2024-07-24 23:06:53.562930] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3255941 ] 00:21:21.274 EAL: No free 2048 kB hugepages reported on node 1 00:21:21.274 [2024-07-24 23:06:53.629996] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:21.274 [2024-07-24 23:06:53.664253] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:22.212 23:06:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:22.212 23:06:54 -- common/autotest_common.sh@852 -- # return 0 00:21:22.212 23:06:54 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:22.212 [2024-07-24 23:06:54.497792] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support 
is considered experimental 00:21:22.212 [2024-07-24 23:06:54.497834] bdev_nvme_rpc.c: 336:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:21:22.212 request: 00:21:22.212 { 00:21:22.212 "name": "TLSTEST", 00:21:22.212 "trtype": "tcp", 00:21:22.212 "traddr": "10.0.0.2", 00:21:22.212 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:22.212 "adrfam": "ipv4", 00:21:22.212 "trsvcid": "4420", 00:21:22.212 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:22.212 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:21:22.212 "method": "bdev_nvme_attach_controller", 00:21:22.212 "req_id": 1 00:21:22.212 } 00:21:22.212 Got JSON-RPC error response 00:21:22.212 response: 00:21:22.212 { 00:21:22.212 "code": -22, 00:21:22.212 "message": "Could not retrieve PSK from file: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:21:22.212 } 00:21:22.212 23:06:54 -- target/tls.sh@36 -- # killprocess 3255941 00:21:22.212 23:06:54 -- common/autotest_common.sh@926 -- # '[' -z 3255941 ']' 00:21:22.212 23:06:54 -- common/autotest_common.sh@930 -- # kill -0 3255941 00:21:22.212 23:06:54 -- common/autotest_common.sh@931 -- # uname 00:21:22.212 23:06:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:22.212 23:06:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3255941 00:21:22.212 23:06:54 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:21:22.212 23:06:54 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:21:22.212 23:06:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3255941' 00:21:22.212 killing process with pid 3255941 00:21:22.212 23:06:54 -- common/autotest_common.sh@945 -- # kill 3255941 00:21:22.212 Received shutdown signal, test time was about 10.000000 seconds 00:21:22.212 00:21:22.212 Latency(us) 00:21:22.212 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:22.212 
=================================================================================================================== 00:21:22.212 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:22.212 23:06:54 -- common/autotest_common.sh@950 -- # wait 3255941 00:21:22.472 23:06:54 -- target/tls.sh@37 -- # return 1 00:21:22.472 23:06:54 -- common/autotest_common.sh@643 -- # es=1 00:21:22.472 23:06:54 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:21:22.472 23:06:54 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:21:22.472 23:06:54 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:21:22.472 23:06:54 -- target/tls.sh@183 -- # killprocess 3253769 00:21:22.472 23:06:54 -- common/autotest_common.sh@926 -- # '[' -z 3253769 ']' 00:21:22.472 23:06:54 -- common/autotest_common.sh@930 -- # kill -0 3253769 00:21:22.472 23:06:54 -- common/autotest_common.sh@931 -- # uname 00:21:22.472 23:06:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:22.472 23:06:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3253769 00:21:22.472 23:06:54 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:21:22.472 23:06:54 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:21:22.472 23:06:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3253769' 00:21:22.472 killing process with pid 3253769 00:21:22.472 23:06:54 -- common/autotest_common.sh@945 -- # kill 3253769 00:21:22.472 23:06:54 -- common/autotest_common.sh@950 -- # wait 3253769 00:21:22.732 23:06:54 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:21:22.732 23:06:54 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:22.732 23:06:54 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:22.732 23:06:54 -- common/autotest_common.sh@10 -- # set +x 00:21:22.732 23:06:54 -- nvmf/common.sh@469 -- # nvmfpid=3256223 00:21:22.732 23:06:54 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:22.732 23:06:54 -- nvmf/common.sh@470 -- # waitforlisten 3256223 00:21:22.732 23:06:54 -- common/autotest_common.sh@819 -- # '[' -z 3256223 ']' 00:21:22.732 23:06:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:22.732 23:06:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:22.732 23:06:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:22.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:22.732 23:06:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:22.732 23:06:54 -- common/autotest_common.sh@10 -- # set +x 00:21:22.732 [2024-07-24 23:06:55.022107] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:21:22.732 [2024-07-24 23:06:55.022159] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:22.732 EAL: No free 2048 kB hugepages reported on node 1 00:21:22.732 [2024-07-24 23:06:55.095953] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:22.732 [2024-07-24 23:06:55.127472] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:22.732 [2024-07-24 23:06:55.127580] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:22.732 [2024-07-24 23:06:55.127589] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:22.732 [2024-07-24 23:06:55.127598] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
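The failed pass above hinges on file modes: tls.sh@179 does `chmod 0666` on the PSK file, after which `bdev_nvme_attach_controller` fails with "Incorrect permissions for PSK file", and tls.sh@190 restores `0600` before the next pass succeeds. A minimal sketch of that gate, under the assumption (not confirmed by the log) that SPDK rejects any PSK file with group or other permission bits set:

```shell
# Sketch of the PSK-file permission gate the 0666/0600 passes exercise.
# Assumption: any group/other bits (mask 077) make the file "too permissive".
psk_file=$(mktemp)

chmod 0666 "$psk_file"
mode=$(stat -c '%a' "$psk_file")
if [ $(( 0$mode & 077 )) -ne 0 ]; then
    verdict=rejected          # group/other can read or write the PSK
else
    verdict=accepted
fi

chmod 0600 "$psk_file"
mode=$(stat -c '%a' "$psk_file")
[ $(( 0$mode & 077 )) -eq 0 ] && verdict2=accepted || verdict2=rejected

rm -f "$psk_file"
echo "$verdict $verdict2"
```

This mirrors why the same `nvmf_subsystem_add_host --psk` call returns -22/-32603 in one pass and succeeds in the next: only the file mode changed between them.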
00:21:22.732 [2024-07-24 23:06:55.127617] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:23.670 23:06:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:23.670 23:06:55 -- common/autotest_common.sh@852 -- # return 0 00:21:23.670 23:06:55 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:23.670 23:06:55 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:23.670 23:06:55 -- common/autotest_common.sh@10 -- # set +x 00:21:23.670 23:06:55 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:23.670 23:06:55 -- target/tls.sh@186 -- # NOT setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:23.670 23:06:55 -- common/autotest_common.sh@640 -- # local es=0 00:21:23.670 23:06:55 -- common/autotest_common.sh@642 -- # valid_exec_arg setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:23.670 23:06:55 -- common/autotest_common.sh@628 -- # local arg=setup_nvmf_tgt 00:21:23.670 23:06:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:23.670 23:06:55 -- common/autotest_common.sh@632 -- # type -t setup_nvmf_tgt 00:21:23.670 23:06:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:23.670 23:06:55 -- common/autotest_common.sh@643 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:23.670 23:06:55 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:23.670 23:06:55 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:23.670 [2024-07-24 23:06:56.009640] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:23.670 23:06:56 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:23.929 23:06:56 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:23.929 [2024-07-24 23:06:56.322431] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:23.929 [2024-07-24 23:06:56.322618] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:23.929 23:06:56 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:24.188 malloc0 00:21:24.188 23:06:56 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:24.448 23:06:56 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:24.448 [2024-07-24 23:06:56.783682] tcp.c:3549:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:21:24.448 [2024-07-24 23:06:56.783704] tcp.c:3618:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:21:24.448 [2024-07-24 23:06:56.783723] subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:21:24.448 request: 00:21:24.448 { 00:21:24.448 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:24.448 "host": "nqn.2016-06.io.spdk:host1", 00:21:24.448 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:21:24.448 "method": "nvmf_subsystem_add_host", 00:21:24.448 "req_id": 1 00:21:24.448 } 00:21:24.448 Got JSON-RPC error response 00:21:24.448 response: 00:21:24.448 { 00:21:24.448 "code": -32603, 00:21:24.448 "message": "Internal error" 
00:21:24.448 } 00:21:24.448 23:06:56 -- common/autotest_common.sh@643 -- # es=1 00:21:24.448 23:06:56 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:21:24.448 23:06:56 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:21:24.448 23:06:56 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:21:24.448 23:06:56 -- target/tls.sh@189 -- # killprocess 3256223 00:21:24.448 23:06:56 -- common/autotest_common.sh@926 -- # '[' -z 3256223 ']' 00:21:24.448 23:06:56 -- common/autotest_common.sh@930 -- # kill -0 3256223 00:21:24.448 23:06:56 -- common/autotest_common.sh@931 -- # uname 00:21:24.448 23:06:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:24.448 23:06:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3256223 00:21:24.448 23:06:56 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:21:24.448 23:06:56 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:21:24.448 23:06:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3256223' 00:21:24.448 killing process with pid 3256223 00:21:24.448 23:06:56 -- common/autotest_common.sh@945 -- # kill 3256223 00:21:24.448 23:06:56 -- common/autotest_common.sh@950 -- # wait 3256223 00:21:24.708 23:06:57 -- target/tls.sh@190 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:24.708 23:06:57 -- target/tls.sh@193 -- # nvmfappstart -m 0x2 00:21:24.708 23:06:57 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:24.708 23:06:57 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:24.708 23:06:57 -- common/autotest_common.sh@10 -- # set +x 00:21:24.708 23:06:57 -- nvmf/common.sh@469 -- # nvmfpid=3256600 00:21:24.708 23:06:57 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:24.708 23:06:57 -- nvmf/common.sh@470 -- # waitforlisten 3256600 00:21:24.708 23:06:57 -- 
common/autotest_common.sh@819 -- # '[' -z 3256600 ']' 00:21:24.708 23:06:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:24.708 23:06:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:24.708 23:06:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:24.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:24.708 23:06:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:24.708 23:06:57 -- common/autotest_common.sh@10 -- # set +x 00:21:24.708 [2024-07-24 23:06:57.101739] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:21:24.708 [2024-07-24 23:06:57.101792] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:24.967 EAL: No free 2048 kB hugepages reported on node 1 00:21:24.967 [2024-07-24 23:06:57.178539] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:24.967 [2024-07-24 23:06:57.213098] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:24.968 [2024-07-24 23:06:57.213208] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:24.968 [2024-07-24 23:06:57.213218] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:24.968 [2024-07-24 23:06:57.213226] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
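The `es=1` / `(( es > 128 ))` / `(( !es == 0 ))` sequence that follows each `NOT run_bdevperf ...` and `NOT setup_nvmf_tgt ...` call above is the expected-failure helper from autotest_common.sh. A simplified stand-in (an assumption, not SPDK's exact helper, which also screens signal deaths and other cases) shows the core inversion: the wrapper succeeds exactly when the wrapped command fails.

```shell
# Minimal stand-in for the NOT helper traced above: capture the wrapped
# command's exit status, then succeed only if that status was non-zero.
NOT() {
    local es=0
    "$@" || es=$?
    (( es > 128 )) && echo "died by signal $(( es - 128 ))" >&2
    (( !es == 0 ))   # arithmetic truth (exit 0) iff the command failed
}

NOT false && r1=ok || r1=bad   # false fails -> NOT succeeds
NOT true  && r2=bad || r2=ok   # true succeeds -> NOT fails
```

This is why the log shows `return 1` from `run_bdevperf` being treated as the passing outcome: the test asserts that attaching with a bad or unreadable PSK must fail.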
00:21:24.968 [2024-07-24 23:06:57.213251] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:25.536 23:06:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:25.536 23:06:57 -- common/autotest_common.sh@852 -- # return 0 00:21:25.536 23:06:57 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:25.536 23:06:57 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:25.536 23:06:57 -- common/autotest_common.sh@10 -- # set +x 00:21:25.536 23:06:57 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:25.536 23:06:57 -- target/tls.sh@194 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:25.536 23:06:57 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:25.536 23:06:57 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:25.795 [2024-07-24 23:06:58.074733] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:25.795 23:06:58 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:26.055 23:06:58 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:26.055 [2024-07-24 23:06:58.399569] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:26.055 [2024-07-24 23:06:58.399788] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:26.055 23:06:58 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:26.314 malloc0 00:21:26.314 23:06:58 -- target/tls.sh@65 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:26.314 23:06:58 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:26.574 23:06:58 -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:26.574 23:06:58 -- target/tls.sh@197 -- # bdevperf_pid=3257003 00:21:26.574 23:06:58 -- target/tls.sh@199 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:26.574 23:06:58 -- target/tls.sh@200 -- # waitforlisten 3257003 /var/tmp/bdevperf.sock 00:21:26.574 23:06:58 -- common/autotest_common.sh@819 -- # '[' -z 3257003 ']' 00:21:26.574 23:06:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:26.574 23:06:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:26.574 23:06:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:26.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:26.574 23:06:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:26.574 23:06:58 -- common/autotest_common.sh@10 -- # set +x 00:21:26.574 [2024-07-24 23:06:58.916760] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:21:26.574 [2024-07-24 23:06:58.916812] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3257003 ] 00:21:26.574 EAL: No free 2048 kB hugepages reported on node 1 00:21:26.574 [2024-07-24 23:06:58.983471] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:26.834 [2024-07-24 23:06:59.020359] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:27.403 23:06:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:27.403 23:06:59 -- common/autotest_common.sh@852 -- # return 0 00:21:27.403 23:06:59 -- target/tls.sh@201 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:27.662 [2024-07-24 23:06:59.890527] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:27.662 TLSTESTn1 00:21:27.662 23:06:59 -- target/tls.sh@205 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:21:27.922 23:07:00 -- target/tls.sh@205 -- # tgtconf='{ 00:21:27.922 "subsystems": [ 00:21:27.922 { 00:21:27.922 "subsystem": "iobuf", 00:21:27.922 "config": [ 00:21:27.922 { 00:21:27.922 "method": "iobuf_set_options", 00:21:27.922 "params": { 00:21:27.922 "small_pool_count": 8192, 00:21:27.922 "large_pool_count": 1024, 00:21:27.922 "small_bufsize": 8192, 00:21:27.922 "large_bufsize": 135168 00:21:27.922 } 00:21:27.922 } 00:21:27.922 ] 00:21:27.922 }, 00:21:27.922 { 00:21:27.922 "subsystem": "sock", 00:21:27.922 "config": [ 00:21:27.922 { 00:21:27.922 "method": "sock_impl_set_options", 00:21:27.922 "params": { 00:21:27.922 "impl_name": "posix", 
00:21:27.922 "recv_buf_size": 2097152, 00:21:27.922 "send_buf_size": 2097152, 00:21:27.922 "enable_recv_pipe": true, 00:21:27.922 "enable_quickack": false, 00:21:27.922 "enable_placement_id": 0, 00:21:27.922 "enable_zerocopy_send_server": true, 00:21:27.922 "enable_zerocopy_send_client": false, 00:21:27.922 "zerocopy_threshold": 0, 00:21:27.922 "tls_version": 0, 00:21:27.922 "enable_ktls": false 00:21:27.922 } 00:21:27.922 }, 00:21:27.922 { 00:21:27.922 "method": "sock_impl_set_options", 00:21:27.922 "params": { 00:21:27.922 "impl_name": "ssl", 00:21:27.922 "recv_buf_size": 4096, 00:21:27.922 "send_buf_size": 4096, 00:21:27.922 "enable_recv_pipe": true, 00:21:27.922 "enable_quickack": false, 00:21:27.922 "enable_placement_id": 0, 00:21:27.922 "enable_zerocopy_send_server": true, 00:21:27.922 "enable_zerocopy_send_client": false, 00:21:27.922 "zerocopy_threshold": 0, 00:21:27.922 "tls_version": 0, 00:21:27.922 "enable_ktls": false 00:21:27.922 } 00:21:27.922 } 00:21:27.922 ] 00:21:27.922 }, 00:21:27.922 { 00:21:27.922 "subsystem": "vmd", 00:21:27.922 "config": [] 00:21:27.922 }, 00:21:27.922 { 00:21:27.922 "subsystem": "accel", 00:21:27.922 "config": [ 00:21:27.922 { 00:21:27.922 "method": "accel_set_options", 00:21:27.922 "params": { 00:21:27.922 "small_cache_size": 128, 00:21:27.922 "large_cache_size": 16, 00:21:27.922 "task_count": 2048, 00:21:27.922 "sequence_count": 2048, 00:21:27.922 "buf_count": 2048 00:21:27.922 } 00:21:27.922 } 00:21:27.922 ] 00:21:27.922 }, 00:21:27.922 { 00:21:27.922 "subsystem": "bdev", 00:21:27.922 "config": [ 00:21:27.922 { 00:21:27.922 "method": "bdev_set_options", 00:21:27.922 "params": { 00:21:27.922 "bdev_io_pool_size": 65535, 00:21:27.922 "bdev_io_cache_size": 256, 00:21:27.922 "bdev_auto_examine": true, 00:21:27.922 "iobuf_small_cache_size": 128, 00:21:27.922 "iobuf_large_cache_size": 16 00:21:27.922 } 00:21:27.922 }, 00:21:27.922 { 00:21:27.922 "method": "bdev_raid_set_options", 00:21:27.922 "params": { 00:21:27.922 
"process_window_size_kb": 1024 00:21:27.922 } 00:21:27.922 }, 00:21:27.922 { 00:21:27.922 "method": "bdev_iscsi_set_options", 00:21:27.922 "params": { 00:21:27.922 "timeout_sec": 30 00:21:27.922 } 00:21:27.922 }, 00:21:27.922 { 00:21:27.922 "method": "bdev_nvme_set_options", 00:21:27.922 "params": { 00:21:27.922 "action_on_timeout": "none", 00:21:27.922 "timeout_us": 0, 00:21:27.922 "timeout_admin_us": 0, 00:21:27.922 "keep_alive_timeout_ms": 10000, 00:21:27.922 "transport_retry_count": 4, 00:21:27.922 "arbitration_burst": 0, 00:21:27.922 "low_priority_weight": 0, 00:21:27.922 "medium_priority_weight": 0, 00:21:27.922 "high_priority_weight": 0, 00:21:27.922 "nvme_adminq_poll_period_us": 10000, 00:21:27.922 "nvme_ioq_poll_period_us": 0, 00:21:27.922 "io_queue_requests": 0, 00:21:27.922 "delay_cmd_submit": true, 00:21:27.922 "bdev_retry_count": 3, 00:21:27.922 "transport_ack_timeout": 0, 00:21:27.922 "ctrlr_loss_timeout_sec": 0, 00:21:27.922 "reconnect_delay_sec": 0, 00:21:27.922 "fast_io_fail_timeout_sec": 0, 00:21:27.922 "generate_uuids": false, 00:21:27.922 "transport_tos": 0, 00:21:27.922 "io_path_stat": false, 00:21:27.922 "allow_accel_sequence": false 00:21:27.922 } 00:21:27.922 }, 00:21:27.922 { 00:21:27.922 "method": "bdev_nvme_set_hotplug", 00:21:27.922 "params": { 00:21:27.922 "period_us": 100000, 00:21:27.922 "enable": false 00:21:27.922 } 00:21:27.922 }, 00:21:27.922 { 00:21:27.922 "method": "bdev_malloc_create", 00:21:27.922 "params": { 00:21:27.922 "name": "malloc0", 00:21:27.922 "num_blocks": 8192, 00:21:27.922 "block_size": 4096, 00:21:27.922 "physical_block_size": 4096, 00:21:27.922 "uuid": "827aa3cd-2294-4239-b400-31abdc499328", 00:21:27.922 "optimal_io_boundary": 0 00:21:27.922 } 00:21:27.922 }, 00:21:27.922 { 00:21:27.922 "method": "bdev_wait_for_examine" 00:21:27.922 } 00:21:27.922 ] 00:21:27.922 }, 00:21:27.922 { 00:21:27.922 "subsystem": "nbd", 00:21:27.922 "config": [] 00:21:27.922 }, 00:21:27.922 { 00:21:27.922 "subsystem": "scheduler", 
00:21:27.922 "config": [ 00:21:27.922 { 00:21:27.922 "method": "framework_set_scheduler", 00:21:27.922 "params": { 00:21:27.922 "name": "static" 00:21:27.922 } 00:21:27.922 } 00:21:27.922 ] 00:21:27.922 }, 00:21:27.922 { 00:21:27.922 "subsystem": "nvmf", 00:21:27.922 "config": [ 00:21:27.922 { 00:21:27.922 "method": "nvmf_set_config", 00:21:27.922 "params": { 00:21:27.922 "discovery_filter": "match_any", 00:21:27.922 "admin_cmd_passthru": { 00:21:27.922 "identify_ctrlr": false 00:21:27.922 } 00:21:27.922 } 00:21:27.922 }, 00:21:27.922 { 00:21:27.922 "method": "nvmf_set_max_subsystems", 00:21:27.922 "params": { 00:21:27.922 "max_subsystems": 1024 00:21:27.922 } 00:21:27.922 }, 00:21:27.922 { 00:21:27.922 "method": "nvmf_set_crdt", 00:21:27.922 "params": { 00:21:27.922 "crdt1": 0, 00:21:27.922 "crdt2": 0, 00:21:27.922 "crdt3": 0 00:21:27.922 } 00:21:27.922 }, 00:21:27.922 { 00:21:27.922 "method": "nvmf_create_transport", 00:21:27.922 "params": { 00:21:27.922 "trtype": "TCP", 00:21:27.922 "max_queue_depth": 128, 00:21:27.922 "max_io_qpairs_per_ctrlr": 127, 00:21:27.922 "in_capsule_data_size": 4096, 00:21:27.922 "max_io_size": 131072, 00:21:27.922 "io_unit_size": 131072, 00:21:27.922 "max_aq_depth": 128, 00:21:27.922 "num_shared_buffers": 511, 00:21:27.922 "buf_cache_size": 4294967295, 00:21:27.922 "dif_insert_or_strip": false, 00:21:27.922 "zcopy": false, 00:21:27.922 "c2h_success": false, 00:21:27.922 "sock_priority": 0, 00:21:27.922 "abort_timeout_sec": 1 00:21:27.922 } 00:21:27.922 }, 00:21:27.923 { 00:21:27.923 "method": "nvmf_create_subsystem", 00:21:27.923 "params": { 00:21:27.923 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:27.923 "allow_any_host": false, 00:21:27.923 "serial_number": "SPDK00000000000001", 00:21:27.923 "model_number": "SPDK bdev Controller", 00:21:27.923 "max_namespaces": 10, 00:21:27.923 "min_cntlid": 1, 00:21:27.923 "max_cntlid": 65519, 00:21:27.923 "ana_reporting": false 00:21:27.923 } 00:21:27.923 }, 00:21:27.923 { 00:21:27.923 "method": 
"nvmf_subsystem_add_host", 00:21:27.923 "params": { 00:21:27.923 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:27.923 "host": "nqn.2016-06.io.spdk:host1", 00:21:27.923 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:21:27.923 } 00:21:27.923 }, 00:21:27.923 { 00:21:27.923 "method": "nvmf_subsystem_add_ns", 00:21:27.923 "params": { 00:21:27.923 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:27.923 "namespace": { 00:21:27.923 "nsid": 1, 00:21:27.923 "bdev_name": "malloc0", 00:21:27.923 "nguid": "827AA3CD22944239B40031ABDC499328", 00:21:27.923 "uuid": "827aa3cd-2294-4239-b400-31abdc499328" 00:21:27.923 } 00:21:27.923 } 00:21:27.923 }, 00:21:27.923 { 00:21:27.923 "method": "nvmf_subsystem_add_listener", 00:21:27.923 "params": { 00:21:27.923 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:27.923 "listen_address": { 00:21:27.923 "trtype": "TCP", 00:21:27.923 "adrfam": "IPv4", 00:21:27.923 "traddr": "10.0.0.2", 00:21:27.923 "trsvcid": "4420" 00:21:27.923 }, 00:21:27.923 "secure_channel": true 00:21:27.923 } 00:21:27.923 } 00:21:27.923 ] 00:21:27.923 } 00:21:27.923 ] 00:21:27.923 }' 00:21:27.923 23:07:00 -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:28.183 23:07:00 -- target/tls.sh@206 -- # bdevperfconf='{ 00:21:28.183 "subsystems": [ 00:21:28.183 { 00:21:28.183 "subsystem": "iobuf", 00:21:28.183 "config": [ 00:21:28.183 { 00:21:28.183 "method": "iobuf_set_options", 00:21:28.183 "params": { 00:21:28.183 "small_pool_count": 8192, 00:21:28.183 "large_pool_count": 1024, 00:21:28.183 "small_bufsize": 8192, 00:21:28.183 "large_bufsize": 135168 00:21:28.183 } 00:21:28.183 } 00:21:28.183 ] 00:21:28.183 }, 00:21:28.183 { 00:21:28.183 "subsystem": "sock", 00:21:28.183 "config": [ 00:21:28.183 { 00:21:28.183 "method": "sock_impl_set_options", 00:21:28.183 "params": { 00:21:28.183 "impl_name": "posix", 00:21:28.183 "recv_buf_size": 2097152, 00:21:28.183 
"send_buf_size": 2097152, 00:21:28.183 "enable_recv_pipe": true, 00:21:28.183 "enable_quickack": false, 00:21:28.183 "enable_placement_id": 0, 00:21:28.183 "enable_zerocopy_send_server": true, 00:21:28.183 "enable_zerocopy_send_client": false, 00:21:28.183 "zerocopy_threshold": 0, 00:21:28.183 "tls_version": 0, 00:21:28.183 "enable_ktls": false 00:21:28.183 } 00:21:28.183 }, 00:21:28.183 { 00:21:28.183 "method": "sock_impl_set_options", 00:21:28.183 "params": { 00:21:28.183 "impl_name": "ssl", 00:21:28.183 "recv_buf_size": 4096, 00:21:28.183 "send_buf_size": 4096, 00:21:28.183 "enable_recv_pipe": true, 00:21:28.183 "enable_quickack": false, 00:21:28.183 "enable_placement_id": 0, 00:21:28.183 "enable_zerocopy_send_server": true, 00:21:28.183 "enable_zerocopy_send_client": false, 00:21:28.183 "zerocopy_threshold": 0, 00:21:28.183 "tls_version": 0, 00:21:28.183 "enable_ktls": false 00:21:28.183 } 00:21:28.183 } 00:21:28.183 ] 00:21:28.183 }, 00:21:28.183 { 00:21:28.183 "subsystem": "vmd", 00:21:28.183 "config": [] 00:21:28.183 }, 00:21:28.183 { 00:21:28.183 "subsystem": "accel", 00:21:28.183 "config": [ 00:21:28.183 { 00:21:28.183 "method": "accel_set_options", 00:21:28.183 "params": { 00:21:28.183 "small_cache_size": 128, 00:21:28.183 "large_cache_size": 16, 00:21:28.183 "task_count": 2048, 00:21:28.183 "sequence_count": 2048, 00:21:28.183 "buf_count": 2048 00:21:28.183 } 00:21:28.183 } 00:21:28.183 ] 00:21:28.183 }, 00:21:28.183 { 00:21:28.183 "subsystem": "bdev", 00:21:28.183 "config": [ 00:21:28.183 { 00:21:28.183 "method": "bdev_set_options", 00:21:28.183 "params": { 00:21:28.183 "bdev_io_pool_size": 65535, 00:21:28.183 "bdev_io_cache_size": 256, 00:21:28.183 "bdev_auto_examine": true, 00:21:28.183 "iobuf_small_cache_size": 128, 00:21:28.183 "iobuf_large_cache_size": 16 00:21:28.183 } 00:21:28.183 }, 00:21:28.183 { 00:21:28.183 "method": "bdev_raid_set_options", 00:21:28.183 "params": { 00:21:28.183 "process_window_size_kb": 1024 00:21:28.183 } 00:21:28.183 }, 
00:21:28.183 { 00:21:28.183 "method": "bdev_iscsi_set_options", 00:21:28.183 "params": { 00:21:28.183 "timeout_sec": 30 00:21:28.183 } 00:21:28.183 }, 00:21:28.183 { 00:21:28.183 "method": "bdev_nvme_set_options", 00:21:28.183 "params": { 00:21:28.183 "action_on_timeout": "none", 00:21:28.183 "timeout_us": 0, 00:21:28.183 "timeout_admin_us": 0, 00:21:28.183 "keep_alive_timeout_ms": 10000, 00:21:28.183 "transport_retry_count": 4, 00:21:28.183 "arbitration_burst": 0, 00:21:28.183 "low_priority_weight": 0, 00:21:28.183 "medium_priority_weight": 0, 00:21:28.183 "high_priority_weight": 0, 00:21:28.183 "nvme_adminq_poll_period_us": 10000, 00:21:28.183 "nvme_ioq_poll_period_us": 0, 00:21:28.183 "io_queue_requests": 512, 00:21:28.183 "delay_cmd_submit": true, 00:21:28.183 "bdev_retry_count": 3, 00:21:28.183 "transport_ack_timeout": 0, 00:21:28.183 "ctrlr_loss_timeout_sec": 0, 00:21:28.183 "reconnect_delay_sec": 0, 00:21:28.183 "fast_io_fail_timeout_sec": 0, 00:21:28.183 "generate_uuids": false, 00:21:28.183 "transport_tos": 0, 00:21:28.183 "io_path_stat": false, 00:21:28.183 "allow_accel_sequence": false 00:21:28.183 } 00:21:28.183 }, 00:21:28.183 { 00:21:28.183 "method": "bdev_nvme_attach_controller", 00:21:28.183 "params": { 00:21:28.183 "name": "TLSTEST", 00:21:28.183 "trtype": "TCP", 00:21:28.183 "adrfam": "IPv4", 00:21:28.183 "traddr": "10.0.0.2", 00:21:28.183 "trsvcid": "4420", 00:21:28.183 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:28.183 "prchk_reftag": false, 00:21:28.183 "prchk_guard": false, 00:21:28.183 "ctrlr_loss_timeout_sec": 0, 00:21:28.183 "reconnect_delay_sec": 0, 00:21:28.183 "fast_io_fail_timeout_sec": 0, 00:21:28.183 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:21:28.183 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:28.183 "hdgst": false, 00:21:28.183 "ddgst": false 00:21:28.183 } 00:21:28.183 }, 00:21:28.183 { 00:21:28.183 "method": "bdev_nvme_set_hotplug", 00:21:28.183 "params": { 00:21:28.183 
"period_us": 100000, 00:21:28.183 "enable": false 00:21:28.183 } 00:21:28.183 }, 00:21:28.183 { 00:21:28.183 "method": "bdev_wait_for_examine" 00:21:28.183 } 00:21:28.183 ] 00:21:28.183 }, 00:21:28.183 { 00:21:28.183 "subsystem": "nbd", 00:21:28.183 "config": [] 00:21:28.183 } 00:21:28.183 ] 00:21:28.183 }' 00:21:28.183 23:07:00 -- target/tls.sh@208 -- # killprocess 3257003 00:21:28.183 23:07:00 -- common/autotest_common.sh@926 -- # '[' -z 3257003 ']' 00:21:28.183 23:07:00 -- common/autotest_common.sh@930 -- # kill -0 3257003 00:21:28.183 23:07:00 -- common/autotest_common.sh@931 -- # uname 00:21:28.183 23:07:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:28.183 23:07:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3257003 00:21:28.183 23:07:00 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:21:28.183 23:07:00 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:21:28.183 23:07:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3257003' 00:21:28.183 killing process with pid 3257003 00:21:28.183 23:07:00 -- common/autotest_common.sh@945 -- # kill 3257003 00:21:28.183 Received shutdown signal, test time was about 10.000000 seconds 00:21:28.183 00:21:28.183 Latency(us) 00:21:28.183 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:28.183 =================================================================================================================== 00:21:28.183 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:28.183 23:07:00 -- common/autotest_common.sh@950 -- # wait 3257003 00:21:28.443 23:07:00 -- target/tls.sh@209 -- # killprocess 3256600 00:21:28.443 23:07:00 -- common/autotest_common.sh@926 -- # '[' -z 3256600 ']' 00:21:28.443 23:07:00 -- common/autotest_common.sh@930 -- # kill -0 3256600 00:21:28.443 23:07:00 -- common/autotest_common.sh@931 -- # uname 00:21:28.443 23:07:00 -- common/autotest_common.sh@931 -- # '[' Linux = 
Linux ']' 00:21:28.443 23:07:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3256600 00:21:28.443 23:07:00 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:21:28.443 23:07:00 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:21:28.443 23:07:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3256600' 00:21:28.443 killing process with pid 3256600 00:21:28.443 23:07:00 -- common/autotest_common.sh@945 -- # kill 3256600 00:21:28.443 23:07:00 -- common/autotest_common.sh@950 -- # wait 3256600 00:21:28.702 23:07:00 -- target/tls.sh@212 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:21:28.702 23:07:00 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:28.702 23:07:00 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:28.702 23:07:00 -- target/tls.sh@212 -- # echo '{ 00:21:28.702 "subsystems": [ 00:21:28.702 { 00:21:28.702 "subsystem": "iobuf", 00:21:28.702 "config": [ 00:21:28.702 { 00:21:28.702 "method": "iobuf_set_options", 00:21:28.702 "params": { 00:21:28.702 "small_pool_count": 8192, 00:21:28.702 "large_pool_count": 1024, 00:21:28.702 "small_bufsize": 8192, 00:21:28.702 "large_bufsize": 135168 00:21:28.702 } 00:21:28.702 } 00:21:28.702 ] 00:21:28.702 }, 00:21:28.702 { 00:21:28.702 "subsystem": "sock", 00:21:28.702 "config": [ 00:21:28.702 { 00:21:28.702 "method": "sock_impl_set_options", 00:21:28.702 "params": { 00:21:28.702 "impl_name": "posix", 00:21:28.702 "recv_buf_size": 2097152, 00:21:28.702 "send_buf_size": 2097152, 00:21:28.702 "enable_recv_pipe": true, 00:21:28.702 "enable_quickack": false, 00:21:28.702 "enable_placement_id": 0, 00:21:28.702 "enable_zerocopy_send_server": true, 00:21:28.702 "enable_zerocopy_send_client": false, 00:21:28.702 "zerocopy_threshold": 0, 00:21:28.702 "tls_version": 0, 00:21:28.702 "enable_ktls": false 00:21:28.702 } 00:21:28.702 }, 00:21:28.702 { 00:21:28.702 "method": "sock_impl_set_options", 00:21:28.702 "params": { 00:21:28.702 "impl_name": 
"ssl", 00:21:28.702 "recv_buf_size": 4096, 00:21:28.702 "send_buf_size": 4096, 00:21:28.702 "enable_recv_pipe": true, 00:21:28.702 "enable_quickack": false, 00:21:28.702 "enable_placement_id": 0, 00:21:28.702 "enable_zerocopy_send_server": true, 00:21:28.702 "enable_zerocopy_send_client": false, 00:21:28.702 "zerocopy_threshold": 0, 00:21:28.702 "tls_version": 0, 00:21:28.702 "enable_ktls": false 00:21:28.702 } 00:21:28.702 } 00:21:28.702 ] 00:21:28.702 }, 00:21:28.702 { 00:21:28.702 "subsystem": "vmd", 00:21:28.702 "config": [] 00:21:28.702 }, 00:21:28.702 { 00:21:28.702 "subsystem": "accel", 00:21:28.702 "config": [ 00:21:28.702 { 00:21:28.702 "method": "accel_set_options", 00:21:28.702 "params": { 00:21:28.702 "small_cache_size": 128, 00:21:28.702 "large_cache_size": 16, 00:21:28.702 "task_count": 2048, 00:21:28.702 "sequence_count": 2048, 00:21:28.702 "buf_count": 2048 00:21:28.702 } 00:21:28.702 } 00:21:28.702 ] 00:21:28.702 }, 00:21:28.702 { 00:21:28.702 "subsystem": "bdev", 00:21:28.702 "config": [ 00:21:28.702 { 00:21:28.702 "method": "bdev_set_options", 00:21:28.702 "params": { 00:21:28.702 "bdev_io_pool_size": 65535, 00:21:28.702 "bdev_io_cache_size": 256, 00:21:28.702 "bdev_auto_examine": true, 00:21:28.702 "iobuf_small_cache_size": 128, 00:21:28.702 "iobuf_large_cache_size": 16 00:21:28.702 } 00:21:28.702 }, 00:21:28.702 { 00:21:28.702 "method": "bdev_raid_set_options", 00:21:28.702 "params": { 00:21:28.702 "process_window_size_kb": 1024 00:21:28.702 } 00:21:28.702 }, 00:21:28.702 { 00:21:28.702 "method": "bdev_iscsi_set_options", 00:21:28.702 "params": { 00:21:28.702 "timeout_sec": 30 00:21:28.702 } 00:21:28.702 }, 00:21:28.702 { 00:21:28.702 "method": "bdev_nvme_set_options", 00:21:28.702 "params": { 00:21:28.702 "action_on_timeout": "none", 00:21:28.702 "timeout_us": 0, 00:21:28.702 "timeout_admin_us": 0, 00:21:28.702 "keep_alive_timeout_ms": 10000, 00:21:28.702 "transport_retry_count": 4, 00:21:28.702 "arbitration_burst": 0, 00:21:28.702 
"low_priority_weight": 0, 00:21:28.702 "medium_priority_weight": 0, 00:21:28.703 "high_priority_weight": 0, 00:21:28.703 "nvme_adminq_poll_period_us": 10000, 00:21:28.703 "nvme_ioq_poll_period_us": 0, 00:21:28.703 "io_queue_requests": 0, 00:21:28.703 "delay_cmd_submit": true, 00:21:28.703 "bdev_retry_count": 3, 00:21:28.703 "transport_ack_timeout": 0, 00:21:28.703 "ctrlr_loss_timeout_sec": 0, 00:21:28.703 "reconnect_delay_sec": 0, 00:21:28.703 "fast_io_fail_timeout_sec": 0, 00:21:28.703 "generate_uuids": false, 00:21:28.703 "transport_tos": 0, 00:21:28.703 "io_path_stat": false, 00:21:28.703 "allow_accel_sequence": false 00:21:28.703 } 00:21:28.703 }, 00:21:28.703 { 00:21:28.703 "method": "bdev_nvme_set_hotplug", 00:21:28.703 "params": { 00:21:28.703 "period_us": 100000, 00:21:28.703 "enable": false 00:21:28.703 } 00:21:28.703 }, 00:21:28.703 { 00:21:28.703 "method": "bdev_malloc_create", 00:21:28.703 "params": { 00:21:28.703 "name": "malloc0", 00:21:28.703 "num_blocks": 8192, 00:21:28.703 "block_size": 4096, 00:21:28.703 "physical_block_size": 4096, 00:21:28.703 "uuid": "827aa3cd-2294-4239-b400-31abdc499328", 00:21:28.703 "optimal_io_boundary": 0 00:21:28.703 } 00:21:28.703 }, 00:21:28.703 { 00:21:28.703 "method": "bdev_wait_for_examine" 00:21:28.703 } 00:21:28.703 ] 00:21:28.703 }, 00:21:28.703 { 00:21:28.703 "subsystem": "nbd", 00:21:28.703 "config": [] 00:21:28.703 }, 00:21:28.703 { 00:21:28.703 "subsystem": "scheduler", 00:21:28.703 "config": [ 00:21:28.703 { 00:21:28.703 "method": "framework_set_scheduler", 00:21:28.703 "params": { 00:21:28.703 "name": "static" 00:21:28.703 } 00:21:28.703 } 00:21:28.703 ] 00:21:28.703 }, 00:21:28.703 { 00:21:28.703 "subsystem": "nvmf", 00:21:28.703 "config": [ 00:21:28.703 { 00:21:28.703 "method": "nvmf_set_config", 00:21:28.703 "params": { 00:21:28.703 "discovery_filter": "match_any", 00:21:28.703 "admin_cmd_passthru": { 00:21:28.703 "identify_ctrlr": false 00:21:28.703 } 00:21:28.703 } 00:21:28.703 }, 00:21:28.703 { 
00:21:28.703 "method": "nvmf_set_max_subsystems", 00:21:28.703 "params": { 00:21:28.703 "max_subsystems": 1024 00:21:28.703 } 00:21:28.703 }, 00:21:28.703 { 00:21:28.703 "method": "nvmf_set_crdt", 00:21:28.703 "params": { 00:21:28.703 "crdt1": 0, 00:21:28.703 "crdt2": 0, 00:21:28.703 "crdt3": 0 00:21:28.703 } 00:21:28.703 }, 00:21:28.703 { 00:21:28.703 "method": "nvmf_create_transport", 00:21:28.703 "params": { 00:21:28.703 "trtype": "TCP", 00:21:28.703 "max_queue_depth": 128, 00:21:28.703 "max_io_qpairs_per_ctrlr": 127, 00:21:28.703 "in_capsule_data_size": 4096, 00:21:28.703 "max_io_size": 131072, 00:21:28.703 "io_unit_size": 131072, 00:21:28.703 "max_aq_depth": 128, 00:21:28.703 "num_shared_buffers": 511, 00:21:28.703 "buf_cache_size": 4294967295, 00:21:28.703 "dif_insert_or_strip": false, 00:21:28.703 "zcopy": false, 00:21:28.703 "c2h_success": false, 00:21:28.703 "sock_priority": 0, 00:21:28.703 "abort_timeout_sec": 1 00:21:28.703 } 00:21:28.703 }, 00:21:28.703 { 00:21:28.703 "method": "nvmf_create_subsystem", 00:21:28.703 "params": { 00:21:28.703 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:28.703 "allow_any_host": false, 00:21:28.703 "serial_number": "SPDK00000000000001", 00:21:28.703 "model_number": "SPDK bdev Controller", 00:21:28.703 "max_namespaces": 10, 00:21:28.703 "min_cntlid": 1, 00:21:28.703 "max_cntlid": 65519, 00:21:28.703 "ana_reporting": false 00:21:28.703 } 00:21:28.703 }, 00:21:28.703 { 00:21:28.703 "method": "nvmf_subsystem_add_host", 00:21:28.703 "params": { 00:21:28.703 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:28.703 "host": "nqn.2016-06.io.spdk:host1", 00:21:28.703 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:21:28.703 } 00:21:28.703 }, 00:21:28.703 { 00:21:28.703 "method": "nvmf_subsystem_add_ns", 00:21:28.703 "params": { 00:21:28.703 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:28.703 "namespace": { 00:21:28.703 "nsid": 1, 00:21:28.703 "bdev_name": "malloc0", 00:21:28.703 "nguid": 
"827AA3CD22944239B40031ABDC499328", 00:21:28.703 "uuid": "827aa3cd-2294-4239-b400-31abdc499328" 00:21:28.703 } 00:21:28.703 } 00:21:28.703 }, 00:21:28.703 { 00:21:28.703 "method": "nvmf_subsystem_add_listener", 00:21:28.703 "params": { 00:21:28.703 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:28.703 "listen_address": { 00:21:28.703 "trtype": "TCP", 00:21:28.703 "adrfam": "IPv4", 00:21:28.703 "traddr": "10.0.0.2", 00:21:28.703 "trsvcid": "4420" 00:21:28.703 }, 00:21:28.703 "secure_channel": true 00:21:28.703 } 00:21:28.703 } 00:21:28.703 ] 00:21:28.703 } 00:21:28.703 ] 00:21:28.703 }' 00:21:28.703 23:07:00 -- common/autotest_common.sh@10 -- # set +x 00:21:28.703 23:07:00 -- nvmf/common.sh@469 -- # nvmfpid=3257365 00:21:28.703 23:07:00 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:21:28.703 23:07:00 -- nvmf/common.sh@470 -- # waitforlisten 3257365 00:21:28.703 23:07:00 -- common/autotest_common.sh@819 -- # '[' -z 3257365 ']' 00:21:28.703 23:07:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:28.703 23:07:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:28.703 23:07:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:28.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:28.703 23:07:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:28.703 23:07:00 -- common/autotest_common.sh@10 -- # set +x 00:21:28.703 [2024-07-24 23:07:00.975059] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:21:28.703 [2024-07-24 23:07:00.975108] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:28.703 EAL: No free 2048 kB hugepages reported on node 1 00:21:28.703 [2024-07-24 23:07:01.049318] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:28.703 [2024-07-24 23:07:01.086450] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:28.703 [2024-07-24 23:07:01.086557] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:28.703 [2024-07-24 23:07:01.086567] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:28.703 [2024-07-24 23:07:01.086576] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:28.703 [2024-07-24 23:07:01.086594] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:28.962 [2024-07-24 23:07:01.275785] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:28.962 [2024-07-24 23:07:01.307810] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:28.962 [2024-07-24 23:07:01.308008] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:29.531 23:07:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:29.531 23:07:01 -- common/autotest_common.sh@852 -- # return 0 00:21:29.531 23:07:01 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:29.531 23:07:01 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:29.531 23:07:01 -- common/autotest_common.sh@10 -- # set +x 00:21:29.531 23:07:01 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:29.531 23:07:01 -- target/tls.sh@216 -- # bdevperf_pid=3257464 
00:21:29.531 23:07:01 -- target/tls.sh@217 -- # waitforlisten 3257464 /var/tmp/bdevperf.sock 00:21:29.531 23:07:01 -- common/autotest_common.sh@819 -- # '[' -z 3257464 ']' 00:21:29.531 23:07:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:29.531 23:07:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:29.531 23:07:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:29.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:29.531 23:07:01 -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:21:29.531 23:07:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:29.531 23:07:01 -- target/tls.sh@213 -- # echo '{ 00:21:29.531 "subsystems": [ 00:21:29.531 { 00:21:29.531 "subsystem": "iobuf", 00:21:29.531 "config": [ 00:21:29.531 { 00:21:29.531 "method": "iobuf_set_options", 00:21:29.531 "params": { 00:21:29.531 "small_pool_count": 8192, 00:21:29.531 "large_pool_count": 1024, 00:21:29.531 "small_bufsize": 8192, 00:21:29.531 "large_bufsize": 135168 00:21:29.531 } 00:21:29.531 } 00:21:29.531 ] 00:21:29.531 }, 00:21:29.531 { 00:21:29.531 "subsystem": "sock", 00:21:29.531 "config": [ 00:21:29.531 { 00:21:29.531 "method": "sock_impl_set_options", 00:21:29.531 "params": { 00:21:29.531 "impl_name": "posix", 00:21:29.531 "recv_buf_size": 2097152, 00:21:29.531 "send_buf_size": 2097152, 00:21:29.531 "enable_recv_pipe": true, 00:21:29.531 "enable_quickack": false, 00:21:29.531 "enable_placement_id": 0, 00:21:29.531 "enable_zerocopy_send_server": true, 00:21:29.531 "enable_zerocopy_send_client": false, 00:21:29.531 "zerocopy_threshold": 0, 00:21:29.531 "tls_version": 0, 00:21:29.531 "enable_ktls": false 00:21:29.531 } 00:21:29.531 }, 00:21:29.531 { 
00:21:29.531 "method": "sock_impl_set_options", 00:21:29.531 "params": { 00:21:29.531 "impl_name": "ssl", 00:21:29.531 "recv_buf_size": 4096, 00:21:29.531 "send_buf_size": 4096, 00:21:29.531 "enable_recv_pipe": true, 00:21:29.531 "enable_quickack": false, 00:21:29.531 "enable_placement_id": 0, 00:21:29.531 "enable_zerocopy_send_server": true, 00:21:29.531 "enable_zerocopy_send_client": false, 00:21:29.531 "zerocopy_threshold": 0, 00:21:29.531 "tls_version": 0, 00:21:29.531 "enable_ktls": false 00:21:29.531 } 00:21:29.531 } 00:21:29.531 ] 00:21:29.531 }, 00:21:29.531 { 00:21:29.531 "subsystem": "vmd", 00:21:29.531 "config": [] 00:21:29.531 }, 00:21:29.531 { 00:21:29.531 "subsystem": "accel", 00:21:29.531 "config": [ 00:21:29.531 { 00:21:29.531 "method": "accel_set_options", 00:21:29.531 "params": { 00:21:29.531 "small_cache_size": 128, 00:21:29.531 "large_cache_size": 16, 00:21:29.531 "task_count": 2048, 00:21:29.531 "sequence_count": 2048, 00:21:29.531 "buf_count": 2048 00:21:29.531 } 00:21:29.531 } 00:21:29.531 ] 00:21:29.531 }, 00:21:29.531 { 00:21:29.531 "subsystem": "bdev", 00:21:29.531 "config": [ 00:21:29.531 { 00:21:29.531 "method": "bdev_set_options", 00:21:29.531 "params": { 00:21:29.531 "bdev_io_pool_size": 65535, 00:21:29.531 "bdev_io_cache_size": 256, 00:21:29.531 "bdev_auto_examine": true, 00:21:29.532 "iobuf_small_cache_size": 128, 00:21:29.532 "iobuf_large_cache_size": 16 00:21:29.532 } 00:21:29.532 }, 00:21:29.532 { 00:21:29.532 "method": "bdev_raid_set_options", 00:21:29.532 "params": { 00:21:29.532 "process_window_size_kb": 1024 00:21:29.532 } 00:21:29.532 }, 00:21:29.532 { 00:21:29.532 "method": "bdev_iscsi_set_options", 00:21:29.532 "params": { 00:21:29.532 "timeout_sec": 30 00:21:29.532 } 00:21:29.532 }, 00:21:29.532 { 00:21:29.532 "method": "bdev_nvme_set_options", 00:21:29.532 "params": { 00:21:29.532 "action_on_timeout": "none", 00:21:29.532 "timeout_us": 0, 00:21:29.532 "timeout_admin_us": 0, 00:21:29.532 "keep_alive_timeout_ms": 10000, 
00:21:29.532 "transport_retry_count": 4, 00:21:29.532 "arbitration_burst": 0, 00:21:29.532 "low_priority_weight": 0, 00:21:29.532 "medium_priority_weight": 0, 00:21:29.532 "high_priority_weight": 0, 00:21:29.532 "nvme_adminq_poll_period_us": 10000, 00:21:29.532 "nvme_ioq_poll_period_us": 0, 00:21:29.532 "io_queue_requests": 512, 00:21:29.532 "delay_cmd_submit": true, 00:21:29.532 "bdev_retry_count": 3, 00:21:29.532 "transport_ack_timeout": 0, 00:21:29.532 "ctrlr_loss_timeout_sec": 0, 00:21:29.532 "reconnect_delay_sec": 0, 00:21:29.532 "fast_io_fail_timeout_sec": 0, 00:21:29.532 "generate_uuids": false, 00:21:29.532 "transport_tos": 0, 00:21:29.532 "io_path_stat": false, 00:21:29.532 "allow_accel_sequence": false 00:21:29.532 } 00:21:29.532 }, 00:21:29.532 { 00:21:29.532 "method": "bdev_nvme_attach_controller", 00:21:29.532 "params": { 00:21:29.532 "name": "TLSTEST", 00:21:29.532 "trtype": "TCP", 00:21:29.532 "adrfam": "IPv4", 00:21:29.532 "traddr": "10.0.0.2", 00:21:29.532 "trsvcid": "4420", 00:21:29.532 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:29.532 "prchk_reftag": false, 00:21:29.532 "prchk_guard": false, 00:21:29.532 "ctrlr_loss_timeout_sec": 0, 00:21:29.532 "reconnect_delay_sec": 0, 00:21:29.532 "fast_io_fail_timeout_sec": 0, 00:21:29.532 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:21:29.532 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:29.532 "hdgst": false, 00:21:29.532 "ddgst": false 00:21:29.532 } 00:21:29.532 }, 00:21:29.532 { 00:21:29.532 "method": "bdev_nvme_set_hotplug", 00:21:29.532 "params": { 00:21:29.532 "period_us": 100000, 00:21:29.532 "enable": false 00:21:29.532 } 00:21:29.532 }, 00:21:29.532 { 00:21:29.532 "method": "bdev_wait_for_examine" 00:21:29.532 } 00:21:29.532 ] 00:21:29.532 }, 00:21:29.532 { 00:21:29.532 "subsystem": "nbd", 00:21:29.532 "config": [] 00:21:29.532 } 00:21:29.532 ] 00:21:29.532 }' 00:21:29.532 23:07:01 -- common/autotest_common.sh@10 -- # set +x 00:21:29.532 
[2024-07-24 23:07:01.854969] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:21:29.532 [2024-07-24 23:07:01.855022] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3257464 ] 00:21:29.532 EAL: No free 2048 kB hugepages reported on node 1 00:21:29.532 [2024-07-24 23:07:01.923036] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:29.532 [2024-07-24 23:07:01.959506] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:29.790 [2024-07-24 23:07:02.087213] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:30.357 23:07:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:30.357 23:07:02 -- common/autotest_common.sh@852 -- # return 0 00:21:30.357 23:07:02 -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:30.357 Running I/O for 10 seconds... 
00:21:42.568 00:21:42.568 Latency(us) 00:21:42.568 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:42.568 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:42.568 Verification LBA range: start 0x0 length 0x2000 00:21:42.568 TLSTESTn1 : 10.03 3604.26 14.08 0.00 0.00 35456.13 3538.94 59139.69 00:21:42.568 =================================================================================================================== 00:21:42.568 Total : 3604.26 14.08 0.00 0.00 35456.13 3538.94 59139.69 00:21:42.568 0 00:21:42.568 23:07:12 -- target/tls.sh@222 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:42.568 23:07:12 -- target/tls.sh@223 -- # killprocess 3257464 00:21:42.568 23:07:12 -- common/autotest_common.sh@926 -- # '[' -z 3257464 ']' 00:21:42.568 23:07:12 -- common/autotest_common.sh@930 -- # kill -0 3257464 00:21:42.568 23:07:12 -- common/autotest_common.sh@931 -- # uname 00:21:42.568 23:07:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:42.568 23:07:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3257464 00:21:42.568 23:07:12 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:21:42.568 23:07:12 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:21:42.568 23:07:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3257464' 00:21:42.568 killing process with pid 3257464 00:21:42.568 23:07:12 -- common/autotest_common.sh@945 -- # kill 3257464 00:21:42.568 Received shutdown signal, test time was about 10.000000 seconds 00:21:42.568 00:21:42.568 Latency(us) 00:21:42.568 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:42.568 =================================================================================================================== 00:21:42.568 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:42.568 23:07:12 -- common/autotest_common.sh@950 -- # wait 3257464 00:21:42.568 23:07:13 -- 
target/tls.sh@224 -- # killprocess 3257365 00:21:42.568 23:07:13 -- common/autotest_common.sh@926 -- # '[' -z 3257365 ']' 00:21:42.568 23:07:13 -- common/autotest_common.sh@930 -- # kill -0 3257365 00:21:42.568 23:07:13 -- common/autotest_common.sh@931 -- # uname 00:21:42.568 23:07:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:42.568 23:07:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3257365 00:21:42.568 23:07:13 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:21:42.568 23:07:13 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:21:42.568 23:07:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3257365' 00:21:42.568 killing process with pid 3257365 00:21:42.568 23:07:13 -- common/autotest_common.sh@945 -- # kill 3257365 00:21:42.568 23:07:13 -- common/autotest_common.sh@950 -- # wait 3257365 00:21:42.568 23:07:13 -- target/tls.sh@226 -- # trap - SIGINT SIGTERM EXIT 00:21:42.568 23:07:13 -- target/tls.sh@227 -- # cleanup 00:21:42.568 23:07:13 -- target/tls.sh@15 -- # process_shm --id 0 00:21:42.568 23:07:13 -- common/autotest_common.sh@796 -- # type=--id 00:21:42.568 23:07:13 -- common/autotest_common.sh@797 -- # id=0 00:21:42.568 23:07:13 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:21:42.568 23:07:13 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:42.568 23:07:13 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:21:42.568 23:07:13 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:21:42.568 23:07:13 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:21:42.568 23:07:13 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:42.568 nvmf_trace.0 00:21:42.568 23:07:13 -- common/autotest_common.sh@811 -- # return 0 00:21:42.568 23:07:13 -- target/tls.sh@16 -- # killprocess 3257464 
00:21:42.568 23:07:13 -- common/autotest_common.sh@926 -- # '[' -z 3257464 ']' 00:21:42.568 23:07:13 -- common/autotest_common.sh@930 -- # kill -0 3257464 00:21:42.568 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (3257464) - No such process 00:21:42.568 23:07:13 -- common/autotest_common.sh@953 -- # echo 'Process with pid 3257464 is not found' 00:21:42.568 Process with pid 3257464 is not found 00:21:42.568 23:07:13 -- target/tls.sh@17 -- # nvmftestfini 00:21:42.568 23:07:13 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:42.568 23:07:13 -- nvmf/common.sh@116 -- # sync 00:21:42.568 23:07:13 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:42.568 23:07:13 -- nvmf/common.sh@119 -- # set +e 00:21:42.568 23:07:13 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:42.568 23:07:13 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:42.568 rmmod nvme_tcp 00:21:42.568 rmmod nvme_fabrics 00:21:42.568 rmmod nvme_keyring 00:21:42.568 23:07:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:42.568 23:07:13 -- nvmf/common.sh@123 -- # set -e 00:21:42.568 23:07:13 -- nvmf/common.sh@124 -- # return 0 00:21:42.568 23:07:13 -- nvmf/common.sh@477 -- # '[' -n 3257365 ']' 00:21:42.568 23:07:13 -- nvmf/common.sh@478 -- # killprocess 3257365 00:21:42.568 23:07:13 -- common/autotest_common.sh@926 -- # '[' -z 3257365 ']' 00:21:42.568 23:07:13 -- common/autotest_common.sh@930 -- # kill -0 3257365 00:21:42.568 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (3257365) - No such process 00:21:42.568 23:07:13 -- common/autotest_common.sh@953 -- # echo 'Process with pid 3257365 is not found' 00:21:42.568 Process with pid 3257365 is not found 00:21:42.568 23:07:13 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:42.568 23:07:13 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:42.568 23:07:13 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:42.568 23:07:13 -- 
nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:42.568 23:07:13 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:42.568 23:07:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:42.568 23:07:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:42.568 23:07:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:43.137 23:07:15 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:21:43.137 23:07:15 -- target/tls.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:43.137 00:21:43.137 real 1m13.018s 00:21:43.137 user 1m42.281s 00:21:43.137 sys 0m32.154s 00:21:43.137 23:07:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:43.137 23:07:15 -- common/autotest_common.sh@10 -- # set +x 00:21:43.137 ************************************ 00:21:43.137 END TEST nvmf_tls 00:21:43.137 ************************************ 00:21:43.137 23:07:15 -- nvmf/nvmf.sh@60 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:43.137 23:07:15 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:21:43.137 23:07:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:43.137 23:07:15 -- common/autotest_common.sh@10 -- # set +x 00:21:43.137 ************************************ 00:21:43.137 START TEST nvmf_fips 00:21:43.137 ************************************ 00:21:43.137 23:07:15 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:43.397 * Looking for test storage... 
00:21:43.397 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:21:43.397 23:07:15 -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:43.397 23:07:15 -- nvmf/common.sh@7 -- # uname -s 00:21:43.397 23:07:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:43.397 23:07:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:43.397 23:07:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:43.397 23:07:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:43.397 23:07:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:43.397 23:07:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:43.397 23:07:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:43.397 23:07:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:43.397 23:07:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:43.397 23:07:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:43.397 23:07:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:21:43.397 23:07:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:21:43.397 23:07:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:43.397 23:07:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:43.397 23:07:15 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:43.397 23:07:15 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:43.397 23:07:15 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:43.397 23:07:15 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:43.397 23:07:15 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:43.397 23:07:15 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.398 23:07:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.398 23:07:15 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.398 23:07:15 -- paths/export.sh@5 -- # export PATH 00:21:43.398 23:07:15 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.398 23:07:15 -- nvmf/common.sh@46 -- # : 0 00:21:43.398 23:07:15 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:43.398 23:07:15 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:43.398 23:07:15 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:43.398 23:07:15 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:43.398 23:07:15 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:43.398 23:07:15 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:43.398 23:07:15 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:43.398 23:07:15 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:43.398 23:07:15 -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:43.398 23:07:15 -- fips/fips.sh@89 -- # check_openssl_version 00:21:43.398 23:07:15 -- fips/fips.sh@83 -- # local target=3.0.0 00:21:43.398 23:07:15 -- fips/fips.sh@85 -- # openssl version 00:21:43.398 23:07:15 -- fips/fips.sh@85 -- # awk '{print $2}' 00:21:43.398 23:07:15 -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:21:43.398 23:07:15 -- scripts/common.sh@375 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:21:43.398 23:07:15 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:21:43.398 23:07:15 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:21:43.398 23:07:15 -- scripts/common.sh@335 -- # IFS=.-: 00:21:43.398 23:07:15 -- scripts/common.sh@335 -- # read -ra ver1 00:21:43.398 23:07:15 -- scripts/common.sh@336 -- # IFS=.-: 
00:21:43.398 23:07:15 -- scripts/common.sh@336 -- # read -ra ver2 00:21:43.398 23:07:15 -- scripts/common.sh@337 -- # local 'op=>=' 00:21:43.398 23:07:15 -- scripts/common.sh@339 -- # ver1_l=3 00:21:43.398 23:07:15 -- scripts/common.sh@340 -- # ver2_l=3 00:21:43.398 23:07:15 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:21:43.398 23:07:15 -- scripts/common.sh@343 -- # case "$op" in 00:21:43.398 23:07:15 -- scripts/common.sh@347 -- # : 1 00:21:43.398 23:07:15 -- scripts/common.sh@363 -- # (( v = 0 )) 00:21:43.398 23:07:15 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:43.398 23:07:15 -- scripts/common.sh@364 -- # decimal 3 00:21:43.398 23:07:15 -- scripts/common.sh@352 -- # local d=3 00:21:43.398 23:07:15 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:43.398 23:07:15 -- scripts/common.sh@354 -- # echo 3 00:21:43.398 23:07:15 -- scripts/common.sh@364 -- # ver1[v]=3 00:21:43.398 23:07:15 -- scripts/common.sh@365 -- # decimal 3 00:21:43.398 23:07:15 -- scripts/common.sh@352 -- # local d=3 00:21:43.398 23:07:15 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:43.398 23:07:15 -- scripts/common.sh@354 -- # echo 3 00:21:43.398 23:07:15 -- scripts/common.sh@365 -- # ver2[v]=3 00:21:43.398 23:07:15 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:21:43.398 23:07:15 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:21:43.398 23:07:15 -- scripts/common.sh@363 -- # (( v++ )) 00:21:43.398 23:07:15 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:43.398 23:07:15 -- scripts/common.sh@364 -- # decimal 0 00:21:43.398 23:07:15 -- scripts/common.sh@352 -- # local d=0 00:21:43.398 23:07:15 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:43.398 23:07:15 -- scripts/common.sh@354 -- # echo 0 00:21:43.398 23:07:15 -- scripts/common.sh@364 -- # ver1[v]=0 00:21:43.398 23:07:15 -- scripts/common.sh@365 -- # decimal 0 00:21:43.398 23:07:15 -- scripts/common.sh@352 -- # local d=0 00:21:43.398 23:07:15 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:43.398 23:07:15 -- scripts/common.sh@354 -- # echo 0 00:21:43.398 23:07:15 -- scripts/common.sh@365 -- # ver2[v]=0 00:21:43.398 23:07:15 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:21:43.398 23:07:15 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:21:43.398 23:07:15 -- scripts/common.sh@363 -- # (( v++ )) 00:21:43.398 23:07:15 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:43.398 23:07:15 -- scripts/common.sh@364 -- # decimal 9 00:21:43.398 23:07:15 -- scripts/common.sh@352 -- # local d=9 00:21:43.398 23:07:15 -- scripts/common.sh@353 -- # [[ 9 =~ ^[0-9]+$ ]] 00:21:43.398 23:07:15 -- scripts/common.sh@354 -- # echo 9 00:21:43.398 23:07:15 -- scripts/common.sh@364 -- # ver1[v]=9 00:21:43.398 23:07:15 -- scripts/common.sh@365 -- # decimal 0 00:21:43.398 23:07:15 -- scripts/common.sh@352 -- # local d=0 00:21:43.398 23:07:15 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:43.398 23:07:15 -- scripts/common.sh@354 -- # echo 0 00:21:43.398 23:07:15 -- scripts/common.sh@365 -- # ver2[v]=0 00:21:43.398 23:07:15 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:21:43.398 23:07:15 -- scripts/common.sh@366 -- # return 0 00:21:43.398 23:07:15 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:21:43.398 23:07:15 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:21:43.398 23:07:15 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:21:43.398 23:07:15 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:21:43.398 23:07:15 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:21:43.398 23:07:15 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:21:43.398 23:07:15 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:21:43.398 23:07:15 -- fips/fips.sh@105 -- # export OPENSSL_FORCE_FIPS_MODE=build_openssl_config 00:21:43.398 23:07:15 -- fips/fips.sh@105 -- # OPENSSL_FORCE_FIPS_MODE=build_openssl_config 00:21:43.398 23:07:15 -- fips/fips.sh@114 -- # build_openssl_config 00:21:43.398 23:07:15 -- fips/fips.sh@37 -- # cat 00:21:43.398 23:07:15 -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:21:43.398 23:07:15 -- fips/fips.sh@58 -- # cat - 00:21:43.398 23:07:15 -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:21:43.398 23:07:15 -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:21:43.398 23:07:15 -- fips/fips.sh@117 -- # mapfile -t providers 00:21:43.398 23:07:15 -- fips/fips.sh@117 -- # OPENSSL_CONF=spdk_fips.conf 00:21:43.398 23:07:15 -- fips/fips.sh@117 -- # openssl list -providers 00:21:43.398 23:07:15 -- fips/fips.sh@117 -- # grep name 00:21:43.658 23:07:15 -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:21:43.658 23:07:15 -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:21:43.658 23:07:15 -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:21:43.658 23:07:15 -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:21:43.658 23:07:15 -- common/autotest_common.sh@640 -- # local es=0 00:21:43.658 23:07:15 -- fips/fips.sh@128 -- # : 00:21:43.658 23:07:15 -- common/autotest_common.sh@642 -- # valid_exec_arg openssl md5 /dev/fd/62 00:21:43.658 23:07:15 -- common/autotest_common.sh@628 -- # local arg=openssl 00:21:43.658 23:07:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:43.658 23:07:15 -- common/autotest_common.sh@632 -- # type -t openssl 00:21:43.658 23:07:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:43.658 23:07:15 -- common/autotest_common.sh@634 -- # type -P openssl 00:21:43.658 23:07:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:43.658 23:07:15 -- common/autotest_common.sh@634 -- # arg=/usr/bin/openssl 00:21:43.658 23:07:15 -- common/autotest_common.sh@634 -- # [[ -x /usr/bin/openssl ]] 00:21:43.658 23:07:15 -- common/autotest_common.sh@643 -- # openssl md5 /dev/fd/62 00:21:43.658 Error setting digest 00:21:43.658 00A253664A7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, 
Algorithm (MD5 : 97), Properties () 00:21:43.658 00A253664A7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:21:43.658 23:07:15 -- common/autotest_common.sh@643 -- # es=1 00:21:43.658 23:07:15 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:21:43.658 23:07:15 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:21:43.658 23:07:15 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:21:43.658 23:07:15 -- fips/fips.sh@131 -- # nvmftestinit 00:21:43.658 23:07:15 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:43.658 23:07:15 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:43.658 23:07:15 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:43.658 23:07:15 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:43.658 23:07:15 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:43.658 23:07:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:43.658 23:07:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:43.658 23:07:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:43.658 23:07:15 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:21:43.658 23:07:15 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:43.658 23:07:15 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:43.658 23:07:15 -- common/autotest_common.sh@10 -- # set +x 00:21:50.272 23:07:22 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:50.272 23:07:22 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:50.272 23:07:22 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:50.272 23:07:22 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:50.272 23:07:22 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:50.272 23:07:22 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:50.272 23:07:22 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:50.272 23:07:22 -- nvmf/common.sh@294 -- # net_devs=() 00:21:50.272 23:07:22 -- nvmf/common.sh@294 -- 
# local -ga net_devs 00:21:50.272 23:07:22 -- nvmf/common.sh@295 -- # e810=() 00:21:50.272 23:07:22 -- nvmf/common.sh@295 -- # local -ga e810 00:21:50.272 23:07:22 -- nvmf/common.sh@296 -- # x722=() 00:21:50.272 23:07:22 -- nvmf/common.sh@296 -- # local -ga x722 00:21:50.272 23:07:22 -- nvmf/common.sh@297 -- # mlx=() 00:21:50.272 23:07:22 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:50.272 23:07:22 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:50.272 23:07:22 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:50.272 23:07:22 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:50.272 23:07:22 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:50.272 23:07:22 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:50.272 23:07:22 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:50.272 23:07:22 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:50.272 23:07:22 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:50.272 23:07:22 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:50.272 23:07:22 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:50.272 23:07:22 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:50.272 23:07:22 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:50.272 23:07:22 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:21:50.272 23:07:22 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:21:50.272 23:07:22 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:21:50.272 23:07:22 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:21:50.272 23:07:22 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:50.272 23:07:22 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:50.272 23:07:22 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:50.272 Found 
0000:af:00.0 (0x8086 - 0x159b) 00:21:50.272 23:07:22 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:50.272 23:07:22 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:50.272 23:07:22 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:50.272 23:07:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:50.272 23:07:22 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:50.272 23:07:22 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:50.272 23:07:22 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:50.272 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:50.272 23:07:22 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:50.272 23:07:22 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:50.272 23:07:22 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:50.272 23:07:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:50.272 23:07:22 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:50.272 23:07:22 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:50.272 23:07:22 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:21:50.272 23:07:22 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:21:50.272 23:07:22 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:50.272 23:07:22 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:50.272 23:07:22 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:50.272 23:07:22 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:50.272 23:07:22 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:50.272 Found net devices under 0000:af:00.0: cvl_0_0 00:21:50.272 23:07:22 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:50.272 23:07:22 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:50.272 23:07:22 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:50.272 23:07:22 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:50.272 
23:07:22 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:50.272 23:07:22 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:50.272 Found net devices under 0000:af:00.1: cvl_0_1 00:21:50.272 23:07:22 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:50.272 23:07:22 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:50.272 23:07:22 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:50.272 23:07:22 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:50.272 23:07:22 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:21:50.272 23:07:22 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:21:50.272 23:07:22 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:50.272 23:07:22 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:50.272 23:07:22 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:50.272 23:07:22 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:21:50.272 23:07:22 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:50.272 23:07:22 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:50.272 23:07:22 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:21:50.272 23:07:22 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:50.272 23:07:22 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:50.272 23:07:22 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:21:50.272 23:07:22 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:21:50.272 23:07:22 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:21:50.272 23:07:22 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:50.272 23:07:22 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:50.272 23:07:22 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:50.272 23:07:22 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:21:50.272 23:07:22 -- nvmf/common.sh@259 
-- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:50.531 23:07:22 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:50.531 23:07:22 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:50.531 23:07:22 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:21:50.531 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:50.531 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:21:50.531 00:21:50.531 --- 10.0.0.2 ping statistics --- 00:21:50.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:50.531 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:21:50.531 23:07:22 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:50.531 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:50.531 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:21:50.531 00:21:50.531 --- 10.0.0.1 ping statistics --- 00:21:50.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:50.531 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:21:50.531 23:07:22 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:50.531 23:07:22 -- nvmf/common.sh@410 -- # return 0 00:21:50.531 23:07:22 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:50.531 23:07:22 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:50.531 23:07:22 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:50.531 23:07:22 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:50.531 23:07:22 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:50.531 23:07:22 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:50.531 23:07:22 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:50.531 23:07:22 -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:21:50.531 23:07:22 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:50.531 23:07:22 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:50.531 23:07:22 -- 
common/autotest_common.sh@10 -- # set +x 00:21:50.531 23:07:22 -- nvmf/common.sh@469 -- # nvmfpid=3263127 00:21:50.531 23:07:22 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:50.531 23:07:22 -- nvmf/common.sh@470 -- # waitforlisten 3263127 00:21:50.531 23:07:22 -- common/autotest_common.sh@819 -- # '[' -z 3263127 ']' 00:21:50.531 23:07:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:50.531 23:07:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:50.531 23:07:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:50.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:50.531 23:07:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:50.531 23:07:22 -- common/autotest_common.sh@10 -- # set +x 00:21:50.531 [2024-07-24 23:07:22.853982] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:21:50.531 [2024-07-24 23:07:22.854035] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:50.531 EAL: No free 2048 kB hugepages reported on node 1 00:21:50.531 [2024-07-24 23:07:22.944920] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:50.791 [2024-07-24 23:07:22.981299] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:50.791 [2024-07-24 23:07:22.981405] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:50.791 [2024-07-24 23:07:22.981415] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:50.791 [2024-07-24 23:07:22.981424] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:50.791 [2024-07-24 23:07:22.981442] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:51.359 23:07:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:51.359 23:07:23 -- common/autotest_common.sh@852 -- # return 0 00:21:51.359 23:07:23 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:51.359 23:07:23 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:51.359 23:07:23 -- common/autotest_common.sh@10 -- # set +x 00:21:51.359 23:07:23 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:51.359 23:07:23 -- fips/fips.sh@134 -- # trap cleanup EXIT 00:21:51.359 23:07:23 -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:51.359 23:07:23 -- fips/fips.sh@138 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:51.359 23:07:23 -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:51.359 23:07:23 -- fips/fips.sh@140 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:51.360 23:07:23 -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:51.360 23:07:23 -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:51.360 23:07:23 -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:51.619 [2024-07-24 23:07:23.814021] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:51.619 [2024-07-24 23:07:23.830079] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:51.619 [2024-07-24 23:07:23.830248] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:21:51.619 malloc0 00:21:51.619 23:07:23 -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:51.619 23:07:23 -- fips/fips.sh@148 -- # bdevperf_pid=3263274 00:21:51.619 23:07:23 -- fips/fips.sh@149 -- # waitforlisten 3263274 /var/tmp/bdevperf.sock 00:21:51.619 23:07:23 -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:51.619 23:07:23 -- common/autotest_common.sh@819 -- # '[' -z 3263274 ']' 00:21:51.619 23:07:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:51.619 23:07:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:51.619 23:07:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:51.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:51.619 23:07:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:51.619 23:07:23 -- common/autotest_common.sh@10 -- # set +x 00:21:51.619 [2024-07-24 23:07:23.940959] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:21:51.619 [2024-07-24 23:07:23.941008] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3263274 ] 00:21:51.619 EAL: No free 2048 kB hugepages reported on node 1 00:21:51.619 [2024-07-24 23:07:24.006589] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:51.619 [2024-07-24 23:07:24.041956] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:52.556 23:07:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:52.556 23:07:24 -- common/autotest_common.sh@852 -- # return 0 00:21:52.556 23:07:24 -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:52.556 [2024-07-24 23:07:24.855969] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:52.556 TLSTESTn1 00:21:52.556 23:07:24 -- fips/fips.sh@155 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:52.815 Running I/O for 10 seconds... 
00:22:02.800 00:22:02.800 Latency(us) 00:22:02.800 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:02.800 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:02.800 Verification LBA range: start 0x0 length 0x2000 00:22:02.800 TLSTESTn1 : 10.02 3804.70 14.86 0.00 0.00 33607.34 6527.39 59978.55 00:22:02.800 =================================================================================================================== 00:22:02.800 Total : 3804.70 14.86 0.00 0.00 33607.34 6527.39 59978.55 00:22:02.800 0 00:22:02.800 23:07:35 -- fips/fips.sh@1 -- # cleanup 00:22:02.800 23:07:35 -- fips/fips.sh@15 -- # process_shm --id 0 00:22:02.800 23:07:35 -- common/autotest_common.sh@796 -- # type=--id 00:22:02.800 23:07:35 -- common/autotest_common.sh@797 -- # id=0 00:22:02.800 23:07:35 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:22:02.800 23:07:35 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:02.800 23:07:35 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:22:02.800 23:07:35 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:22:02.800 23:07:35 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:22:02.800 23:07:35 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:02.800 nvmf_trace.0 00:22:02.800 23:07:35 -- common/autotest_common.sh@811 -- # return 0 00:22:02.800 23:07:35 -- fips/fips.sh@16 -- # killprocess 3263274 00:22:02.800 23:07:35 -- common/autotest_common.sh@926 -- # '[' -z 3263274 ']' 00:22:02.800 23:07:35 -- common/autotest_common.sh@930 -- # kill -0 3263274 00:22:02.800 23:07:35 -- common/autotest_common.sh@931 -- # uname 00:22:02.800 23:07:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:02.800 23:07:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3263274 00:22:02.800 
23:07:35 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:22:02.800 23:07:35 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:22:02.800 23:07:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3263274' 00:22:02.800 killing process with pid 3263274 00:22:02.800 23:07:35 -- common/autotest_common.sh@945 -- # kill 3263274 00:22:02.800 Received shutdown signal, test time was about 10.000000 seconds 00:22:02.800 00:22:02.800 Latency(us) 00:22:02.800 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:02.800 =================================================================================================================== 00:22:02.800 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:02.800 23:07:35 -- common/autotest_common.sh@950 -- # wait 3263274 00:22:03.060 23:07:35 -- fips/fips.sh@17 -- # nvmftestfini 00:22:03.060 23:07:35 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:03.060 23:07:35 -- nvmf/common.sh@116 -- # sync 00:22:03.060 23:07:35 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:03.060 23:07:35 -- nvmf/common.sh@119 -- # set +e 00:22:03.060 23:07:35 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:03.060 23:07:35 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:03.060 rmmod nvme_tcp 00:22:03.060 rmmod nvme_fabrics 00:22:03.060 rmmod nvme_keyring 00:22:03.060 23:07:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:03.060 23:07:35 -- nvmf/common.sh@123 -- # set -e 00:22:03.060 23:07:35 -- nvmf/common.sh@124 -- # return 0 00:22:03.060 23:07:35 -- nvmf/common.sh@477 -- # '[' -n 3263127 ']' 00:22:03.060 23:07:35 -- nvmf/common.sh@478 -- # killprocess 3263127 00:22:03.060 23:07:35 -- common/autotest_common.sh@926 -- # '[' -z 3263127 ']' 00:22:03.060 23:07:35 -- common/autotest_common.sh@930 -- # kill -0 3263127 00:22:03.060 23:07:35 -- common/autotest_common.sh@931 -- # uname 00:22:03.060 23:07:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 
00:22:03.060 23:07:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3263127 00:22:03.320 23:07:35 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:03.320 23:07:35 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:03.320 23:07:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3263127' 00:22:03.320 killing process with pid 3263127 00:22:03.320 23:07:35 -- common/autotest_common.sh@945 -- # kill 3263127 00:22:03.320 23:07:35 -- common/autotest_common.sh@950 -- # wait 3263127 00:22:03.320 23:07:35 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:03.320 23:07:35 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:03.320 23:07:35 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:03.320 23:07:35 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:03.320 23:07:35 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:03.320 23:07:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:03.320 23:07:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:03.320 23:07:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:05.859 23:07:37 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:22:05.859 23:07:37 -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:05.859 00:22:05.859 real 0m22.187s 00:22:05.859 user 0m21.515s 00:22:05.859 sys 0m11.406s 00:22:05.859 23:07:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:05.859 23:07:37 -- common/autotest_common.sh@10 -- # set +x 00:22:05.859 ************************************ 00:22:05.859 END TEST nvmf_fips 00:22:05.859 ************************************ 00:22:05.859 23:07:37 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:22:05.859 23:07:37 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:22:05.859 23:07:37 -- 
common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:22:05.859 23:07:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:05.859 23:07:37 -- common/autotest_common.sh@10 -- # set +x 00:22:05.859 ************************************ 00:22:05.859 START TEST nvmf_fuzz 00:22:05.859 ************************************ 00:22:05.859 23:07:37 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:22:05.859 * Looking for test storage... 00:22:05.859 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:05.859 23:07:37 -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:05.859 23:07:37 -- nvmf/common.sh@7 -- # uname -s 00:22:05.859 23:07:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:05.859 23:07:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:05.859 23:07:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:05.859 23:07:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:05.859 23:07:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:05.859 23:07:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:05.859 23:07:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:05.859 23:07:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:05.859 23:07:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:05.859 23:07:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:05.859 23:07:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:22:05.859 23:07:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:22:05.859 23:07:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:05.859 23:07:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:05.859 23:07:37 -- nvmf/common.sh@21 -- # 
NET_TYPE=phy 00:22:05.859 23:07:37 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:05.859 23:07:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:05.859 23:07:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:05.859 23:07:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:05.859 23:07:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.859 23:07:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.859 23:07:37 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.859 23:07:37 -- paths/export.sh@5 -- # export PATH 00:22:05.859 23:07:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.859 23:07:37 -- nvmf/common.sh@46 -- # : 0 00:22:05.859 23:07:37 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:05.859 23:07:37 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:05.859 23:07:37 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:05.859 23:07:37 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:05.859 23:07:37 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:05.859 23:07:37 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:05.859 23:07:37 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:05.859 23:07:37 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:05.859 23:07:37 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:22:05.859 23:07:37 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:05.859 23:07:37 -- nvmf/common.sh@434 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:22:05.859 23:07:37 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:05.859 23:07:37 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:05.859 23:07:37 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:05.859 23:07:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:05.859 23:07:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:05.859 23:07:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:05.859 23:07:37 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:22:05.859 23:07:37 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:22:05.859 23:07:37 -- nvmf/common.sh@284 -- # xtrace_disable 00:22:05.859 23:07:37 -- common/autotest_common.sh@10 -- # set +x 00:22:12.430 23:07:44 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:12.430 23:07:44 -- nvmf/common.sh@290 -- # pci_devs=() 00:22:12.430 23:07:44 -- nvmf/common.sh@290 -- # local -a pci_devs 00:22:12.430 23:07:44 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:22:12.430 23:07:44 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:22:12.430 23:07:44 -- nvmf/common.sh@292 -- # pci_drivers=() 00:22:12.430 23:07:44 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:22:12.430 23:07:44 -- nvmf/common.sh@294 -- # net_devs=() 00:22:12.430 23:07:44 -- nvmf/common.sh@294 -- # local -ga net_devs 00:22:12.430 23:07:44 -- nvmf/common.sh@295 -- # e810=() 00:22:12.430 23:07:44 -- nvmf/common.sh@295 -- # local -ga e810 00:22:12.430 23:07:44 -- nvmf/common.sh@296 -- # x722=() 00:22:12.430 23:07:44 -- nvmf/common.sh@296 -- # local -ga x722 00:22:12.430 23:07:44 -- nvmf/common.sh@297 -- # mlx=() 00:22:12.430 23:07:44 -- nvmf/common.sh@297 -- # local -ga mlx 00:22:12.430 23:07:44 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:12.430 23:07:44 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:12.430 23:07:44 -- nvmf/common.sh@303 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:12.430 23:07:44 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:12.430 23:07:44 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:12.430 23:07:44 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:12.430 23:07:44 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:12.430 23:07:44 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:12.430 23:07:44 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:12.430 23:07:44 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:12.430 23:07:44 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:12.431 23:07:44 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:22:12.431 23:07:44 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:22:12.431 23:07:44 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:22:12.431 23:07:44 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:22:12.431 23:07:44 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:22:12.431 23:07:44 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:22:12.431 23:07:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:12.431 23:07:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:12.431 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:12.431 23:07:44 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:12.431 23:07:44 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:12.431 23:07:44 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:12.431 23:07:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:12.431 23:07:44 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:12.431 23:07:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:12.431 23:07:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:12.431 Found 0000:af:00.1 (0x8086 - 
0x159b) 00:22:12.431 23:07:44 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:12.431 23:07:44 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:12.431 23:07:44 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:12.431 23:07:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:12.431 23:07:44 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:12.431 23:07:44 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:22:12.431 23:07:44 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:22:12.431 23:07:44 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:22:12.431 23:07:44 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:12.431 23:07:44 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:12.431 23:07:44 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:12.431 23:07:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:12.431 23:07:44 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:12.431 Found net devices under 0000:af:00.0: cvl_0_0 00:22:12.431 23:07:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:12.431 23:07:44 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:12.431 23:07:44 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:12.431 23:07:44 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:12.431 23:07:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:12.431 23:07:44 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:12.431 Found net devices under 0000:af:00.1: cvl_0_1 00:22:12.431 23:07:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:12.431 23:07:44 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:22:12.431 23:07:44 -- nvmf/common.sh@402 -- # is_hw=yes 00:22:12.431 23:07:44 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:22:12.431 23:07:44 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:22:12.431 23:07:44 -- 
nvmf/common.sh@406 -- # nvmf_tcp_init 00:22:12.431 23:07:44 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:12.431 23:07:44 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:12.431 23:07:44 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:12.431 23:07:44 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:22:12.431 23:07:44 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:12.431 23:07:44 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:12.431 23:07:44 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:22:12.431 23:07:44 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:12.431 23:07:44 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:12.431 23:07:44 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:22:12.431 23:07:44 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:22:12.431 23:07:44 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:22:12.431 23:07:44 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:12.431 23:07:44 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:12.431 23:07:44 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:12.431 23:07:44 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:22:12.431 23:07:44 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:12.431 23:07:44 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:12.431 23:07:44 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:12.431 23:07:44 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:22:12.431 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:12.431 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.297 ms 00:22:12.431 00:22:12.431 --- 10.0.0.2 ping statistics --- 00:22:12.431 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:12.431 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:22:12.431 23:07:44 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:12.431 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:12.431 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:22:12.431 00:22:12.431 --- 10.0.0.1 ping statistics --- 00:22:12.431 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:12.431 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:22:12.431 23:07:44 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:12.431 23:07:44 -- nvmf/common.sh@410 -- # return 0 00:22:12.431 23:07:44 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:12.431 23:07:44 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:12.431 23:07:44 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:12.431 23:07:44 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:12.431 23:07:44 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:12.431 23:07:44 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:12.431 23:07:44 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:12.431 23:07:44 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=3268849 00:22:12.431 23:07:44 -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:22:12.431 23:07:44 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:22:12.431 23:07:44 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 3268849 00:22:12.431 23:07:44 -- common/autotest_common.sh@819 -- # '[' -z 3268849 ']' 00:22:12.431 23:07:44 -- common/autotest_common.sh@823 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:22:12.431 23:07:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:12.431 23:07:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:12.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:12.431 23:07:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:12.431 23:07:44 -- common/autotest_common.sh@10 -- # set +x 00:22:12.999 23:07:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:12.999 23:07:45 -- common/autotest_common.sh@852 -- # return 0 00:22:12.999 23:07:45 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:12.999 23:07:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:12.999 23:07:45 -- common/autotest_common.sh@10 -- # set +x 00:22:13.258 23:07:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:13.258 23:07:45 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:22:13.258 23:07:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:13.258 23:07:45 -- common/autotest_common.sh@10 -- # set +x 00:22:13.258 Malloc0 00:22:13.258 23:07:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:13.258 23:07:45 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:13.258 23:07:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:13.258 23:07:45 -- common/autotest_common.sh@10 -- # set +x 00:22:13.258 23:07:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:13.258 23:07:45 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:13.258 23:07:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:13.258 23:07:45 -- common/autotest_common.sh@10 -- # set +x 00:22:13.258 23:07:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
00:22:13.258 23:07:45 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:13.258 23:07:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:13.258 23:07:45 -- common/autotest_common.sh@10 -- # set +x 00:22:13.258 23:07:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:13.258 23:07:45 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:22:13.258 23:07:45 -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:22:45.375 Fuzzing completed. Shutting down the fuzz application 00:22:45.375 00:22:45.375 Dumping successful admin opcodes: 00:22:45.375 8, 9, 10, 24, 00:22:45.375 Dumping successful io opcodes: 00:22:45.375 0, 9, 00:22:45.375 NS: 0x200003aeff00 I/O qp, Total commands completed: 760665, total successful commands: 4432, random_seed: 1108070016 00:22:45.375 NS: 0x200003aeff00 admin qp, Total commands completed: 86990, total successful commands: 694, random_seed: 3389282176 00:22:45.375 23:08:15 -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:22:45.375 Fuzzing completed. 
Shutting down the fuzz application 00:22:45.375 00:22:45.375 Dumping successful admin opcodes: 00:22:45.375 24, 00:22:45.375 Dumping successful io opcodes: 00:22:45.375 00:22:45.375 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 317655012 00:22:45.375 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 317734794 00:22:45.375 23:08:17 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:45.375 23:08:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:45.375 23:08:17 -- common/autotest_common.sh@10 -- # set +x 00:22:45.375 23:08:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:45.375 23:08:17 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:22:45.375 23:08:17 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:22:45.375 23:08:17 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:45.375 23:08:17 -- nvmf/common.sh@116 -- # sync 00:22:45.375 23:08:17 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:45.375 23:08:17 -- nvmf/common.sh@119 -- # set +e 00:22:45.375 23:08:17 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:45.375 23:08:17 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:45.375 rmmod nvme_tcp 00:22:45.375 rmmod nvme_fabrics 00:22:45.375 rmmod nvme_keyring 00:22:45.375 23:08:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:45.375 23:08:17 -- nvmf/common.sh@123 -- # set -e 00:22:45.375 23:08:17 -- nvmf/common.sh@124 -- # return 0 00:22:45.375 23:08:17 -- nvmf/common.sh@477 -- # '[' -n 3268849 ']' 00:22:45.375 23:08:17 -- nvmf/common.sh@478 -- # killprocess 3268849 00:22:45.375 23:08:17 -- common/autotest_common.sh@926 -- # '[' -z 3268849 ']' 00:22:45.375 23:08:17 -- common/autotest_common.sh@930 -- # kill -0 3268849 00:22:45.375 23:08:17 -- common/autotest_common.sh@931 -- # uname 00:22:45.375 23:08:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 
00:22:45.375 23:08:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3268849 00:22:45.375 23:08:17 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:45.375 23:08:17 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:45.375 23:08:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3268849' 00:22:45.375 killing process with pid 3268849 00:22:45.375 23:08:17 -- common/autotest_common.sh@945 -- # kill 3268849 00:22:45.375 23:08:17 -- common/autotest_common.sh@950 -- # wait 3268849 00:22:45.375 23:08:17 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:45.375 23:08:17 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:45.375 23:08:17 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:45.375 23:08:17 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:45.375 23:08:17 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:45.375 23:08:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:45.375 23:08:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:45.375 23:08:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:47.281 23:08:19 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:22:47.281 23:08:19 -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:22:47.281 00:22:47.281 real 0m41.740s 00:22:47.281 user 0m50.618s 00:22:47.281 sys 0m20.266s 00:22:47.281 23:08:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:47.281 23:08:19 -- common/autotest_common.sh@10 -- # set +x 00:22:47.281 ************************************ 00:22:47.281 END TEST nvmf_fuzz 00:22:47.281 ************************************ 00:22:47.281 23:08:19 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh 
--transport=tcp 00:22:47.281 23:08:19 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:22:47.281 23:08:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:47.281 23:08:19 -- common/autotest_common.sh@10 -- # set +x 00:22:47.281 ************************************ 00:22:47.281 START TEST nvmf_multiconnection 00:22:47.281 ************************************ 00:22:47.281 23:08:19 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:22:47.281 * Looking for test storage... 00:22:47.281 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:47.281 23:08:19 -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:47.281 23:08:19 -- nvmf/common.sh@7 -- # uname -s 00:22:47.542 23:08:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:47.542 23:08:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:47.542 23:08:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:47.542 23:08:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:47.542 23:08:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:47.542 23:08:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:47.542 23:08:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:47.542 23:08:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:47.542 23:08:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:47.542 23:08:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:47.542 23:08:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:22:47.542 23:08:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:22:47.542 23:08:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:47.542 23:08:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme 
connect' 00:22:47.542 23:08:19 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:47.542 23:08:19 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:47.542 23:08:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:47.542 23:08:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:47.542 23:08:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:47.542 23:08:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.542 23:08:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.542 23:08:19 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.542 23:08:19 -- paths/export.sh@5 -- # export PATH 00:22:47.542 23:08:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.542 23:08:19 -- nvmf/common.sh@46 -- # : 0 00:22:47.542 23:08:19 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:47.542 23:08:19 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:47.542 23:08:19 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:47.542 23:08:19 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:47.542 23:08:19 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:47.542 23:08:19 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:47.542 23:08:19 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:47.542 23:08:19 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:47.542 23:08:19 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:47.542 23:08:19 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:47.542 23:08:19 -- 
target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:22:47.542 23:08:19 -- target/multiconnection.sh@16 -- # nvmftestinit 00:22:47.542 23:08:19 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:47.542 23:08:19 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:47.542 23:08:19 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:47.542 23:08:19 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:47.542 23:08:19 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:47.542 23:08:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:47.542 23:08:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:47.542 23:08:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:47.542 23:08:19 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:22:47.542 23:08:19 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:22:47.542 23:08:19 -- nvmf/common.sh@284 -- # xtrace_disable 00:22:47.542 23:08:19 -- common/autotest_common.sh@10 -- # set +x 00:22:54.117 23:08:26 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:54.117 23:08:26 -- nvmf/common.sh@290 -- # pci_devs=() 00:22:54.117 23:08:26 -- nvmf/common.sh@290 -- # local -a pci_devs 00:22:54.117 23:08:26 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:22:54.117 23:08:26 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:22:54.117 23:08:26 -- nvmf/common.sh@292 -- # pci_drivers=() 00:22:54.117 23:08:26 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:22:54.117 23:08:26 -- nvmf/common.sh@294 -- # net_devs=() 00:22:54.117 23:08:26 -- nvmf/common.sh@294 -- # local -ga net_devs 00:22:54.117 23:08:26 -- nvmf/common.sh@295 -- # e810=() 00:22:54.117 23:08:26 -- nvmf/common.sh@295 -- # local -ga e810 00:22:54.117 23:08:26 -- nvmf/common.sh@296 -- # x722=() 00:22:54.117 23:08:26 -- nvmf/common.sh@296 -- # local -ga x722 00:22:54.117 23:08:26 -- nvmf/common.sh@297 -- # mlx=() 00:22:54.117 23:08:26 -- nvmf/common.sh@297 -- # local -ga mlx 00:22:54.117 
23:08:26 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:54.117 23:08:26 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:54.117 23:08:26 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:54.117 23:08:26 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:54.117 23:08:26 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:54.117 23:08:26 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:54.118 23:08:26 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:54.118 23:08:26 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:54.118 23:08:26 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:54.118 23:08:26 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:54.118 23:08:26 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:54.118 23:08:26 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:22:54.118 23:08:26 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:22:54.118 23:08:26 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:22:54.118 23:08:26 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:22:54.118 23:08:26 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:22:54.118 23:08:26 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:22:54.118 23:08:26 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:54.118 23:08:26 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:54.118 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:54.118 23:08:26 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:54.118 23:08:26 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:54.118 23:08:26 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:54.118 23:08:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:54.118 23:08:26 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 
00:22:54.118 23:08:26 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:54.118 23:08:26 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:54.118 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:54.118 23:08:26 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:54.118 23:08:26 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:54.118 23:08:26 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:54.118 23:08:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:54.118 23:08:26 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:54.118 23:08:26 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:22:54.118 23:08:26 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:22:54.118 23:08:26 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:22:54.118 23:08:26 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:54.118 23:08:26 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:54.118 23:08:26 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:54.118 23:08:26 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:54.118 23:08:26 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:54.118 Found net devices under 0000:af:00.0: cvl_0_0 00:22:54.118 23:08:26 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:54.118 23:08:26 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:54.118 23:08:26 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:54.118 23:08:26 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:54.118 23:08:26 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:54.118 23:08:26 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:54.118 Found net devices under 0000:af:00.1: cvl_0_1 00:22:54.118 23:08:26 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:54.118 23:08:26 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:22:54.118 
23:08:26 -- nvmf/common.sh@402 -- # is_hw=yes 00:22:54.118 23:08:26 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:22:54.118 23:08:26 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:22:54.118 23:08:26 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:22:54.118 23:08:26 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:54.118 23:08:26 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:54.118 23:08:26 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:54.118 23:08:26 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:22:54.118 23:08:26 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:54.118 23:08:26 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:54.118 23:08:26 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:22:54.118 23:08:26 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:54.118 23:08:26 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:54.118 23:08:26 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:22:54.118 23:08:26 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:22:54.118 23:08:26 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:22:54.118 23:08:26 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:54.118 23:08:26 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:54.118 23:08:26 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:54.118 23:08:26 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:22:54.118 23:08:26 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:54.378 23:08:26 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:54.378 23:08:26 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:54.378 23:08:26 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:22:54.378 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:54.378 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.165 ms 00:22:54.378 00:22:54.378 --- 10.0.0.2 ping statistics --- 00:22:54.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:54.378 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:22:54.378 23:08:26 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:54.378 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:54.378 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:22:54.378 00:22:54.378 --- 10.0.0.1 ping statistics --- 00:22:54.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:54.378 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:22:54.378 23:08:26 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:54.378 23:08:26 -- nvmf/common.sh@410 -- # return 0 00:22:54.378 23:08:26 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:54.378 23:08:26 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:54.378 23:08:26 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:54.378 23:08:26 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:54.378 23:08:26 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:54.378 23:08:26 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:54.378 23:08:26 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:54.378 23:08:26 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:22:54.378 23:08:26 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:54.378 23:08:26 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:54.378 23:08:26 -- common/autotest_common.sh@10 -- # set +x 00:22:54.378 23:08:26 -- nvmf/common.sh@469 -- # nvmfpid=3278127 00:22:54.378 23:08:26 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:54.378 23:08:26 -- nvmf/common.sh@470 -- # waitforlisten 3278127 00:22:54.378 23:08:26 -- 
common/autotest_common.sh@819 -- # '[' -z 3278127 ']' 00:22:54.378 23:08:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:54.378 23:08:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:54.378 23:08:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:54.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:54.378 23:08:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:54.378 23:08:26 -- common/autotest_common.sh@10 -- # set +x 00:22:54.378 [2024-07-24 23:08:26.758137] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:22:54.378 [2024-07-24 23:08:26.758181] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:54.378 EAL: No free 2048 kB hugepages reported on node 1 00:22:54.637 [2024-07-24 23:08:26.837030] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:54.637 [2024-07-24 23:08:26.875911] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:54.637 [2024-07-24 23:08:26.876039] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:54.637 [2024-07-24 23:08:26.876049] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:54.637 [2024-07-24 23:08:26.876058] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:54.637 [2024-07-24 23:08:26.876110] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:54.637 [2024-07-24 23:08:26.876129] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:54.637 [2024-07-24 23:08:26.876148] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:54.637 [2024-07-24 23:08:26.876154] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:55.206 23:08:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:55.206 23:08:27 -- common/autotest_common.sh@852 -- # return 0 00:22:55.206 23:08:27 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:55.206 23:08:27 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:55.206 23:08:27 -- common/autotest_common.sh@10 -- # set +x 00:22:55.206 23:08:27 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:55.206 23:08:27 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:55.206 23:08:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:55.206 23:08:27 -- common/autotest_common.sh@10 -- # set +x 00:22:55.206 [2024-07-24 23:08:27.609158] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:55.206 23:08:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:55.206 23:08:27 -- target/multiconnection.sh@21 -- # seq 1 11 00:22:55.206 23:08:27 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:55.206 23:08:27 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:55.206 23:08:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:55.206 23:08:27 -- common/autotest_common.sh@10 -- # set +x 00:22:55.466 Malloc1 00:22:55.466 23:08:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:55.466 23:08:27 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:22:55.466 23:08:27 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:22:55.466 23:08:27 -- common/autotest_common.sh@10 -- # set +x 00:22:55.466 23:08:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:55.466 23:08:27 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:55.466 23:08:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:55.466 23:08:27 -- common/autotest_common.sh@10 -- # set +x 00:22:55.466 23:08:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:55.466 23:08:27 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:55.466 23:08:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:55.466 23:08:27 -- common/autotest_common.sh@10 -- # set +x 00:22:55.466 [2024-07-24 23:08:27.672063] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:55.466 23:08:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:55.466 23:08:27 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:55.466 23:08:27 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:22:55.466 23:08:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:55.466 23:08:27 -- common/autotest_common.sh@10 -- # set +x 00:22:55.466 Malloc2 00:22:55.466 23:08:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:55.466 23:08:27 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:22:55.466 23:08:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:55.466 23:08:27 -- common/autotest_common.sh@10 -- # set +x 00:22:55.466 23:08:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:55.466 23:08:27 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:22:55.466 23:08:27 -- common/autotest_common.sh@551 -- # xtrace_disable 
00:22:55.466 23:08:27 -- common/autotest_common.sh@10 -- # set +x 00:22:55.466 23:08:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:55.466 23:08:27 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:22:55.466 23:08:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:55.466 23:08:27 -- common/autotest_common.sh@10 -- # set +x 00:22:55.466 23:08:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:55.466 23:08:27 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:55.466 23:08:27 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:22:55.466 23:08:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:55.466 23:08:27 -- common/autotest_common.sh@10 -- # set +x 00:22:55.466 Malloc3 00:22:55.466 23:08:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:55.466 23:08:27 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:22:55.466 23:08:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:55.466 23:08:27 -- common/autotest_common.sh@10 -- # set +x 00:22:55.466 23:08:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:55.466 23:08:27 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:22:55.466 23:08:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:55.466 23:08:27 -- common/autotest_common.sh@10 -- # set +x 00:22:55.466 23:08:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:55.466 23:08:27 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:22:55.466 23:08:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:55.466 23:08:27 -- common/autotest_common.sh@10 -- # set +x 00:22:55.466 23:08:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:55.466 23:08:27 -- 
target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:55.466 23:08:27 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:22:55.466 23:08:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:55.466 23:08:27 -- common/autotest_common.sh@10 -- # set +x 00:22:55.466 Malloc4 00:22:55.466 23:08:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:55.466 23:08:27 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:22:55.466 23:08:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:55.466 23:08:27 -- common/autotest_common.sh@10 -- # set +x 00:22:55.466 23:08:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:55.466 23:08:27 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:22:55.467 23:08:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:55.467 23:08:27 -- common/autotest_common.sh@10 -- # set +x 00:22:55.467 23:08:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:55.467 23:08:27 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:22:55.467 23:08:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:55.467 23:08:27 -- common/autotest_common.sh@10 -- # set +x 00:22:55.467 23:08:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:55.467 23:08:27 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:55.467 23:08:27 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:22:55.467 23:08:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:55.467 23:08:27 -- common/autotest_common.sh@10 -- # set +x 00:22:55.467 Malloc5 00:22:55.467 23:08:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:55.467 23:08:27 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s 
SPDK5 00:22:55.467 23:08:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:55.467 23:08:27 -- common/autotest_common.sh@10 -- # set +x 00:22:55.467 23:08:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:55.467 23:08:27 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:22:55.467 23:08:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:55.467 23:08:27 -- common/autotest_common.sh@10 -- # set +x 00:22:55.467 23:08:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:55.467 23:08:27 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:22:55.467 23:08:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:55.467 23:08:27 -- common/autotest_common.sh@10 -- # set +x 00:22:55.467 23:08:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:55.467 23:08:27 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:55.467 23:08:27 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:22:55.467 23:08:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:55.467 23:08:27 -- common/autotest_common.sh@10 -- # set +x 00:22:55.467 Malloc6 00:22:55.467 23:08:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:55.467 23:08:27 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:22:55.467 23:08:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:55.467 23:08:27 -- common/autotest_common.sh@10 -- # set +x 00:22:55.467 23:08:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:55.467 23:08:27 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:22:55.467 23:08:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:55.467 23:08:27 -- common/autotest_common.sh@10 -- # set +x 00:22:55.467 23:08:27 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:55.467 23:08:27 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:22:55.467 23:08:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:55.467 23:08:27 -- common/autotest_common.sh@10 -- # set +x 00:22:55.467 23:08:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:55.467 23:08:27 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:55.467 23:08:27 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:22:55.467 23:08:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:55.467 23:08:27 -- common/autotest_common.sh@10 -- # set +x 00:22:55.727 Malloc7 00:22:55.727 23:08:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:55.727 23:08:27 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:22:55.727 23:08:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:55.727 23:08:27 -- common/autotest_common.sh@10 -- # set +x 00:22:55.727 23:08:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:55.727 23:08:27 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:22:55.727 23:08:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:55.727 23:08:27 -- common/autotest_common.sh@10 -- # set +x 00:22:55.727 23:08:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:55.727 23:08:27 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:22:55.727 23:08:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:55.727 23:08:27 -- common/autotest_common.sh@10 -- # set +x 00:22:55.727 23:08:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:55.727 23:08:27 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:55.727 23:08:27 -- 
target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:22:55.727 23:08:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:55.727 23:08:27 -- common/autotest_common.sh@10 -- # set +x 00:22:55.727 Malloc8 00:22:55.727 23:08:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:55.727 23:08:27 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:22:55.727 23:08:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:55.727 23:08:27 -- common/autotest_common.sh@10 -- # set +x 00:22:55.727 23:08:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:55.727 23:08:27 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:22:55.727 23:08:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:55.727 23:08:27 -- common/autotest_common.sh@10 -- # set +x 00:22:55.727 23:08:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:55.727 23:08:27 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:22:55.727 23:08:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:55.727 23:08:27 -- common/autotest_common.sh@10 -- # set +x 00:22:55.727 23:08:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:55.727 23:08:27 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:55.727 23:08:27 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:22:55.727 23:08:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:55.727 23:08:27 -- common/autotest_common.sh@10 -- # set +x 00:22:55.727 Malloc9 00:22:55.727 23:08:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:55.727 23:08:27 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:22:55.727 23:08:27 -- common/autotest_common.sh@551 -- # xtrace_disable 
00:22:55.727 23:08:27 -- common/autotest_common.sh@10 -- # set +x 00:22:55.727 23:08:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:55.728 23:08:28 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:22:55.728 23:08:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:55.728 23:08:28 -- common/autotest_common.sh@10 -- # set +x 00:22:55.728 23:08:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:55.728 23:08:28 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:22:55.728 23:08:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:55.728 23:08:28 -- common/autotest_common.sh@10 -- # set +x 00:22:55.728 23:08:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:55.728 23:08:28 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:55.728 23:08:28 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:22:55.728 23:08:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:55.728 23:08:28 -- common/autotest_common.sh@10 -- # set +x 00:22:55.728 Malloc10 00:22:55.728 23:08:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:55.728 23:08:28 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:22:55.728 23:08:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:55.728 23:08:28 -- common/autotest_common.sh@10 -- # set +x 00:22:55.728 23:08:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:55.728 23:08:28 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:22:55.728 23:08:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:55.728 23:08:28 -- common/autotest_common.sh@10 -- # set +x 00:22:55.728 23:08:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:55.728 23:08:28 -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:22:55.728 23:08:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:55.728 23:08:28 -- common/autotest_common.sh@10 -- # set +x 00:22:55.728 23:08:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:55.728 23:08:28 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:55.728 23:08:28 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:22:55.728 23:08:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:55.728 23:08:28 -- common/autotest_common.sh@10 -- # set +x 00:22:55.728 Malloc11 00:22:55.728 23:08:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:55.728 23:08:28 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:22:55.728 23:08:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:55.728 23:08:28 -- common/autotest_common.sh@10 -- # set +x 00:22:55.728 23:08:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:55.728 23:08:28 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:22:55.728 23:08:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:55.728 23:08:28 -- common/autotest_common.sh@10 -- # set +x 00:22:55.728 23:08:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:55.728 23:08:28 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:22:55.728 23:08:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:55.728 23:08:28 -- common/autotest_common.sh@10 -- # set +x 00:22:55.728 23:08:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:55.728 23:08:28 -- target/multiconnection.sh@28 -- # seq 1 11 00:22:55.728 23:08:28 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:55.728 23:08:28 
-- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:22:57.108 23:08:29 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:22:57.108 23:08:29 -- common/autotest_common.sh@1177 -- # local i=0 00:22:57.108 23:08:29 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:57.108 23:08:29 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:57.108 23:08:29 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:59.644 23:08:31 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:59.644 23:08:31 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:59.644 23:08:31 -- common/autotest_common.sh@1186 -- # grep -c SPDK1 00:22:59.644 23:08:31 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:59.644 23:08:31 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:59.644 23:08:31 -- common/autotest_common.sh@1187 -- # return 0 00:22:59.644 23:08:31 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:59.644 23:08:31 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:23:00.582 23:08:32 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:23:00.582 23:08:32 -- common/autotest_common.sh@1177 -- # local i=0 00:23:00.582 23:08:32 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:23:00.582 23:08:32 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:23:00.582 23:08:32 -- common/autotest_common.sh@1184 -- # sleep 2 00:23:02.561 23:08:34 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:23:02.561 23:08:34 -- common/autotest_common.sh@1186 -- # lsblk 
-l -o NAME,SERIAL 00:23:02.561 23:08:34 -- common/autotest_common.sh@1186 -- # grep -c SPDK2 00:23:02.561 23:08:34 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:23:02.561 23:08:34 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:23:02.561 23:08:34 -- common/autotest_common.sh@1187 -- # return 0 00:23:02.561 23:08:34 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:02.561 23:08:34 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:23:03.940 23:08:36 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:23:03.940 23:08:36 -- common/autotest_common.sh@1177 -- # local i=0 00:23:03.940 23:08:36 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:23:03.940 23:08:36 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:23:03.940 23:08:36 -- common/autotest_common.sh@1184 -- # sleep 2 00:23:05.847 23:08:38 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:23:05.847 23:08:38 -- common/autotest_common.sh@1186 -- # grep -c SPDK3 00:23:05.847 23:08:38 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:23:05.847 23:08:38 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:23:05.847 23:08:38 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:23:05.847 23:08:38 -- common/autotest_common.sh@1187 -- # return 0 00:23:05.847 23:08:38 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:05.847 23:08:38 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:23:07.226 23:08:39 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 
00:23:07.226 23:08:39 -- common/autotest_common.sh@1177 -- # local i=0 00:23:07.226 23:08:39 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:23:07.226 23:08:39 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:23:07.226 23:08:39 -- common/autotest_common.sh@1184 -- # sleep 2 00:23:09.762 23:08:41 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:23:09.763 23:08:41 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:23:09.763 23:08:41 -- common/autotest_common.sh@1186 -- # grep -c SPDK4 00:23:09.763 23:08:41 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:23:09.763 23:08:41 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:23:09.763 23:08:41 -- common/autotest_common.sh@1187 -- # return 0 00:23:09.763 23:08:41 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:09.763 23:08:41 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:23:10.701 23:08:43 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:23:10.701 23:08:43 -- common/autotest_common.sh@1177 -- # local i=0 00:23:10.701 23:08:43 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:23:10.701 23:08:43 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:23:10.701 23:08:43 -- common/autotest_common.sh@1184 -- # sleep 2 00:23:13.234 23:08:45 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:23:13.234 23:08:45 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:23:13.234 23:08:45 -- common/autotest_common.sh@1186 -- # grep -c SPDK5 00:23:13.234 23:08:45 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:23:13.234 23:08:45 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:23:13.234 23:08:45 -- 
common/autotest_common.sh@1187 -- # return 0 00:23:13.235 23:08:45 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:13.235 23:08:45 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:23:14.611 23:08:46 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:23:14.611 23:08:46 -- common/autotest_common.sh@1177 -- # local i=0 00:23:14.611 23:08:46 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:23:14.611 23:08:46 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:23:14.611 23:08:46 -- common/autotest_common.sh@1184 -- # sleep 2 00:23:16.517 23:08:48 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:23:16.517 23:08:48 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:23:16.517 23:08:48 -- common/autotest_common.sh@1186 -- # grep -c SPDK6 00:23:16.517 23:08:48 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:23:16.517 23:08:48 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:23:16.517 23:08:48 -- common/autotest_common.sh@1187 -- # return 0 00:23:16.517 23:08:48 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:16.517 23:08:48 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:23:17.896 23:08:50 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:23:17.896 23:08:50 -- common/autotest_common.sh@1177 -- # local i=0 00:23:17.896 23:08:50 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:23:17.896 23:08:50 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:23:17.896 23:08:50 -- common/autotest_common.sh@1184 
-- # sleep 2 00:23:19.803 23:08:52 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:23:19.803 23:08:52 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:23:19.803 23:08:52 -- common/autotest_common.sh@1186 -- # grep -c SPDK7 00:23:19.803 23:08:52 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:23:19.803 23:08:52 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:23:19.803 23:08:52 -- common/autotest_common.sh@1187 -- # return 0 00:23:19.803 23:08:52 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:19.803 23:08:52 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:23:21.708 23:08:53 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:23:21.708 23:08:53 -- common/autotest_common.sh@1177 -- # local i=0 00:23:21.708 23:08:53 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:23:21.708 23:08:53 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:23:21.708 23:08:53 -- common/autotest_common.sh@1184 -- # sleep 2 00:23:23.683 23:08:55 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:23:23.683 23:08:55 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:23:23.683 23:08:55 -- common/autotest_common.sh@1186 -- # grep -c SPDK8 00:23:23.683 23:08:55 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:23:23.683 23:08:55 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:23:23.683 23:08:55 -- common/autotest_common.sh@1187 -- # return 0 00:23:23.683 23:08:55 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:23.683 23:08:55 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 
--hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:23:25.060 23:08:57 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:23:25.060 23:08:57 -- common/autotest_common.sh@1177 -- # local i=0 00:23:25.060 23:08:57 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:23:25.060 23:08:57 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:23:25.060 23:08:57 -- common/autotest_common.sh@1184 -- # sleep 2 00:23:26.964 23:08:59 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:23:26.964 23:08:59 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:23:26.964 23:08:59 -- common/autotest_common.sh@1186 -- # grep -c SPDK9 00:23:26.964 23:08:59 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:23:26.964 23:08:59 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:23:26.964 23:08:59 -- common/autotest_common.sh@1187 -- # return 0 00:23:26.964 23:08:59 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:26.964 23:08:59 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:23:28.870 23:09:00 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:23:28.870 23:09:00 -- common/autotest_common.sh@1177 -- # local i=0 00:23:28.870 23:09:00 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:23:28.870 23:09:00 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:23:28.870 23:09:00 -- common/autotest_common.sh@1184 -- # sleep 2 00:23:30.773 23:09:02 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:23:30.773 23:09:02 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:23:30.773 23:09:02 -- common/autotest_common.sh@1186 -- # grep -c SPDK10 00:23:30.773 23:09:02 -- 
common/autotest_common.sh@1186 -- # nvme_devices=1 00:23:30.773 23:09:02 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:23:30.773 23:09:02 -- common/autotest_common.sh@1187 -- # return 0 00:23:30.773 23:09:02 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:30.773 23:09:02 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:23:32.675 23:09:04 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:23:32.675 23:09:04 -- common/autotest_common.sh@1177 -- # local i=0 00:23:32.675 23:09:04 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:23:32.675 23:09:04 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:23:32.675 23:09:04 -- common/autotest_common.sh@1184 -- # sleep 2 00:23:34.581 23:09:06 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:23:34.581 23:09:06 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:23:34.581 23:09:06 -- common/autotest_common.sh@1186 -- # grep -c SPDK11 00:23:34.581 23:09:06 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:23:34.581 23:09:06 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:23:34.581 23:09:06 -- common/autotest_common.sh@1187 -- # return 0 00:23:34.581 23:09:06 -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:23:34.581 [global] 00:23:34.581 thread=1 00:23:34.581 invalidate=1 00:23:34.581 rw=read 00:23:34.581 time_based=1 00:23:34.581 runtime=10 00:23:34.581 ioengine=libaio 00:23:34.581 direct=1 00:23:34.581 bs=262144 00:23:34.581 iodepth=64 00:23:34.581 norandommap=1 00:23:34.581 numjobs=1 00:23:34.581 00:23:34.581 [job0] 00:23:34.581 filename=/dev/nvme0n1 00:23:34.581 [job1] 
00:23:34.581 filename=/dev/nvme10n1 00:23:34.581 [job2] 00:23:34.581 filename=/dev/nvme1n1 00:23:34.581 [job3] 00:23:34.581 filename=/dev/nvme2n1 00:23:34.581 [job4] 00:23:34.581 filename=/dev/nvme3n1 00:23:34.581 [job5] 00:23:34.581 filename=/dev/nvme4n1 00:23:34.581 [job6] 00:23:34.581 filename=/dev/nvme5n1 00:23:34.581 [job7] 00:23:34.581 filename=/dev/nvme6n1 00:23:34.581 [job8] 00:23:34.581 filename=/dev/nvme7n1 00:23:34.581 [job9] 00:23:34.581 filename=/dev/nvme8n1 00:23:34.581 [job10] 00:23:34.581 filename=/dev/nvme9n1 00:23:34.581 Could not set queue depth (nvme0n1) 00:23:34.581 Could not set queue depth (nvme10n1) 00:23:34.581 Could not set queue depth (nvme1n1) 00:23:34.581 Could not set queue depth (nvme2n1) 00:23:34.581 Could not set queue depth (nvme3n1) 00:23:34.581 Could not set queue depth (nvme4n1) 00:23:34.581 Could not set queue depth (nvme5n1) 00:23:34.581 Could not set queue depth (nvme6n1) 00:23:34.581 Could not set queue depth (nvme7n1) 00:23:34.581 Could not set queue depth (nvme8n1) 00:23:34.581 Could not set queue depth (nvme9n1) 00:23:34.840 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:34.840 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:34.840 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:34.840 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:34.840 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:34.840 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:34.840 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:34.840 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 
256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:34.840 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:34.840 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:34.840 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:34.840 fio-3.35 00:23:34.840 Starting 11 threads 00:23:47.055 00:23:47.055 job0: (groupid=0, jobs=1): err= 0: pid=3285796: Wed Jul 24 23:09:17 2024 00:23:47.055 read: IOPS=816, BW=204MiB/s (214MB/s)(2045MiB/10019msec) 00:23:47.055 slat (usec): min=9, max=105798, avg=943.95, stdev=3117.83 00:23:47.055 clat (msec): min=2, max=177, avg=77.39, stdev=26.71 00:23:47.055 lat (msec): min=2, max=268, avg=78.33, stdev=27.03 00:23:47.055 clat percentiles (msec): 00:23:47.055 | 1.00th=[ 20], 5.00th=[ 32], 10.00th=[ 44], 20.00th=[ 56], 00:23:47.055 | 30.00th=[ 65], 40.00th=[ 70], 50.00th=[ 77], 60.00th=[ 84], 00:23:47.055 | 70.00th=[ 92], 80.00th=[ 100], 90.00th=[ 110], 95.00th=[ 118], 00:23:47.055 | 99.00th=[ 157], 99.50th=[ 169], 99.90th=[ 174], 99.95th=[ 176], 00:23:47.055 | 99.99th=[ 178] 00:23:47.055 bw ( KiB/s): min=142336, max=311808, per=8.36%, avg=207722.35, stdev=45830.10, samples=20 00:23:47.055 iops : min= 556, max= 1218, avg=811.40, stdev=179.02, samples=20 00:23:47.055 lat (msec) : 4=0.09%, 10=0.21%, 20=0.87%, 50=12.89%, 100=67.33% 00:23:47.055 lat (msec) : 250=18.62% 00:23:47.055 cpu : usr=0.40%, sys=3.16%, ctx=2147, majf=0, minf=4097 00:23:47.055 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:23:47.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:47.055 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:47.055 issued rwts: total=8178,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:47.055 latency : target=0, window=0, percentile=100.00%, depth=64 
00:23:47.055 job1: (groupid=0, jobs=1): err= 0: pid=3285799: Wed Jul 24 23:09:17 2024 00:23:47.055 read: IOPS=808, BW=202MiB/s (212MB/s)(2036MiB/10072msec) 00:23:47.055 slat (usec): min=14, max=91418, avg=960.85, stdev=3190.68 00:23:47.055 clat (msec): min=2, max=188, avg=78.11, stdev=30.53 00:23:47.055 lat (msec): min=2, max=220, avg=79.07, stdev=30.87 00:23:47.055 clat percentiles (msec): 00:23:47.055 | 1.00th=[ 9], 5.00th=[ 23], 10.00th=[ 35], 20.00th=[ 59], 00:23:47.055 | 30.00th=[ 68], 40.00th=[ 73], 50.00th=[ 78], 60.00th=[ 84], 00:23:47.055 | 70.00th=[ 91], 80.00th=[ 99], 90.00th=[ 113], 95.00th=[ 136], 00:23:47.055 | 99.00th=[ 161], 99.50th=[ 169], 99.90th=[ 184], 99.95th=[ 184], 00:23:47.055 | 99.99th=[ 190] 00:23:47.055 bw ( KiB/s): min=145920, max=282624, per=8.32%, avg=206853.10, stdev=37525.47, samples=20 00:23:47.055 iops : min= 570, max= 1104, avg=808.00, stdev=146.58, samples=20 00:23:47.055 lat (msec) : 4=0.16%, 10=1.41%, 20=2.74%, 50=11.28%, 100=66.69% 00:23:47.055 lat (msec) : 250=17.72% 00:23:47.055 cpu : usr=0.34%, sys=3.82%, ctx=2032, majf=0, minf=4097 00:23:47.055 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:23:47.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:47.055 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:47.055 issued rwts: total=8144,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:47.055 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:47.055 job2: (groupid=0, jobs=1): err= 0: pid=3285800: Wed Jul 24 23:09:17 2024 00:23:47.055 read: IOPS=837, BW=209MiB/s (220MB/s)(2109MiB/10068msec) 00:23:47.055 slat (usec): min=11, max=50754, avg=1128.31, stdev=3008.32 00:23:47.055 clat (msec): min=5, max=195, avg=75.18, stdev=23.57 00:23:47.055 lat (msec): min=5, max=202, avg=76.31, stdev=23.91 00:23:47.055 clat percentiles (msec): 00:23:47.055 | 1.00th=[ 29], 5.00th=[ 42], 10.00th=[ 50], 20.00th=[ 58], 00:23:47.055 | 30.00th=[ 64], 
40.00th=[ 68], 50.00th=[ 73], 60.00th=[ 78], 00:23:47.055 | 70.00th=[ 83], 80.00th=[ 91], 90.00th=[ 103], 95.00th=[ 117], 00:23:47.055 | 99.00th=[ 157], 99.50th=[ 169], 99.90th=[ 180], 99.95th=[ 180], 00:23:47.055 | 99.99th=[ 197] 00:23:47.055 bw ( KiB/s): min=132096, max=287744, per=8.62%, avg=214275.70, stdev=44718.43, samples=20 00:23:47.055 iops : min= 516, max= 1124, avg=837.00, stdev=174.68, samples=20 00:23:47.055 lat (msec) : 10=0.05%, 20=0.26%, 50=10.20%, 100=78.40%, 250=11.10% 00:23:47.055 cpu : usr=0.38%, sys=4.02%, ctx=1859, majf=0, minf=4097 00:23:47.055 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:23:47.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:47.055 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:47.055 issued rwts: total=8435,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:47.055 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:47.055 job3: (groupid=0, jobs=1): err= 0: pid=3285801: Wed Jul 24 23:09:17 2024 00:23:47.055 read: IOPS=830, BW=208MiB/s (218MB/s)(2092MiB/10069msec) 00:23:47.055 slat (usec): min=11, max=63936, avg=871.95, stdev=2897.13 00:23:47.055 clat (msec): min=2, max=185, avg=76.06, stdev=28.47 00:23:47.055 lat (msec): min=2, max=210, avg=76.94, stdev=28.85 00:23:47.055 clat percentiles (msec): 00:23:47.055 | 1.00th=[ 8], 5.00th=[ 28], 10.00th=[ 44], 20.00th=[ 54], 00:23:47.055 | 30.00th=[ 62], 40.00th=[ 69], 50.00th=[ 75], 60.00th=[ 83], 00:23:47.055 | 70.00th=[ 92], 80.00th=[ 100], 90.00th=[ 110], 95.00th=[ 121], 00:23:47.055 | 99.00th=[ 159], 99.50th=[ 171], 99.90th=[ 176], 99.95th=[ 176], 00:23:47.055 | 99.99th=[ 186] 00:23:47.055 bw ( KiB/s): min=143872, max=306176, per=8.55%, avg=212558.60, stdev=50202.38, samples=20 00:23:47.055 iops : min= 562, max= 1196, avg=830.25, stdev=196.06, samples=20 00:23:47.055 lat (msec) : 4=0.08%, 10=2.14%, 20=1.40%, 50=12.14%, 100=64.73% 00:23:47.055 lat (msec) : 250=19.51% 00:23:47.055 
cpu : usr=0.46%, sys=3.39%, ctx=2262, majf=0, minf=4097 00:23:47.055 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:23:47.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:47.055 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:47.055 issued rwts: total=8367,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:47.055 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:47.055 job4: (groupid=0, jobs=1): err= 0: pid=3285802: Wed Jul 24 23:09:17 2024 00:23:47.055 read: IOPS=953, BW=238MiB/s (250MB/s)(2399MiB/10066msec) 00:23:47.055 slat (usec): min=9, max=65787, avg=627.27, stdev=2429.68 00:23:47.055 clat (msec): min=2, max=170, avg=66.42, stdev=28.67 00:23:47.055 lat (msec): min=2, max=192, avg=67.05, stdev=28.90 00:23:47.055 clat percentiles (msec): 00:23:47.055 | 1.00th=[ 8], 5.00th=[ 21], 10.00th=[ 33], 20.00th=[ 44], 00:23:47.055 | 30.00th=[ 50], 40.00th=[ 56], 50.00th=[ 64], 60.00th=[ 72], 00:23:47.055 | 70.00th=[ 82], 80.00th=[ 91], 90.00th=[ 105], 95.00th=[ 117], 00:23:47.055 | 99.00th=[ 142], 99.50th=[ 146], 99.90th=[ 161], 99.95th=[ 163], 00:23:47.055 | 99.99th=[ 171] 00:23:47.055 bw ( KiB/s): min=138752, max=392942, per=9.82%, avg=244005.50, stdev=64178.70, samples=20 00:23:47.055 iops : min= 542, max= 1534, avg=953.10, stdev=250.58, samples=20 00:23:47.055 lat (msec) : 4=0.14%, 10=1.52%, 20=3.38%, 50=26.20%, 100=56.44% 00:23:47.055 lat (msec) : 250=12.33% 00:23:47.055 cpu : usr=0.50%, sys=3.63%, ctx=2748, majf=0, minf=4097 00:23:47.055 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:23:47.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:47.055 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:47.055 issued rwts: total=9596,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:47.055 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:47.055 job5: (groupid=0, jobs=1): err= 0: 
pid=3285809: Wed Jul 24 23:09:17 2024 00:23:47.055 read: IOPS=1015, BW=254MiB/s (266MB/s)(2561MiB/10087msec) 00:23:47.055 slat (usec): min=8, max=62922, avg=752.79, stdev=2425.50 00:23:47.055 clat (msec): min=2, max=185, avg=62.20, stdev=28.57 00:23:47.055 lat (msec): min=3, max=185, avg=62.96, stdev=28.91 00:23:47.055 clat percentiles (msec): 00:23:47.055 | 1.00th=[ 11], 5.00th=[ 26], 10.00th=[ 29], 20.00th=[ 38], 00:23:47.055 | 30.00th=[ 46], 40.00th=[ 53], 50.00th=[ 59], 60.00th=[ 66], 00:23:47.055 | 70.00th=[ 72], 80.00th=[ 85], 90.00th=[ 99], 95.00th=[ 115], 00:23:47.055 | 99.00th=[ 155], 99.50th=[ 161], 99.90th=[ 176], 99.95th=[ 180], 00:23:47.055 | 99.99th=[ 186] 00:23:47.055 bw ( KiB/s): min=119296, max=435200, per=10.48%, avg=260576.30, stdev=81691.39, samples=20 00:23:47.055 iops : min= 466, max= 1700, avg=1017.80, stdev=319.12, samples=20 00:23:47.055 lat (msec) : 4=0.05%, 10=0.80%, 20=1.80%, 50=34.21%, 100=53.93% 00:23:47.055 lat (msec) : 250=9.21% 00:23:47.055 cpu : usr=0.52%, sys=4.08%, ctx=2540, majf=0, minf=4097 00:23:47.055 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:23:47.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:47.055 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:47.055 issued rwts: total=10242,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:47.055 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:47.055 job6: (groupid=0, jobs=1): err= 0: pid=3285810: Wed Jul 24 23:09:17 2024 00:23:47.055 read: IOPS=899, BW=225MiB/s (236MB/s)(2267MiB/10083msec) 00:23:47.055 slat (usec): min=9, max=109417, avg=908.71, stdev=2968.90 00:23:47.055 clat (msec): min=2, max=185, avg=70.16, stdev=27.50 00:23:47.055 lat (msec): min=2, max=202, avg=71.07, stdev=27.80 00:23:47.055 clat percentiles (msec): 00:23:47.055 | 1.00th=[ 11], 5.00th=[ 26], 10.00th=[ 43], 20.00th=[ 53], 00:23:47.055 | 30.00th=[ 58], 40.00th=[ 63], 50.00th=[ 67], 60.00th=[ 71], 
00:23:47.055 | 70.00th=[ 79], 80.00th=[ 89], 90.00th=[ 106], 95.00th=[ 125], 00:23:47.055 | 99.00th=[ 155], 99.50th=[ 169], 99.90th=[ 180], 99.95th=[ 184], 00:23:47.055 | 99.99th=[ 186] 00:23:47.055 bw ( KiB/s): min=119296, max=312832, per=9.27%, avg=230530.90, stdev=52531.68, samples=20 00:23:47.055 iops : min= 466, max= 1222, avg=900.50, stdev=205.20, samples=20 00:23:47.055 lat (msec) : 4=0.07%, 10=0.78%, 20=2.59%, 50=12.99%, 100=71.36% 00:23:47.055 lat (msec) : 250=12.21% 00:23:47.055 cpu : usr=0.49%, sys=3.97%, ctx=2174, majf=0, minf=4097 00:23:47.055 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:23:47.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:47.055 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:47.055 issued rwts: total=9069,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:47.055 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:47.055 job7: (groupid=0, jobs=1): err= 0: pid=3285811: Wed Jul 24 23:09:17 2024 00:23:47.056 read: IOPS=945, BW=236MiB/s (248MB/s)(2369MiB/10021msec) 00:23:47.056 slat (usec): min=11, max=68159, avg=1017.17, stdev=2828.53 00:23:47.056 clat (msec): min=4, max=194, avg=66.61, stdev=25.23 00:23:47.056 lat (msec): min=4, max=194, avg=67.62, stdev=25.58 00:23:47.056 clat percentiles (msec): 00:23:47.056 | 1.00th=[ 20], 5.00th=[ 29], 10.00th=[ 33], 20.00th=[ 43], 00:23:47.056 | 30.00th=[ 55], 40.00th=[ 63], 50.00th=[ 67], 60.00th=[ 71], 00:23:47.056 | 70.00th=[ 78], 80.00th=[ 85], 90.00th=[ 101], 95.00th=[ 114], 00:23:47.056 | 99.00th=[ 132], 99.50th=[ 136], 99.90th=[ 159], 99.95th=[ 165], 00:23:47.056 | 99.99th=[ 194] 00:23:47.056 bw ( KiB/s): min=147968, max=421888, per=9.69%, avg=240898.60, stdev=73388.38, samples=20 00:23:47.056 iops : min= 578, max= 1648, avg=941.00, stdev=286.67, samples=20 00:23:47.056 lat (msec) : 10=0.17%, 20=1.02%, 50=24.77%, 100=63.92%, 250=10.11% 00:23:47.056 cpu : usr=0.51%, sys=3.90%, ctx=1973, 
majf=0, minf=3222 00:23:47.056 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:23:47.056 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:47.056 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:47.056 issued rwts: total=9474,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:47.056 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:47.056 job8: (groupid=0, jobs=1): err= 0: pid=3285812: Wed Jul 24 23:09:17 2024 00:23:47.056 read: IOPS=947, BW=237MiB/s (248MB/s)(2388MiB/10078msec) 00:23:47.056 slat (usec): min=9, max=142884, avg=720.40, stdev=2878.51 00:23:47.056 clat (msec): min=2, max=185, avg=66.73, stdev=28.78 00:23:47.056 lat (msec): min=2, max=189, avg=67.45, stdev=28.99 00:23:47.056 clat percentiles (msec): 00:23:47.056 | 1.00th=[ 8], 5.00th=[ 17], 10.00th=[ 27], 20.00th=[ 43], 00:23:47.056 | 30.00th=[ 56], 40.00th=[ 63], 50.00th=[ 69], 60.00th=[ 73], 00:23:47.056 | 70.00th=[ 80], 80.00th=[ 88], 90.00th=[ 99], 95.00th=[ 113], 00:23:47.056 | 99.00th=[ 146], 99.50th=[ 163], 99.90th=[ 184], 99.95th=[ 184], 00:23:47.056 | 99.99th=[ 186] 00:23:47.056 bw ( KiB/s): min=184832, max=392192, per=9.77%, avg=242891.20, stdev=51435.61, samples=20 00:23:47.056 iops : min= 722, max= 1532, avg=948.75, stdev=200.95, samples=20 00:23:47.056 lat (msec) : 4=0.12%, 10=1.76%, 20=4.75%, 50=18.55%, 100=65.42% 00:23:47.056 lat (msec) : 250=9.40% 00:23:47.056 cpu : usr=0.39%, sys=3.81%, ctx=2524, majf=0, minf=4097 00:23:47.056 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:23:47.056 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:47.056 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:47.056 issued rwts: total=9552,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:47.056 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:47.056 job9: (groupid=0, jobs=1): err= 0: pid=3285813: Wed Jul 24 23:09:17 2024 
00:23:47.056 read: IOPS=905, BW=226MiB/s (237MB/s)(2282MiB/10079msec) 00:23:47.056 slat (usec): min=9, max=39604, avg=566.00, stdev=2119.98 00:23:47.056 clat (usec): min=1655, max=161249, avg=70025.51, stdev=27072.06 00:23:47.056 lat (usec): min=1917, max=161289, avg=70591.51, stdev=27288.40 00:23:47.056 clat percentiles (msec): 00:23:47.056 | 1.00th=[ 8], 5.00th=[ 21], 10.00th=[ 34], 20.00th=[ 50], 00:23:47.056 | 30.00th=[ 58], 40.00th=[ 66], 50.00th=[ 70], 60.00th=[ 77], 00:23:47.056 | 70.00th=[ 83], 80.00th=[ 93], 90.00th=[ 105], 95.00th=[ 117], 00:23:47.056 | 99.00th=[ 132], 99.50th=[ 140], 99.90th=[ 157], 99.95th=[ 159], 00:23:47.056 | 99.99th=[ 161] 00:23:47.056 bw ( KiB/s): min=168960, max=381440, per=9.33%, avg=232014.15, stdev=49129.63, samples=20 00:23:47.056 iops : min= 660, max= 1490, avg=906.30, stdev=191.91, samples=20 00:23:47.056 lat (msec) : 2=0.01%, 4=0.32%, 10=1.33%, 20=3.19%, 50=15.33% 00:23:47.056 lat (msec) : 100=66.32%, 250=13.51% 00:23:47.056 cpu : usr=0.43%, sys=3.49%, ctx=2919, majf=0, minf=4097 00:23:47.056 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:23:47.056 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:47.056 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:47.056 issued rwts: total=9127,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:47.056 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:47.056 job10: (groupid=0, jobs=1): err= 0: pid=3285814: Wed Jul 24 23:09:17 2024 00:23:47.056 read: IOPS=768, BW=192MiB/s (201MB/s)(1938MiB/10085msec) 00:23:47.056 slat (usec): min=9, max=55084, avg=798.16, stdev=2997.13 00:23:47.056 clat (msec): min=5, max=231, avg=82.38, stdev=30.47 00:23:47.056 lat (msec): min=5, max=231, avg=83.18, stdev=30.77 00:23:47.056 clat percentiles (msec): 00:23:47.056 | 1.00th=[ 10], 5.00th=[ 28], 10.00th=[ 39], 20.00th=[ 63], 00:23:47.056 | 30.00th=[ 71], 40.00th=[ 77], 50.00th=[ 83], 60.00th=[ 90], 00:23:47.056 | 
70.00th=[ 97], 80.00th=[ 106], 90.00th=[ 118], 95.00th=[ 133], 00:23:47.056 | 99.00th=[ 157], 99.50th=[ 174], 99.90th=[ 182], 99.95th=[ 184], 00:23:47.056 | 99.99th=[ 232] 00:23:47.056 bw ( KiB/s): min=117248, max=304128, per=7.92%, avg=196787.20, stdev=41353.65, samples=20 00:23:47.056 iops : min= 458, max= 1188, avg=768.70, stdev=161.54, samples=20 00:23:47.056 lat (msec) : 10=1.03%, 20=1.83%, 50=10.88%, 100=60.43%, 250=25.83% 00:23:47.056 cpu : usr=0.41%, sys=3.06%, ctx=2315, majf=0, minf=4097 00:23:47.056 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:23:47.056 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:47.056 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:47.056 issued rwts: total=7751,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:47.056 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:47.056 00:23:47.056 Run status group 0 (all jobs): 00:23:47.056 READ: bw=2427MiB/s (2545MB/s), 192MiB/s-254MiB/s (201MB/s-266MB/s), io=23.9GiB (25.7GB), run=10019-10087msec 00:23:47.056 00:23:47.056 Disk stats (read/write): 00:23:47.056 nvme0n1: ios=15761/0, merge=0/0, ticks=1217990/0, in_queue=1217990, util=95.89% 00:23:47.056 nvme10n1: ios=16207/0, merge=0/0, ticks=1244682/0, in_queue=1244682, util=96.30% 00:23:47.056 nvme1n1: ios=16773/0, merge=0/0, ticks=1236410/0, in_queue=1236410, util=96.74% 00:23:47.056 nvme2n1: ios=16637/0, merge=0/0, ticks=1244968/0, in_queue=1244968, util=96.97% 00:23:47.056 nvme3n1: ios=19138/0, merge=0/0, ticks=1252416/0, in_queue=1252416, util=97.08% 00:23:47.056 nvme4n1: ios=20388/0, merge=0/0, ticks=1245143/0, in_queue=1245143, util=97.65% 00:23:47.056 nvme5n1: ios=18086/0, merge=0/0, ticks=1246466/0, in_queue=1246466, util=97.94% 00:23:47.056 nvme6n1: ios=18458/0, merge=0/0, ticks=1210903/0, in_queue=1210903, util=98.09% 00:23:47.056 nvme7n1: ios=19030/0, merge=0/0, ticks=1249107/0, in_queue=1249107, util=98.76% 00:23:47.056 nvme8n1: 
ios=18195/0, merge=0/0, ticks=1256135/0, in_queue=1256135, util=99.06% 00:23:47.056 nvme9n1: ios=15393/0, merge=0/0, ticks=1245624/0, in_queue=1245624, util=99.25% 00:23:47.056 23:09:17 -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:23:47.056 [global] 00:23:47.056 thread=1 00:23:47.056 invalidate=1 00:23:47.056 rw=randwrite 00:23:47.056 time_based=1 00:23:47.056 runtime=10 00:23:47.056 ioengine=libaio 00:23:47.056 direct=1 00:23:47.056 bs=262144 00:23:47.056 iodepth=64 00:23:47.056 norandommap=1 00:23:47.056 numjobs=1 00:23:47.056 00:23:47.056 [job0] 00:23:47.056 filename=/dev/nvme0n1 00:23:47.056 [job1] 00:23:47.056 filename=/dev/nvme10n1 00:23:47.056 [job2] 00:23:47.056 filename=/dev/nvme1n1 00:23:47.056 [job3] 00:23:47.056 filename=/dev/nvme2n1 00:23:47.056 [job4] 00:23:47.056 filename=/dev/nvme3n1 00:23:47.056 [job5] 00:23:47.056 filename=/dev/nvme4n1 00:23:47.056 [job6] 00:23:47.056 filename=/dev/nvme5n1 00:23:47.056 [job7] 00:23:47.056 filename=/dev/nvme6n1 00:23:47.056 [job8] 00:23:47.056 filename=/dev/nvme7n1 00:23:47.056 [job9] 00:23:47.056 filename=/dev/nvme8n1 00:23:47.056 [job10] 00:23:47.056 filename=/dev/nvme9n1 00:23:47.056 Could not set queue depth (nvme0n1) 00:23:47.056 Could not set queue depth (nvme10n1) 00:23:47.056 Could not set queue depth (nvme1n1) 00:23:47.056 Could not set queue depth (nvme2n1) 00:23:47.056 Could not set queue depth (nvme3n1) 00:23:47.056 Could not set queue depth (nvme4n1) 00:23:47.056 Could not set queue depth (nvme5n1) 00:23:47.056 Could not set queue depth (nvme6n1) 00:23:47.056 Could not set queue depth (nvme7n1) 00:23:47.056 Could not set queue depth (nvme8n1) 00:23:47.056 Could not set queue depth (nvme9n1) 00:23:47.056 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:47.056 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, 
(T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:47.056 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:47.056 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:47.056 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:47.056 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:47.056 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:47.056 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:47.056 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:47.056 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:47.056 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:47.056 fio-3.35 00:23:47.056 Starting 11 threads 00:23:57.065 00:23:57.065 job0: (groupid=0, jobs=1): err= 0: pid=3287531: Wed Jul 24 23:09:29 2024 00:23:57.065 write: IOPS=652, BW=163MiB/s (171MB/s)(1648MiB/10102msec); 0 zone resets 00:23:57.065 slat (usec): min=25, max=62184, avg=1258.71, stdev=2966.71 00:23:57.065 clat (msec): min=3, max=218, avg=96.76, stdev=42.46 00:23:57.065 lat (msec): min=3, max=218, avg=98.02, stdev=43.01 00:23:57.065 clat percentiles (msec): 00:23:57.065 | 1.00th=[ 22], 5.00th=[ 38], 10.00th=[ 40], 20.00th=[ 43], 00:23:57.065 | 30.00th=[ 74], 40.00th=[ 90], 50.00th=[ 97], 60.00th=[ 104], 00:23:57.065 | 70.00th=[ 127], 80.00th=[ 140], 90.00th=[ 153], 95.00th=[ 163], 00:23:57.065 | 99.00th=[ 184], 99.50th=[ 201], 99.90th=[ 213], 99.95th=[ 213], 00:23:57.065 
| 99.99th=[ 220] 00:23:57.065 bw ( KiB/s): min=108032, max=377344, per=9.32%, avg=167142.40, stdev=68604.88, samples=20 00:23:57.065 iops : min= 422, max= 1474, avg=652.90, stdev=267.99, samples=20 00:23:57.065 lat (msec) : 4=0.02%, 10=0.35%, 20=0.52%, 50=21.48%, 100=35.82% 00:23:57.065 lat (msec) : 250=41.82% 00:23:57.065 cpu : usr=1.56%, sys=2.12%, ctx=2709, majf=0, minf=1 00:23:57.065 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:23:57.065 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:57.065 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:57.065 issued rwts: total=0,6592,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:57.065 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:57.065 job1: (groupid=0, jobs=1): err= 0: pid=3287535: Wed Jul 24 23:09:29 2024 00:23:57.065 write: IOPS=645, BW=161MiB/s (169MB/s)(1628MiB/10086msec); 0 zone resets 00:23:57.065 slat (usec): min=22, max=54426, avg=1125.70, stdev=2744.64 00:23:57.065 clat (usec): min=1373, max=201670, avg=97988.56, stdev=36992.10 00:23:57.065 lat (usec): min=1443, max=204249, avg=99114.27, stdev=37486.55 00:23:57.065 clat percentiles (msec): 00:23:57.065 | 1.00th=[ 7], 5.00th=[ 26], 10.00th=[ 47], 20.00th=[ 71], 00:23:57.065 | 30.00th=[ 84], 40.00th=[ 94], 50.00th=[ 99], 60.00th=[ 106], 00:23:57.065 | 70.00th=[ 120], 80.00th=[ 131], 90.00th=[ 146], 95.00th=[ 155], 00:23:57.065 | 99.00th=[ 176], 99.50th=[ 182], 99.90th=[ 190], 99.95th=[ 194], 00:23:57.065 | 99.99th=[ 203] 00:23:57.065 bw ( KiB/s): min=106496, max=230861, per=9.20%, avg=165101.05, stdev=33121.59, samples=20 00:23:57.065 iops : min= 416, max= 901, avg=644.80, stdev=129.23, samples=20 00:23:57.065 lat (msec) : 2=0.08%, 4=0.38%, 10=1.35%, 20=2.01%, 50=6.82% 00:23:57.065 lat (msec) : 100=41.84%, 250=47.51% 00:23:57.065 cpu : usr=1.57%, sys=2.24%, ctx=3317, majf=0, minf=1 00:23:57.065 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, 
>=64=99.0% 00:23:57.065 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:57.065 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:57.065 issued rwts: total=0,6510,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:57.065 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:57.065 job2: (groupid=0, jobs=1): err= 0: pid=3287547: Wed Jul 24 23:09:29 2024 00:23:57.065 write: IOPS=607, BW=152MiB/s (159MB/s)(1528MiB/10065msec); 0 zone resets 00:23:57.065 slat (usec): min=23, max=49441, avg=1511.19, stdev=3236.41 00:23:57.065 clat (msec): min=2, max=209, avg=103.88, stdev=42.01 00:23:57.065 lat (msec): min=4, max=209, avg=105.39, stdev=42.62 00:23:57.065 clat percentiles (msec): 00:23:57.065 | 1.00th=[ 13], 5.00th=[ 29], 10.00th=[ 53], 20.00th=[ 70], 00:23:57.065 | 30.00th=[ 77], 40.00th=[ 89], 50.00th=[ 101], 60.00th=[ 120], 00:23:57.065 | 70.00th=[ 132], 80.00th=[ 144], 90.00th=[ 157], 95.00th=[ 174], 00:23:57.065 | 99.00th=[ 194], 99.50th=[ 203], 99.90th=[ 209], 99.95th=[ 209], 00:23:57.065 | 99.99th=[ 209] 00:23:57.065 bw ( KiB/s): min=98304, max=227328, per=8.63%, avg=154803.20, stdev=44586.96, samples=20 00:23:57.065 iops : min= 384, max= 888, avg=604.70, stdev=174.17, samples=20 00:23:57.065 lat (msec) : 4=0.02%, 10=0.49%, 20=2.75%, 50=6.35%, 100=40.38% 00:23:57.065 lat (msec) : 250=50.02% 00:23:57.065 cpu : usr=1.91%, sys=2.12%, ctx=2293, majf=0, minf=1 00:23:57.065 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:23:57.065 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:57.065 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:57.065 issued rwts: total=0,6110,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:57.065 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:57.065 job3: (groupid=0, jobs=1): err= 0: pid=3287548: Wed Jul 24 23:09:29 2024 00:23:57.065 write: IOPS=694, BW=174MiB/s (182MB/s)(1754MiB/10100msec); 0 
zone resets 00:23:57.065 slat (usec): min=24, max=37986, avg=1229.45, stdev=2886.23 00:23:57.065 clat (usec): min=1758, max=210451, avg=90833.39, stdev=46208.84 00:23:57.065 lat (usec): min=1827, max=210509, avg=92062.84, stdev=46871.75 00:23:57.065 clat percentiles (msec): 00:23:57.065 | 1.00th=[ 11], 5.00th=[ 29], 10.00th=[ 38], 20.00th=[ 41], 00:23:57.065 | 30.00th=[ 51], 40.00th=[ 71], 50.00th=[ 94], 60.00th=[ 104], 00:23:57.065 | 70.00th=[ 126], 80.00th=[ 136], 90.00th=[ 150], 95.00th=[ 167], 00:23:57.065 | 99.00th=[ 190], 99.50th=[ 192], 99.90th=[ 201], 99.95th=[ 205], 00:23:57.065 | 99.99th=[ 211] 00:23:57.065 bw ( KiB/s): min=102400, max=401920, per=9.92%, avg=177971.20, stdev=75290.74, samples=20 00:23:57.066 iops : min= 400, max= 1570, avg=695.20, stdev=294.10, samples=20 00:23:57.066 lat (msec) : 2=0.01%, 4=0.09%, 10=0.77%, 20=1.94%, 50=27.16% 00:23:57.066 lat (msec) : 100=28.50%, 250=41.54% 00:23:57.066 cpu : usr=1.73%, sys=2.38%, ctx=2842, majf=0, minf=1 00:23:57.066 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:23:57.066 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:57.066 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:57.066 issued rwts: total=0,7015,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:57.066 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:57.066 job4: (groupid=0, jobs=1): err= 0: pid=3287549: Wed Jul 24 23:09:29 2024 00:23:57.066 write: IOPS=648, BW=162MiB/s (170MB/s)(1637MiB/10091msec); 0 zone resets 00:23:57.066 slat (usec): min=33, max=20613, avg=1328.89, stdev=2648.90 00:23:57.066 clat (msec): min=5, max=188, avg=97.30, stdev=29.18 00:23:57.066 lat (msec): min=5, max=188, avg=98.63, stdev=29.59 00:23:57.066 clat percentiles (msec): 00:23:57.066 | 1.00th=[ 23], 5.00th=[ 45], 10.00th=[ 60], 20.00th=[ 72], 00:23:57.066 | 30.00th=[ 90], 40.00th=[ 99], 50.00th=[ 102], 60.00th=[ 104], 00:23:57.066 | 70.00th=[ 107], 80.00th=[ 114], 
90.00th=[ 131], 95.00th=[ 153], 00:23:57.066 | 99.00th=[ 174], 99.50th=[ 182], 99.90th=[ 186], 99.95th=[ 186], 00:23:57.066 | 99.99th=[ 190] 00:23:57.066 bw ( KiB/s): min=116736, max=257024, per=9.25%, avg=165964.80, stdev=35152.12, samples=20 00:23:57.066 iops : min= 456, max= 1004, avg=648.30, stdev=137.31, samples=20 00:23:57.066 lat (msec) : 10=0.18%, 20=0.49%, 50=5.93%, 100=39.66%, 250=53.74% 00:23:57.066 cpu : usr=2.32%, sys=2.04%, ctx=2484, majf=0, minf=1 00:23:57.066 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:23:57.066 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:57.066 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:57.066 issued rwts: total=0,6546,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:57.066 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:57.066 job5: (groupid=0, jobs=1): err= 0: pid=3287550: Wed Jul 24 23:09:29 2024 00:23:57.066 write: IOPS=704, BW=176MiB/s (185MB/s)(1775MiB/10085msec); 0 zone resets 00:23:57.066 slat (usec): min=19, max=34693, avg=1270.04, stdev=2610.46 00:23:57.066 clat (usec): min=1644, max=186084, avg=89587.65, stdev=36820.81 00:23:57.066 lat (usec): min=1726, max=188574, avg=90857.70, stdev=37371.52 00:23:57.066 clat percentiles (msec): 00:23:57.066 | 1.00th=[ 10], 5.00th=[ 23], 10.00th=[ 40], 20.00th=[ 57], 00:23:57.066 | 30.00th=[ 72], 40.00th=[ 88], 50.00th=[ 96], 60.00th=[ 101], 00:23:57.066 | 70.00th=[ 105], 80.00th=[ 115], 90.00th=[ 138], 95.00th=[ 155], 00:23:57.066 | 99.00th=[ 176], 99.50th=[ 180], 99.90th=[ 182], 99.95th=[ 184], 00:23:57.066 | 99.99th=[ 186] 00:23:57.066 bw ( KiB/s): min=118784, max=343552, per=10.04%, avg=180172.80, stdev=60931.26, samples=20 00:23:57.066 iops : min= 464, max= 1342, avg=703.80, stdev=238.01, samples=20 00:23:57.066 lat (msec) : 2=0.03%, 4=0.07%, 10=1.13%, 20=3.06%, 50=14.05% 00:23:57.066 lat (msec) : 100=40.45%, 250=41.22% 00:23:57.066 cpu : usr=1.74%, sys=2.37%, ctx=2705, 
majf=0, minf=1 00:23:57.066 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:23:57.066 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:57.066 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:57.066 issued rwts: total=0,7101,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:57.066 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:57.066 job6: (groupid=0, jobs=1): err= 0: pid=3287551: Wed Jul 24 23:09:29 2024 00:23:57.066 write: IOPS=611, BW=153MiB/s (160MB/s)(1542MiB/10092msec); 0 zone resets 00:23:57.066 slat (usec): min=21, max=24342, avg=1301.46, stdev=2805.19 00:23:57.066 clat (usec): min=1971, max=195236, avg=103386.91, stdev=33936.03 00:23:57.066 lat (msec): min=2, max=195, avg=104.69, stdev=34.44 00:23:57.066 clat percentiles (msec): 00:23:57.066 | 1.00th=[ 14], 5.00th=[ 34], 10.00th=[ 53], 20.00th=[ 81], 00:23:57.066 | 30.00th=[ 94], 40.00th=[ 100], 50.00th=[ 104], 60.00th=[ 114], 00:23:57.066 | 70.00th=[ 127], 80.00th=[ 134], 90.00th=[ 144], 95.00th=[ 150], 00:23:57.066 | 99.00th=[ 163], 99.50th=[ 171], 99.90th=[ 184], 99.95th=[ 190], 00:23:57.066 | 99.99th=[ 197] 00:23:57.066 bw ( KiB/s): min=110592, max=251904, per=8.71%, avg=156262.40, stdev=35069.33, samples=20 00:23:57.066 iops : min= 432, max= 984, avg=610.40, stdev=136.99, samples=20 00:23:57.066 lat (msec) : 2=0.02%, 4=0.03%, 10=0.39%, 20=1.61%, 50=7.31% 00:23:57.066 lat (msec) : 100=33.65%, 250=57.00% 00:23:57.066 cpu : usr=1.50%, sys=2.11%, ctx=2808, majf=0, minf=1 00:23:57.066 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:23:57.066 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:57.066 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:57.066 issued rwts: total=0,6167,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:57.066 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:57.066 job7: (groupid=0, jobs=1): err= 
0: pid=3287552: Wed Jul 24 23:09:29 2024 00:23:57.066 write: IOPS=687, BW=172MiB/s (180MB/s)(1736MiB/10099msec); 0 zone resets 00:23:57.066 slat (usec): min=22, max=12974, avg=1206.08, stdev=2520.61 00:23:57.066 clat (msec): min=2, max=220, avg=91.84, stdev=31.80 00:23:57.066 lat (msec): min=2, max=220, avg=93.05, stdev=32.28 00:23:57.066 clat percentiles (msec): 00:23:57.066 | 1.00th=[ 17], 5.00th=[ 36], 10.00th=[ 44], 20.00th=[ 68], 00:23:57.066 | 30.00th=[ 74], 40.00th=[ 89], 50.00th=[ 97], 60.00th=[ 101], 00:23:57.066 | 70.00th=[ 109], 80.00th=[ 121], 90.00th=[ 131], 95.00th=[ 140], 00:23:57.066 | 99.00th=[ 150], 99.50th=[ 157], 99.90th=[ 209], 99.95th=[ 215], 00:23:57.066 | 99.99th=[ 222] 00:23:57.066 bw ( KiB/s): min=118784, max=261632, per=9.82%, avg=176128.00, stdev=41055.89, samples=20 00:23:57.066 iops : min= 464, max= 1022, avg=688.00, stdev=160.37, samples=20 00:23:57.066 lat (msec) : 4=0.06%, 10=0.32%, 20=0.98%, 50=11.34%, 100=45.53% 00:23:57.066 lat (msec) : 250=41.78% 00:23:57.066 cpu : usr=1.71%, sys=2.17%, ctx=2899, majf=0, minf=1 00:23:57.066 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:23:57.066 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:57.066 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:57.066 issued rwts: total=0,6943,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:57.066 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:57.066 job8: (groupid=0, jobs=1): err= 0: pid=3287559: Wed Jul 24 23:09:29 2024 00:23:57.066 write: IOPS=673, BW=168MiB/s (177MB/s)(1697MiB/10070msec); 0 zone resets 00:23:57.066 slat (usec): min=28, max=30479, avg=1147.60, stdev=2706.20 00:23:57.066 clat (msec): min=2, max=199, avg=93.79, stdev=41.45 00:23:57.066 lat (msec): min=2, max=199, avg=94.94, stdev=42.12 00:23:57.066 clat percentiles (msec): 00:23:57.066 | 1.00th=[ 12], 5.00th=[ 23], 10.00th=[ 35], 20.00th=[ 56], 00:23:57.066 | 30.00th=[ 71], 40.00th=[ 90], 
50.00th=[ 97], 60.00th=[ 103], 00:23:57.066 | 70.00th=[ 116], 80.00th=[ 132], 90.00th=[ 148], 95.00th=[ 161], 00:23:57.066 | 99.00th=[ 186], 99.50th=[ 190], 99.90th=[ 199], 99.95th=[ 199], 00:23:57.066 | 99.99th=[ 201] 00:23:57.066 bw ( KiB/s): min=96256, max=315904, per=9.59%, avg=172108.80, stdev=57904.34, samples=20 00:23:57.066 iops : min= 376, max= 1234, avg=672.30, stdev=226.19, samples=20 00:23:57.066 lat (msec) : 4=0.15%, 10=0.53%, 20=3.27%, 50=14.07%, 100=39.08% 00:23:57.066 lat (msec) : 250=42.90% 00:23:57.066 cpu : usr=1.49%, sys=2.37%, ctx=3358, majf=0, minf=1 00:23:57.066 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:23:57.066 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:57.066 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:57.066 issued rwts: total=0,6786,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:57.066 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:57.066 job9: (groupid=0, jobs=1): err= 0: pid=3287560: Wed Jul 24 23:09:29 2024 00:23:57.066 write: IOPS=558, BW=140MiB/s (146MB/s)(1410MiB/10094msec); 0 zone resets 00:23:57.066 slat (usec): min=27, max=35114, avg=1668.23, stdev=3133.78 00:23:57.066 clat (msec): min=5, max=199, avg=112.83, stdev=29.49 00:23:57.066 lat (msec): min=5, max=199, avg=114.50, stdev=29.87 00:23:57.066 clat percentiles (msec): 00:23:57.066 | 1.00th=[ 16], 5.00th=[ 53], 10.00th=[ 78], 20.00th=[ 97], 00:23:57.066 | 30.00th=[ 103], 40.00th=[ 108], 50.00th=[ 118], 60.00th=[ 125], 00:23:57.066 | 70.00th=[ 130], 80.00th=[ 136], 90.00th=[ 144], 95.00th=[ 153], 00:23:57.066 | 99.00th=[ 180], 99.50th=[ 188], 99.90th=[ 199], 99.95th=[ 199], 00:23:57.066 | 99.99th=[ 199] 00:23:57.066 bw ( KiB/s): min=112640, max=211456, per=7.96%, avg=142771.20, stdev=26856.33, samples=20 00:23:57.066 iops : min= 440, max= 826, avg=557.70, stdev=104.91, samples=20 00:23:57.066 lat (msec) : 10=0.20%, 20=1.33%, 50=3.23%, 100=21.54%, 250=73.71% 
00:23:57.066 cpu : usr=1.85%, sys=1.76%, ctx=1881, majf=0, minf=1 00:23:57.066 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:23:57.066 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:57.066 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:57.066 issued rwts: total=0,5640,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:57.066 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:57.067 job10: (groupid=0, jobs=1): err= 0: pid=3287561: Wed Jul 24 23:09:29 2024 00:23:57.067 write: IOPS=536, BW=134MiB/s (141MB/s)(1349MiB/10063msec); 0 zone resets 00:23:57.067 slat (usec): min=29, max=47900, avg=1776.85, stdev=3408.72 00:23:57.067 clat (msec): min=23, max=215, avg=117.54, stdev=33.12 00:23:57.067 lat (msec): min=23, max=215, avg=119.32, stdev=33.50 00:23:57.067 clat percentiles (msec): 00:23:57.067 | 1.00th=[ 64], 5.00th=[ 68], 10.00th=[ 71], 20.00th=[ 80], 00:23:57.067 | 30.00th=[ 97], 40.00th=[ 114], 50.00th=[ 125], 60.00th=[ 131], 00:23:57.067 | 70.00th=[ 138], 80.00th=[ 146], 90.00th=[ 155], 95.00th=[ 167], 00:23:57.067 | 99.00th=[ 201], 99.50th=[ 209], 99.90th=[ 213], 99.95th=[ 215], 00:23:57.067 | 99.99th=[ 215] 00:23:57.067 bw ( KiB/s): min=94208, max=227328, per=7.61%, avg=136521.70, stdev=36362.69, samples=20 00:23:57.067 iops : min= 368, max= 888, avg=533.20, stdev=142.11, samples=20 00:23:57.067 lat (msec) : 50=0.48%, 100=30.87%, 250=68.64% 00:23:57.067 cpu : usr=1.73%, sys=1.82%, ctx=1590, majf=0, minf=1 00:23:57.067 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:23:57.067 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:57.067 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:57.067 issued rwts: total=0,5396,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:57.067 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:57.067 00:23:57.067 Run status group 0 (all jobs): 
00:23:57.067 WRITE: bw=1752MiB/s (1837MB/s), 134MiB/s-176MiB/s (141MB/s-185MB/s), io=17.3GiB (18.6GB), run=10063-10102msec 00:23:57.067 00:23:57.067 Disk stats (read/write): 00:23:57.067 nvme0n1: ios=49/13077, merge=0/0, ticks=1934/1221881, in_queue=1223815, util=99.67% 00:23:57.067 nvme10n1: ios=46/12906, merge=0/0, ticks=2057/1231518, in_queue=1233575, util=99.96% 00:23:57.067 nvme1n1: ios=0/12123, merge=0/0, ticks=0/1223832, in_queue=1223832, util=96.50% 00:23:57.067 nvme2n1: ios=48/13920, merge=0/0, ticks=1292/1220201, in_queue=1221493, util=100.00% 00:23:57.067 nvme3n1: ios=5/12992, merge=0/0, ticks=210/1224943, in_queue=1225153, util=96.91% 00:23:57.067 nvme4n1: ios=44/14101, merge=0/0, ticks=257/1224099, in_queue=1224356, util=99.61% 00:23:57.067 nvme5n1: ios=0/12234, merge=0/0, ticks=0/1228305, in_queue=1228305, util=97.66% 00:23:57.067 nvme6n1: ios=0/13780, merge=0/0, ticks=0/1224704, in_queue=1224704, util=97.88% 00:23:57.067 nvme7n1: ios=0/13477, merge=0/0, ticks=0/1230366, in_queue=1230366, util=98.51% 00:23:57.067 nvme8n1: ios=0/11176, merge=0/0, ticks=0/1222023, in_queue=1222023, util=98.82% 00:23:57.067 nvme9n1: ios=0/10678, merge=0/0, ticks=0/1221226, in_queue=1221226, util=98.96% 00:23:57.067 23:09:29 -- target/multiconnection.sh@36 -- # sync 00:23:57.067 23:09:29 -- target/multiconnection.sh@37 -- # seq 1 11 00:23:57.067 23:09:29 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:57.067 23:09:29 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:23:57.067 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:57.067 23:09:29 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:23:57.067 23:09:29 -- common/autotest_common.sh@1198 -- # local i=0 00:23:57.067 23:09:29 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:23:57.067 23:09:29 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK1 00:23:57.067 23:09:29 -- common/autotest_common.sh@1206 -- # 
lsblk -l -o NAME,SERIAL 00:23:57.067 23:09:29 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK1 00:23:57.067 23:09:29 -- common/autotest_common.sh@1210 -- # return 0 00:23:57.067 23:09:29 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:57.067 23:09:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:57.067 23:09:29 -- common/autotest_common.sh@10 -- # set +x 00:23:57.067 23:09:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:57.067 23:09:29 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:57.067 23:09:29 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:23:57.636 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:23:57.636 23:09:29 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:23:57.636 23:09:29 -- common/autotest_common.sh@1198 -- # local i=0 00:23:57.636 23:09:29 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:23:57.636 23:09:29 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK2 00:23:57.637 23:09:29 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK2 00:23:57.637 23:09:29 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:57.637 23:09:29 -- common/autotest_common.sh@1210 -- # return 0 00:23:57.637 23:09:29 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:57.637 23:09:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:57.637 23:09:29 -- common/autotest_common.sh@10 -- # set +x 00:23:57.637 23:09:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:57.637 23:09:29 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:57.637 23:09:29 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:23:57.896 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:23:57.896 23:09:30 -- target/multiconnection.sh@39 -- # 
waitforserial_disconnect SPDK3 00:23:57.896 23:09:30 -- common/autotest_common.sh@1198 -- # local i=0 00:23:57.896 23:09:30 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:23:57.896 23:09:30 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK3 00:23:57.896 23:09:30 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK3 00:23:57.896 23:09:30 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:57.896 23:09:30 -- common/autotest_common.sh@1210 -- # return 0 00:23:57.896 23:09:30 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:23:57.896 23:09:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:57.896 23:09:30 -- common/autotest_common.sh@10 -- # set +x 00:23:57.896 23:09:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:57.896 23:09:30 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:57.896 23:09:30 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:23:58.155 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:23:58.155 23:09:30 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:23:58.155 23:09:30 -- common/autotest_common.sh@1198 -- # local i=0 00:23:58.155 23:09:30 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:23:58.155 23:09:30 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK4 00:23:58.155 23:09:30 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:58.155 23:09:30 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK4 00:23:58.155 23:09:30 -- common/autotest_common.sh@1210 -- # return 0 00:23:58.155 23:09:30 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:23:58.155 23:09:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:58.155 23:09:30 -- common/autotest_common.sh@10 -- # set +x 00:23:58.155 23:09:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:58.155 23:09:30 
-- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:58.155 23:09:30 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:23:58.414 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:23:58.414 23:09:30 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:23:58.414 23:09:30 -- common/autotest_common.sh@1198 -- # local i=0 00:23:58.414 23:09:30 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:23:58.414 23:09:30 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK5 00:23:58.414 23:09:30 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:58.414 23:09:30 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK5 00:23:58.414 23:09:30 -- common/autotest_common.sh@1210 -- # return 0 00:23:58.414 23:09:30 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:23:58.414 23:09:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:58.414 23:09:30 -- common/autotest_common.sh@10 -- # set +x 00:23:58.414 23:09:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:58.414 23:09:30 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:58.414 23:09:30 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:23:58.677 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:23:58.677 23:09:31 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:23:58.677 23:09:31 -- common/autotest_common.sh@1198 -- # local i=0 00:23:58.677 23:09:31 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:23:58.677 23:09:31 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK6 00:23:58.677 23:09:31 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:58.677 23:09:31 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK6 00:23:58.940 23:09:31 -- common/autotest_common.sh@1210 -- # return 0 00:23:58.940 23:09:31 -- 
target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:23:58.940 23:09:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:58.940 23:09:31 -- common/autotest_common.sh@10 -- # set +x 00:23:58.940 23:09:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:58.940 23:09:31 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:58.940 23:09:31 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:23:58.940 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:23:58.940 23:09:31 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:23:58.940 23:09:31 -- common/autotest_common.sh@1198 -- # local i=0 00:23:58.940 23:09:31 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:23:58.940 23:09:31 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK7 00:23:58.940 23:09:31 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:58.940 23:09:31 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK7 00:23:58.940 23:09:31 -- common/autotest_common.sh@1210 -- # return 0 00:23:58.940 23:09:31 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:23:58.940 23:09:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:58.940 23:09:31 -- common/autotest_common.sh@10 -- # set +x 00:23:58.940 23:09:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:58.940 23:09:31 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:58.940 23:09:31 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:23:59.198 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:23:59.198 23:09:31 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:23:59.198 23:09:31 -- common/autotest_common.sh@1198 -- # local i=0 00:23:59.198 23:09:31 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:23:59.198 23:09:31 -- 
common/autotest_common.sh@1199 -- # grep -q -w SPDK8 00:23:59.198 23:09:31 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK8 00:23:59.198 23:09:31 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:59.198 23:09:31 -- common/autotest_common.sh@1210 -- # return 0 00:23:59.198 23:09:31 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:23:59.198 23:09:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:59.198 23:09:31 -- common/autotest_common.sh@10 -- # set +x 00:23:59.198 23:09:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:59.198 23:09:31 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:59.198 23:09:31 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:23:59.198 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:23:59.198 23:09:31 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:23:59.198 23:09:31 -- common/autotest_common.sh@1198 -- # local i=0 00:23:59.198 23:09:31 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:23:59.198 23:09:31 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK9 00:23:59.198 23:09:31 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:59.198 23:09:31 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK9 00:23:59.198 23:09:31 -- common/autotest_common.sh@1210 -- # return 0 00:23:59.198 23:09:31 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:23:59.198 23:09:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:59.198 23:09:31 -- common/autotest_common.sh@10 -- # set +x 00:23:59.198 23:09:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:59.198 23:09:31 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:59.198 23:09:31 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:23:59.456 
NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:23:59.456 23:09:31 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:23:59.456 23:09:31 -- common/autotest_common.sh@1198 -- # local i=0 00:23:59.456 23:09:31 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:23:59.456 23:09:31 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK10 00:23:59.456 23:09:31 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:59.456 23:09:31 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK10 00:23:59.456 23:09:31 -- common/autotest_common.sh@1210 -- # return 0 00:23:59.456 23:09:31 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:23:59.456 23:09:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:59.456 23:09:31 -- common/autotest_common.sh@10 -- # set +x 00:23:59.456 23:09:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:59.456 23:09:31 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:59.456 23:09:31 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:23:59.456 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:23:59.456 23:09:31 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:23:59.456 23:09:31 -- common/autotest_common.sh@1198 -- # local i=0 00:23:59.456 23:09:31 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:23:59.456 23:09:31 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK11 00:23:59.456 23:09:31 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK11 00:23:59.456 23:09:31 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:59.456 23:09:31 -- common/autotest_common.sh@1210 -- # return 0 00:23:59.456 23:09:31 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:23:59.456 23:09:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:59.456 23:09:31 -- 
common/autotest_common.sh@10 -- # set +x 00:23:59.714 23:09:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:59.715 23:09:31 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:23:59.715 23:09:31 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:23:59.715 23:09:31 -- target/multiconnection.sh@47 -- # nvmftestfini 00:23:59.715 23:09:31 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:59.715 23:09:31 -- nvmf/common.sh@116 -- # sync 00:23:59.715 23:09:31 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:23:59.715 23:09:31 -- nvmf/common.sh@119 -- # set +e 00:23:59.715 23:09:31 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:59.715 23:09:31 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:23:59.715 rmmod nvme_tcp 00:23:59.715 rmmod nvme_fabrics 00:23:59.715 rmmod nvme_keyring 00:23:59.715 23:09:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:59.715 23:09:31 -- nvmf/common.sh@123 -- # set -e 00:23:59.715 23:09:31 -- nvmf/common.sh@124 -- # return 0 00:23:59.715 23:09:31 -- nvmf/common.sh@477 -- # '[' -n 3278127 ']' 00:23:59.715 23:09:31 -- nvmf/common.sh@478 -- # killprocess 3278127 00:23:59.715 23:09:31 -- common/autotest_common.sh@926 -- # '[' -z 3278127 ']' 00:23:59.715 23:09:31 -- common/autotest_common.sh@930 -- # kill -0 3278127 00:23:59.715 23:09:31 -- common/autotest_common.sh@931 -- # uname 00:23:59.715 23:09:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:59.715 23:09:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3278127 00:23:59.715 23:09:32 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:23:59.715 23:09:32 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:23:59.715 23:09:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3278127' 00:23:59.715 killing process with pid 3278127 00:23:59.715 23:09:32 -- common/autotest_common.sh@945 -- # kill 3278127 00:23:59.715 23:09:32 -- 
common/autotest_common.sh@950 -- # wait 3278127 00:24:00.281 23:09:32 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:00.281 23:09:32 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:00.281 23:09:32 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:00.281 23:09:32 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:00.281 23:09:32 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:00.281 23:09:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:00.281 23:09:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:00.281 23:09:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:02.185 23:09:34 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:24:02.185 00:24:02.185 real 1m14.921s 00:24:02.185 user 4m28.141s 00:24:02.185 sys 0m28.959s 00:24:02.185 23:09:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:02.185 23:09:34 -- common/autotest_common.sh@10 -- # set +x 00:24:02.185 ************************************ 00:24:02.185 END TEST nvmf_multiconnection 00:24:02.185 ************************************ 00:24:02.185 23:09:34 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:24:02.185 23:09:34 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:24:02.185 23:09:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:02.185 23:09:34 -- common/autotest_common.sh@10 -- # set +x 00:24:02.185 ************************************ 00:24:02.185 START TEST nvmf_initiator_timeout 00:24:02.185 ************************************ 00:24:02.185 23:09:34 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:24:02.445 * Looking for test storage... 
00:24:02.445 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:02.445 23:09:34 -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:02.445 23:09:34 -- nvmf/common.sh@7 -- # uname -s 00:24:02.445 23:09:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:02.445 23:09:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:02.445 23:09:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:02.445 23:09:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:02.445 23:09:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:02.445 23:09:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:02.445 23:09:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:02.445 23:09:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:02.445 23:09:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:02.445 23:09:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:02.445 23:09:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:24:02.445 23:09:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:24:02.445 23:09:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:02.445 23:09:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:02.445 23:09:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:02.445 23:09:34 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:02.445 23:09:34 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:02.445 23:09:34 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:02.445 23:09:34 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:02.445 23:09:34 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.445 23:09:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.445 23:09:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.445 23:09:34 -- paths/export.sh@5 -- # export PATH 00:24:02.445 23:09:34 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.445 23:09:34 -- nvmf/common.sh@46 -- # : 0 00:24:02.445 23:09:34 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:02.445 23:09:34 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:02.445 23:09:34 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:02.445 23:09:34 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:02.445 23:09:34 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:02.445 23:09:34 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:02.446 23:09:34 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:02.446 23:09:34 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:02.446 23:09:34 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:02.446 23:09:34 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:02.446 23:09:34 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:24:02.446 23:09:34 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:02.446 23:09:34 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:02.446 23:09:34 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:02.446 23:09:34 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:02.446 23:09:34 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:02.446 23:09:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:02.446 23:09:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:02.446 23:09:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:24:02.446 23:09:34 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:02.446 23:09:34 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:02.446 23:09:34 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:02.446 23:09:34 -- common/autotest_common.sh@10 -- # set +x 00:24:09.019 23:09:41 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:09.019 23:09:41 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:09.019 23:09:41 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:09.019 23:09:41 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:09.019 23:09:41 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:09.019 23:09:41 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:09.019 23:09:41 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:09.019 23:09:41 -- nvmf/common.sh@294 -- # net_devs=() 00:24:09.019 23:09:41 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:09.019 23:09:41 -- nvmf/common.sh@295 -- # e810=() 00:24:09.019 23:09:41 -- nvmf/common.sh@295 -- # local -ga e810 00:24:09.019 23:09:41 -- nvmf/common.sh@296 -- # x722=() 00:24:09.019 23:09:41 -- nvmf/common.sh@296 -- # local -ga x722 00:24:09.019 23:09:41 -- nvmf/common.sh@297 -- # mlx=() 00:24:09.019 23:09:41 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:09.019 23:09:41 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:09.019 23:09:41 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:09.019 23:09:41 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:09.019 23:09:41 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:09.019 23:09:41 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:09.019 23:09:41 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:09.019 23:09:41 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:09.019 23:09:41 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 
00:24:09.019 23:09:41 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:09.019 23:09:41 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:09.019 23:09:41 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:09.019 23:09:41 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:09.019 23:09:41 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:09.019 23:09:41 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:24:09.019 23:09:41 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:24:09.019 23:09:41 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:24:09.019 23:09:41 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:09.019 23:09:41 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:09.019 23:09:41 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:09.019 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:09.019 23:09:41 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:09.019 23:09:41 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:09.019 23:09:41 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:09.019 23:09:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:09.019 23:09:41 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:09.019 23:09:41 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:09.019 23:09:41 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:09.019 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:09.019 23:09:41 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:09.019 23:09:41 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:09.019 23:09:41 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:09.019 23:09:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:09.019 23:09:41 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:09.019 23:09:41 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:09.019 23:09:41 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:24:09.019 
23:09:41 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:24:09.019 23:09:41 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:09.019 23:09:41 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:09.019 23:09:41 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:09.019 23:09:41 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:09.019 23:09:41 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:09.019 Found net devices under 0000:af:00.0: cvl_0_0 00:24:09.019 23:09:41 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:09.019 23:09:41 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:09.019 23:09:41 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:09.019 23:09:41 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:09.019 23:09:41 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:09.019 23:09:41 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:09.019 Found net devices under 0000:af:00.1: cvl_0_1 00:24:09.019 23:09:41 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:09.019 23:09:41 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:09.019 23:09:41 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:09.019 23:09:41 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:09.019 23:09:41 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:24:09.019 23:09:41 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:24:09.019 23:09:41 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:09.019 23:09:41 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:09.019 23:09:41 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:09.019 23:09:41 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:24:09.019 23:09:41 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:09.019 23:09:41 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:09.019 23:09:41 -- 
nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:24:09.019 23:09:41 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:09.020 23:09:41 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:09.020 23:09:41 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:24:09.020 23:09:41 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:24:09.020 23:09:41 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:24:09.020 23:09:41 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:09.020 23:09:41 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:09.020 23:09:41 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:09.020 23:09:41 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:24:09.020 23:09:41 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:09.279 23:09:41 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:09.279 23:09:41 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:09.279 23:09:41 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:24:09.279 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:09.279 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.296 ms 00:24:09.279 00:24:09.279 --- 10.0.0.2 ping statistics --- 00:24:09.279 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:09.279 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:24:09.279 23:09:41 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:09.279 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:09.279 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:24:09.279 00:24:09.279 --- 10.0.0.1 ping statistics --- 00:24:09.279 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:09.279 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:24:09.279 23:09:41 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:09.279 23:09:41 -- nvmf/common.sh@410 -- # return 0 00:24:09.279 23:09:41 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:09.279 23:09:41 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:09.279 23:09:41 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:09.279 23:09:41 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:09.279 23:09:41 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:09.279 23:09:41 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:09.279 23:09:41 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:09.279 23:09:41 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:24:09.279 23:09:41 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:09.279 23:09:41 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:09.279 23:09:41 -- common/autotest_common.sh@10 -- # set +x 00:24:09.279 23:09:41 -- nvmf/common.sh@469 -- # nvmfpid=3293159 00:24:09.279 23:09:41 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:09.279 23:09:41 -- nvmf/common.sh@470 -- # waitforlisten 3293159 00:24:09.279 23:09:41 -- common/autotest_common.sh@819 -- # '[' -z 3293159 ']' 00:24:09.279 23:09:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:09.279 23:09:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:09.279 23:09:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:24:09.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:09.279 23:09:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:09.279 23:09:41 -- common/autotest_common.sh@10 -- # set +x 00:24:09.279 [2024-07-24 23:09:41.617839] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:24:09.279 [2024-07-24 23:09:41.617887] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:09.279 EAL: No free 2048 kB hugepages reported on node 1 00:24:09.279 [2024-07-24 23:09:41.692769] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:09.539 [2024-07-24 23:09:41.729356] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:09.539 [2024-07-24 23:09:41.729473] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:09.539 [2024-07-24 23:09:41.729484] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:09.539 [2024-07-24 23:09:41.729494] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:09.539 [2024-07-24 23:09:41.729546] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:09.539 [2024-07-24 23:09:41.729642] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:09.539 [2024-07-24 23:09:41.729706] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:09.539 [2024-07-24 23:09:41.729708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:10.106 23:09:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:10.106 23:09:42 -- common/autotest_common.sh@852 -- # return 0 00:24:10.106 23:09:42 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:10.106 23:09:42 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:10.106 23:09:42 -- common/autotest_common.sh@10 -- # set +x 00:24:10.106 23:09:42 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:10.106 23:09:42 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:24:10.106 23:09:42 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:10.106 23:09:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:10.106 23:09:42 -- common/autotest_common.sh@10 -- # set +x 00:24:10.106 Malloc0 00:24:10.106 23:09:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:10.106 23:09:42 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:24:10.106 23:09:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:10.107 23:09:42 -- common/autotest_common.sh@10 -- # set +x 00:24:10.107 Delay0 00:24:10.107 23:09:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:10.107 23:09:42 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:10.107 23:09:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:10.107 23:09:42 -- 
common/autotest_common.sh@10 -- # set +x 00:24:10.107 [2024-07-24 23:09:42.495434] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:10.107 23:09:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:10.107 23:09:42 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:24:10.107 23:09:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:10.107 23:09:42 -- common/autotest_common.sh@10 -- # set +x 00:24:10.107 23:09:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:10.107 23:09:42 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:24:10.107 23:09:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:10.107 23:09:42 -- common/autotest_common.sh@10 -- # set +x 00:24:10.107 23:09:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:10.107 23:09:42 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:10.107 23:09:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:10.107 23:09:42 -- common/autotest_common.sh@10 -- # set +x 00:24:10.107 [2024-07-24 23:09:42.523686] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:10.107 23:09:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:10.107 23:09:42 -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:24:11.483 23:09:43 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:24:11.483 23:09:43 -- common/autotest_common.sh@1177 -- # local i=0 00:24:11.483 23:09:43 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:24:11.483 23:09:43 -- 
common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:24:11.483 23:09:43 -- common/autotest_common.sh@1184 -- # sleep 2 00:24:14.016 23:09:45 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:24:14.016 23:09:45 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:24:14.016 23:09:45 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:24:14.016 23:09:45 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:24:14.016 23:09:45 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:24:14.016 23:09:45 -- common/autotest_common.sh@1187 -- # return 0 00:24:14.016 23:09:45 -- target/initiator_timeout.sh@35 -- # fio_pid=3293988 00:24:14.016 23:09:45 -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:24:14.016 23:09:45 -- target/initiator_timeout.sh@37 -- # sleep 3 00:24:14.016 [global] 00:24:14.016 thread=1 00:24:14.016 invalidate=1 00:24:14.016 rw=write 00:24:14.016 time_based=1 00:24:14.016 runtime=60 00:24:14.016 ioengine=libaio 00:24:14.016 direct=1 00:24:14.016 bs=4096 00:24:14.016 iodepth=1 00:24:14.016 norandommap=0 00:24:14.016 numjobs=1 00:24:14.016 00:24:14.016 verify_dump=1 00:24:14.016 verify_backlog=512 00:24:14.016 verify_state_save=0 00:24:14.016 do_verify=1 00:24:14.016 verify=crc32c-intel 00:24:14.016 [job0] 00:24:14.016 filename=/dev/nvme0n1 00:24:14.016 Could not set queue depth (nvme0n1) 00:24:14.016 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:24:14.016 fio-3.35 00:24:14.016 Starting 1 thread 00:24:16.548 23:09:48 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:24:16.548 23:09:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:16.548 23:09:48 -- common/autotest_common.sh@10 -- # set +x 00:24:16.548 true 00:24:16.548 23:09:48 -- common/autotest_common.sh@579 -- # [[ 
0 == 0 ]] 00:24:16.548 23:09:48 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:24:16.548 23:09:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:16.548 23:09:48 -- common/autotest_common.sh@10 -- # set +x 00:24:16.548 true 00:24:16.548 23:09:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:16.548 23:09:48 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:24:16.548 23:09:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:16.548 23:09:48 -- common/autotest_common.sh@10 -- # set +x 00:24:16.548 true 00:24:16.548 23:09:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:16.548 23:09:48 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:24:16.548 23:09:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:16.548 23:09:48 -- common/autotest_common.sh@10 -- # set +x 00:24:16.548 true 00:24:16.548 23:09:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:16.548 23:09:48 -- target/initiator_timeout.sh@45 -- # sleep 3 00:24:19.865 23:09:51 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:24:19.865 23:09:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:19.865 23:09:51 -- common/autotest_common.sh@10 -- # set +x 00:24:19.865 true 00:24:19.865 23:09:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:19.865 23:09:51 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:24:19.865 23:09:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:19.865 23:09:51 -- common/autotest_common.sh@10 -- # set +x 00:24:19.865 true 00:24:19.865 23:09:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:19.865 23:09:51 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:24:19.865 23:09:51 -- common/autotest_common.sh@551 
-- # xtrace_disable 00:24:19.865 23:09:51 -- common/autotest_common.sh@10 -- # set +x 00:24:19.865 true 00:24:19.865 23:09:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:19.865 23:09:51 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:24:19.865 23:09:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:19.865 23:09:51 -- common/autotest_common.sh@10 -- # set +x 00:24:19.865 true 00:24:19.865 23:09:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:19.865 23:09:51 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:24:19.865 23:09:51 -- target/initiator_timeout.sh@54 -- # wait 3293988 00:25:16.091 00:25:16.091 job0: (groupid=0, jobs=1): err= 0: pid=3294124: Wed Jul 24 23:10:46 2024 00:25:16.091 read: IOPS=149, BW=599KiB/s (614kB/s)(35.1MiB/60007msec) 00:25:16.091 slat (nsec): min=8376, max=93872, avg=10087.67, stdev=3395.13 00:25:16.091 clat (usec): min=345, max=42267, avg=1714.94, stdev=7078.70 00:25:16.091 lat (usec): min=355, max=42279, avg=1725.03, stdev=7081.40 00:25:16.091 clat percentiles (usec): 00:25:16.091 | 1.00th=[ 375], 5.00th=[ 388], 10.00th=[ 396], 20.00th=[ 404], 00:25:16.091 | 30.00th=[ 412], 40.00th=[ 416], 50.00th=[ 424], 60.00th=[ 441], 00:25:16.091 | 70.00th=[ 494], 80.00th=[ 502], 90.00th=[ 519], 95.00th=[ 545], 00:25:16.091 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:25:16.091 | 99.99th=[42206] 00:25:16.091 write: IOPS=153, BW=614KiB/s (629kB/s)(36.0MiB/60007msec); 0 zone resets 00:25:16.091 slat (nsec): min=9176, max=67406, avg=12514.08, stdev=1481.75 00:25:16.091 clat (usec): min=199, max=41985k, avg=4811.71, stdev=437336.42 00:25:16.091 lat (usec): min=211, max=41985k, avg=4824.22, stdev=437336.41 00:25:16.091 clat percentiles (usec): 00:25:16.091 | 1.00th=[ 223], 5.00th=[ 233], 10.00th=[ 237], 00:25:16.091 | 20.00th=[ 241], 30.00th=[ 245], 40.00th=[ 249], 00:25:16.091 | 50.00th=[ 253], 60.00th=[ 258], 70.00th=[ 265], 00:25:16.091 | 
80.00th=[ 269], 90.00th=[ 277], 95.00th=[ 289], 00:25:16.091 | 99.00th=[ 318], 99.50th=[ 334], 99.90th=[ 388], 00:25:16.091 | 99.95th=[ 465], 99.99th=[17112761] 00:25:16.091 bw ( KiB/s): min= 928, max= 8192, per=100.00%, avg=4608.00, stdev=1955.24, samples=16 00:25:16.091 iops : min= 232, max= 2048, avg=1152.00, stdev=488.81, samples=16 00:25:16.091 lat (usec) : 250=21.25%, 500=67.25%, 750=9.94%, 1000=0.01% 00:25:16.091 lat (msec) : 2=0.01%, 50=1.54%, >=2000=0.01% 00:25:16.091 cpu : usr=0.19%, sys=0.38%, ctx=18205, majf=0, minf=2 00:25:16.091 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:16.091 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:16.091 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:16.091 issued rwts: total=8988,9216,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:16.091 latency : target=0, window=0, percentile=100.00%, depth=1 00:25:16.091 00:25:16.091 Run status group 0 (all jobs): 00:25:16.091 READ: bw=599KiB/s (614kB/s), 599KiB/s-599KiB/s (614kB/s-614kB/s), io=35.1MiB (36.8MB), run=60007-60007msec 00:25:16.091 WRITE: bw=614KiB/s (629kB/s), 614KiB/s-614KiB/s (629kB/s-629kB/s), io=36.0MiB (37.7MB), run=60007-60007msec 00:25:16.091 00:25:16.091 Disk stats (read/write): 00:25:16.091 nvme0n1: ios=9084/9216, merge=0/0, ticks=16453/2304, in_queue=18757, util=99.68% 00:25:16.091 23:10:46 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:25:16.091 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:16.091 23:10:46 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:25:16.091 23:10:46 -- common/autotest_common.sh@1198 -- # local i=0 00:25:16.091 23:10:46 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:25:16.091 23:10:46 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:25:16.091 23:10:46 -- common/autotest_common.sh@1206 -- # lsblk -l -o 
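The fio summary above is internally consistent: the reported read bandwidth of 599KiB/s can be recomputed from the issued-I/O totals (8988 reads of bs=4096 over run=60007msec). A quick arithmetic check:

```shell
# Recompute fio's reported read-side bandwidth from the issued I/O count.
awk 'BEGIN {
    bytes = 8988 * 4096            # issued read IOs x bs=4096 (from the log)
    secs  = 60007 / 1000           # runtime 60007 msec (from the log)
    printf "%.0f KiB/s\n", bytes / secs / 1024
}'
```

This reproduces the 599 KiB/s figure in the READ status line; the low rate is expected here, since the test deliberately injects bdev_delay latency before restoring it.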
NAME,SERIAL 00:25:16.091 23:10:46 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:25:16.091 23:10:46 -- common/autotest_common.sh@1210 -- # return 0 00:25:16.091 23:10:46 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:25:16.091 23:10:46 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:25:16.091 nvmf hotplug test: fio successful as expected 00:25:16.091 23:10:46 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:16.091 23:10:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:16.092 23:10:46 -- common/autotest_common.sh@10 -- # set +x 00:25:16.092 23:10:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:16.092 23:10:46 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:25:16.092 23:10:46 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:25:16.092 23:10:46 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:25:16.092 23:10:46 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:16.092 23:10:46 -- nvmf/common.sh@116 -- # sync 00:25:16.092 23:10:46 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:16.092 23:10:46 -- nvmf/common.sh@119 -- # set +e 00:25:16.092 23:10:46 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:16.092 23:10:46 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:16.092 rmmod nvme_tcp 00:25:16.092 rmmod nvme_fabrics 00:25:16.092 rmmod nvme_keyring 00:25:16.092 23:10:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:16.092 23:10:46 -- nvmf/common.sh@123 -- # set -e 00:25:16.092 23:10:46 -- nvmf/common.sh@124 -- # return 0 00:25:16.092 23:10:46 -- nvmf/common.sh@477 -- # '[' -n 3293159 ']' 00:25:16.092 23:10:46 -- nvmf/common.sh@478 -- # killprocess 3293159 00:25:16.092 23:10:46 -- common/autotest_common.sh@926 -- # '[' -z 3293159 ']' 00:25:16.092 23:10:46 -- common/autotest_common.sh@930 -- # kill -0 3293159 00:25:16.092 23:10:46 -- 
common/autotest_common.sh@931 -- # uname 00:25:16.092 23:10:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:16.092 23:10:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3293159 00:25:16.092 23:10:46 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:16.092 23:10:46 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:16.092 23:10:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3293159' 00:25:16.092 killing process with pid 3293159 00:25:16.092 23:10:46 -- common/autotest_common.sh@945 -- # kill 3293159 00:25:16.092 23:10:46 -- common/autotest_common.sh@950 -- # wait 3293159 00:25:16.092 23:10:46 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:16.092 23:10:46 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:16.092 23:10:46 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:16.092 23:10:46 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:16.092 23:10:46 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:16.092 23:10:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:16.092 23:10:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:16.092 23:10:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:16.661 23:10:48 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:16.661 00:25:16.661 real 1m14.419s 00:25:16.661 user 4m27.145s 00:25:16.661 sys 0m9.209s 00:25:16.661 23:10:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:16.661 23:10:48 -- common/autotest_common.sh@10 -- # set +x 00:25:16.661 ************************************ 00:25:16.661 END TEST nvmf_initiator_timeout 00:25:16.661 ************************************ 00:25:16.661 23:10:49 -- nvmf/nvmf.sh@69 -- # [[ phy == phy ]] 00:25:16.661 23:10:49 -- nvmf/nvmf.sh@70 -- # '[' tcp = tcp ']' 00:25:16.661 23:10:49 -- nvmf/nvmf.sh@71 -- # gather_supported_nvmf_pci_devs 00:25:16.661 23:10:49 -- nvmf/common.sh@284 
-- # xtrace_disable 00:25:16.661 23:10:49 -- common/autotest_common.sh@10 -- # set +x 00:25:23.236 23:10:54 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:23.236 23:10:54 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:23.236 23:10:54 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:23.236 23:10:54 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:23.236 23:10:54 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:23.236 23:10:54 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:23.236 23:10:54 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:23.236 23:10:54 -- nvmf/common.sh@294 -- # net_devs=() 00:25:23.236 23:10:54 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:23.236 23:10:54 -- nvmf/common.sh@295 -- # e810=() 00:25:23.236 23:10:54 -- nvmf/common.sh@295 -- # local -ga e810 00:25:23.236 23:10:54 -- nvmf/common.sh@296 -- # x722=() 00:25:23.236 23:10:54 -- nvmf/common.sh@296 -- # local -ga x722 00:25:23.236 23:10:54 -- nvmf/common.sh@297 -- # mlx=() 00:25:23.236 23:10:54 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:23.236 23:10:54 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:23.236 23:10:54 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:23.237 23:10:54 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:23.237 23:10:54 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:23.237 23:10:54 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:23.237 23:10:54 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:23.237 23:10:54 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:23.237 23:10:54 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:23.237 23:10:54 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:23.237 23:10:54 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:23.237 
23:10:54 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:23.237 23:10:54 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:23.237 23:10:54 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:23.237 23:10:54 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:23.237 23:10:54 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:23.237 23:10:54 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:23.237 23:10:54 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:23.237 23:10:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:23.237 23:10:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:23.237 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:23.237 23:10:54 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:23.237 23:10:54 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:23.237 23:10:54 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:23.237 23:10:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:23.237 23:10:54 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:23.237 23:10:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:23.237 23:10:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:23.237 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:23.237 23:10:54 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:23.237 23:10:54 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:23.237 23:10:54 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:23.237 23:10:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:23.237 23:10:54 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:23.237 23:10:54 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:23.237 23:10:54 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:23.237 23:10:54 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:23.237 23:10:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:23.237 23:10:54 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:23.237 23:10:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:23.237 23:10:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:23.237 23:10:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:23.237 Found net devices under 0000:af:00.0: cvl_0_0 00:25:23.237 23:10:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:23.237 23:10:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:23.237 23:10:54 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:23.237 23:10:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:23.237 23:10:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:23.237 23:10:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:23.237 Found net devices under 0000:af:00.1: cvl_0_1 00:25:23.237 23:10:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:23.237 23:10:54 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:23.237 23:10:54 -- nvmf/nvmf.sh@72 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:23.237 23:10:54 -- nvmf/nvmf.sh@73 -- # (( 2 > 0 )) 00:25:23.237 23:10:54 -- nvmf/nvmf.sh@74 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:25:23.237 23:10:54 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:23.237 23:10:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:23.237 23:10:54 -- common/autotest_common.sh@10 -- # set +x 00:25:23.237 ************************************ 00:25:23.237 START TEST nvmf_perf_adq 00:25:23.237 ************************************ 00:25:23.237 23:10:54 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:25:23.237 * Looking for test storage... 
00:25:23.237 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:23.237 23:10:55 -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:23.237 23:10:55 -- nvmf/common.sh@7 -- # uname -s 00:25:23.237 23:10:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:23.237 23:10:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:23.237 23:10:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:23.237 23:10:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:23.237 23:10:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:23.237 23:10:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:23.237 23:10:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:23.237 23:10:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:23.237 23:10:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:23.237 23:10:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:23.237 23:10:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:25:23.237 23:10:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:25:23.237 23:10:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:23.237 23:10:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:23.237 23:10:55 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:23.237 23:10:55 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:23.237 23:10:55 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:23.237 23:10:55 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:23.237 23:10:55 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:23.237 23:10:55 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.237 23:10:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.237 23:10:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.237 23:10:55 -- paths/export.sh@5 -- # export PATH 00:25:23.237 23:10:55 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.237 23:10:55 -- nvmf/common.sh@46 -- # : 0 00:25:23.237 23:10:55 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:23.237 23:10:55 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:23.237 23:10:55 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:23.237 23:10:55 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:23.237 23:10:55 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:23.237 23:10:55 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:23.237 23:10:55 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:23.237 23:10:55 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:23.237 23:10:55 -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:25:23.237 23:10:55 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:23.237 23:10:55 -- common/autotest_common.sh@10 -- # set +x 00:25:29.807 23:11:01 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:29.807 23:11:01 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:29.807 23:11:01 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:29.807 23:11:01 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:29.807 23:11:01 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:29.807 23:11:01 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:29.807 23:11:01 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:29.807 23:11:01 -- nvmf/common.sh@294 -- # net_devs=() 00:25:29.807 23:11:01 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:29.807 23:11:01 
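The PATH exported by paths/export.sh above repeats the same toolchain directories once per source of the script. A first-occurrence de-duplication pass along these lines (illustrative, not part of the SPDK tree) would keep the variable bounded:

```shell
# Keep only the first occurrence of each colon-separated PATH component
# (illustrative helper; the real export.sh simply prepends on every source).
dedup_path() {
    printf '%s' "$1" | awk -v RS=: -v ORS=: '!seen[$0]++' | sed 's/:$//'
}

dedup_path "/opt/go/1.21.1/bin:/usr/bin:/opt/go/1.21.1/bin:/bin"
# -> /opt/go/1.21.1/bin:/usr/bin:/bin
```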
-- nvmf/common.sh@295 -- # e810=() 00:25:29.807 23:11:01 -- nvmf/common.sh@295 -- # local -ga e810 00:25:29.807 23:11:01 -- nvmf/common.sh@296 -- # x722=() 00:25:29.807 23:11:01 -- nvmf/common.sh@296 -- # local -ga x722 00:25:29.807 23:11:01 -- nvmf/common.sh@297 -- # mlx=() 00:25:29.807 23:11:01 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:29.807 23:11:01 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:29.807 23:11:01 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:29.807 23:11:01 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:29.807 23:11:01 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:29.807 23:11:01 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:29.807 23:11:01 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:29.807 23:11:01 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:29.807 23:11:01 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:29.807 23:11:01 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:29.807 23:11:01 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:29.807 23:11:01 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:29.808 23:11:01 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:29.808 23:11:01 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:29.808 23:11:01 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:29.808 23:11:01 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:29.808 23:11:01 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:29.808 23:11:01 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:29.808 23:11:01 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:29.808 23:11:01 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:29.808 Found 0000:af:00.0 (0x8086 - 0x159b) 
00:25:29.808 23:11:01 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:29.808 23:11:01 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:29.808 23:11:01 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:29.808 23:11:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:29.808 23:11:01 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:29.808 23:11:01 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:29.808 23:11:01 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:29.808 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:29.808 23:11:01 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:29.808 23:11:01 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:29.808 23:11:01 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:29.808 23:11:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:29.808 23:11:01 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:29.808 23:11:01 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:29.808 23:11:01 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:29.808 23:11:01 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:29.808 23:11:01 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:29.808 23:11:01 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:29.808 23:11:01 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:29.808 23:11:01 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:29.808 23:11:01 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:29.808 Found net devices under 0000:af:00.0: cvl_0_0 00:25:29.808 23:11:01 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:29.808 23:11:01 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:29.808 23:11:01 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:29.808 23:11:01 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:29.808 23:11:01 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:29.808 23:11:01 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:29.808 Found net devices under 0000:af:00.1: cvl_0_1 00:25:29.808 23:11:01 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:29.808 23:11:01 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:29.808 23:11:01 -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:29.808 23:11:01 -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:25:29.808 23:11:01 -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:25:29.808 23:11:01 -- target/perf_adq.sh@59 -- # adq_reload_driver 00:25:29.808 23:11:01 -- target/perf_adq.sh@52 -- # rmmod ice 00:25:30.376 23:11:02 -- target/perf_adq.sh@53 -- # modprobe ice 00:25:32.961 23:11:04 -- target/perf_adq.sh@54 -- # sleep 5 00:25:38.238 23:11:09 -- target/perf_adq.sh@67 -- # nvmftestinit 00:25:38.238 23:11:09 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:38.238 23:11:09 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:38.238 23:11:09 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:38.238 23:11:09 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:38.238 23:11:09 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:38.238 23:11:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:38.238 23:11:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:38.238 23:11:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:38.238 23:11:09 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:38.238 23:11:09 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:38.238 23:11:09 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:38.238 23:11:09 -- common/autotest_common.sh@10 -- # set +x 00:25:38.238 23:11:09 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:38.238 23:11:09 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:38.238 
23:11:09 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:38.238 23:11:09 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:38.238 23:11:09 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:38.238 23:11:09 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:38.238 23:11:09 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:38.238 23:11:09 -- nvmf/common.sh@294 -- # net_devs=() 00:25:38.238 23:11:09 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:38.238 23:11:09 -- nvmf/common.sh@295 -- # e810=() 00:25:38.238 23:11:09 -- nvmf/common.sh@295 -- # local -ga e810 00:25:38.238 23:11:09 -- nvmf/common.sh@296 -- # x722=() 00:25:38.238 23:11:09 -- nvmf/common.sh@296 -- # local -ga x722 00:25:38.238 23:11:09 -- nvmf/common.sh@297 -- # mlx=() 00:25:38.238 23:11:09 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:38.238 23:11:09 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:38.238 23:11:09 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:38.238 23:11:09 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:38.238 23:11:09 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:38.238 23:11:09 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:38.238 23:11:09 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:38.238 23:11:09 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:38.238 23:11:09 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:38.238 23:11:09 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:38.238 23:11:09 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:38.238 23:11:09 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:38.238 23:11:09 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:38.238 23:11:09 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:38.238 23:11:09 -- 
nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:38.238 23:11:09 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:38.238 23:11:09 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:38.238 23:11:09 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:38.238 23:11:09 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:38.238 23:11:09 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:38.238 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:38.238 23:11:09 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:38.238 23:11:09 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:38.238 23:11:09 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:38.238 23:11:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:38.238 23:11:09 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:38.238 23:11:09 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:38.238 23:11:09 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:38.238 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:38.238 23:11:09 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:38.238 23:11:09 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:38.238 23:11:09 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:38.238 23:11:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:38.238 23:11:09 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:38.238 23:11:09 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:38.238 23:11:09 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:38.238 23:11:09 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:38.238 23:11:09 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:38.238 23:11:09 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:38.238 23:11:09 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:38.239 23:11:09 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:38.239 23:11:09 -- nvmf/common.sh@388 -- # echo 'Found net 
devices under 0000:af:00.0: cvl_0_0' 00:25:38.239 Found net devices under 0000:af:00.0: cvl_0_0 00:25:38.239 23:11:09 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:38.239 23:11:09 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:38.239 23:11:09 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:38.239 23:11:09 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:38.239 23:11:09 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:38.239 23:11:09 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:38.239 Found net devices under 0000:af:00.1: cvl_0_1 00:25:38.239 23:11:09 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:38.239 23:11:09 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:38.239 23:11:09 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:38.239 23:11:09 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:38.239 23:11:09 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:38.239 23:11:09 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:38.239 23:11:09 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:38.239 23:11:09 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:38.239 23:11:09 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:38.239 23:11:09 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:38.239 23:11:09 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:38.239 23:11:09 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:38.239 23:11:09 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:38.239 23:11:09 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:38.239 23:11:09 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:38.239 23:11:09 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:38.239 23:11:09 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:38.239 23:11:09 -- nvmf/common.sh@247 -- # ip 
netns add cvl_0_0_ns_spdk 00:25:38.239 23:11:09 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:38.239 23:11:09 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:38.239 23:11:09 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:38.239 23:11:09 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:38.239 23:11:09 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:38.239 23:11:10 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:38.239 23:11:10 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:38.239 23:11:10 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:38.239 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:38.239 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:25:38.239 00:25:38.239 --- 10.0.0.2 ping statistics --- 00:25:38.239 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:38.239 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:25:38.239 23:11:10 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:38.239 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:38.239 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:25:38.239 00:25:38.239 --- 10.0.0.1 ping statistics --- 00:25:38.239 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:38.239 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:25:38.239 23:11:10 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:38.239 23:11:10 -- nvmf/common.sh@410 -- # return 0 00:25:38.239 23:11:10 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:38.239 23:11:10 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:38.239 23:11:10 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:38.239 23:11:10 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:38.239 23:11:10 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:38.239 23:11:10 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:38.239 23:11:10 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:38.239 23:11:10 -- target/perf_adq.sh@68 -- # nvmfappstart -m 0xF --wait-for-rpc 00:25:38.239 23:11:10 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:38.239 23:11:10 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:38.239 23:11:10 -- common/autotest_common.sh@10 -- # set +x 00:25:38.239 23:11:10 -- nvmf/common.sh@469 -- # nvmfpid=3312468 00:25:38.239 23:11:10 -- nvmf/common.sh@470 -- # waitforlisten 3312468 00:25:38.239 23:11:10 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:25:38.239 23:11:10 -- common/autotest_common.sh@819 -- # '[' -z 3312468 ']' 00:25:38.239 23:11:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:38.239 23:11:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:38.239 23:11:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:25:38.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:38.239 23:11:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:38.239 23:11:10 -- common/autotest_common.sh@10 -- # set +x 00:25:38.239 [2024-07-24 23:11:10.199619] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:25:38.239 [2024-07-24 23:11:10.199667] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:38.239 EAL: No free 2048 kB hugepages reported on node 1 00:25:38.239 [2024-07-24 23:11:10.276284] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:38.239 [2024-07-24 23:11:10.315205] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:38.239 [2024-07-24 23:11:10.315343] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:38.239 [2024-07-24 23:11:10.315353] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:38.239 [2024-07-24 23:11:10.315363] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:38.239 [2024-07-24 23:11:10.315421] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:38.239 [2024-07-24 23:11:10.315439] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:38.239 [2024-07-24 23:11:10.315525] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:38.239 [2024-07-24 23:11:10.315526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:38.808 23:11:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:38.808 23:11:11 -- common/autotest_common.sh@852 -- # return 0 00:25:38.808 23:11:11 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:38.808 23:11:11 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:38.808 23:11:11 -- common/autotest_common.sh@10 -- # set +x 00:25:38.808 23:11:11 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:38.808 23:11:11 -- target/perf_adq.sh@69 -- # adq_configure_nvmf_target 0 00:25:38.808 23:11:11 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:25:38.808 23:11:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:38.808 23:11:11 -- common/autotest_common.sh@10 -- # set +x 00:25:38.808 23:11:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:38.808 23:11:11 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:25:38.808 23:11:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:38.808 23:11:11 -- common/autotest_common.sh@10 -- # set +x 00:25:38.808 23:11:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:38.808 23:11:11 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:25:38.808 23:11:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:38.808 23:11:11 -- common/autotest_common.sh@10 -- # set +x 00:25:38.808 [2024-07-24 23:11:11.151440] tcp.c: 659:nvmf_tcp_create: *NOTICE*: 
*** TCP Transport Init *** 00:25:38.808 23:11:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:38.808 23:11:11 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:38.808 23:11:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:38.808 23:11:11 -- common/autotest_common.sh@10 -- # set +x 00:25:38.808 Malloc1 00:25:38.808 23:11:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:38.808 23:11:11 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:38.808 23:11:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:38.808 23:11:11 -- common/autotest_common.sh@10 -- # set +x 00:25:38.808 23:11:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:38.808 23:11:11 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:38.808 23:11:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:38.808 23:11:11 -- common/autotest_common.sh@10 -- # set +x 00:25:38.808 23:11:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:38.808 23:11:11 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:38.808 23:11:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:38.808 23:11:11 -- common/autotest_common.sh@10 -- # set +x 00:25:38.808 [2024-07-24 23:11:11.198021] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:38.808 23:11:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:38.808 23:11:11 -- target/perf_adq.sh@73 -- # perfpid=3312685 00:25:38.808 23:11:11 -- target/perf_adq.sh@74 -- # sleep 2 00:25:38.808 23:11:11 -- target/perf_adq.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 
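An annotation on the trace above, not part of the captured log: lines perf_adq.sh@42-48 configure the ADQ-aware NVMe-oF TCP target through SPDK's JSON-RPC interface (socket options, framework init, transport, malloc bdev, subsystem, namespace, listener). The sketch below replays that same sequence as standalone `rpc.py` calls; the RPC arguments, NQN, and listener address are copied from the log, but the presence of `rpc.py` on PATH and a running `nvmf_tgt` started with `--wait-for-rpc` are assumptions, so a dry-run mode is the default.

```shell
#!/usr/bin/env bash
# Sketch of the target-side RPC sequence seen in the trace above.
# Assumes an SPDK nvmf_tgt is listening on $RPC_SOCK and rpc.py is on
# PATH; pass "dry" (the default) to only print the commands.
set -euo pipefail

RPC_SOCK=${RPC_SOCK:-/var/tmp/spdk.sock}

adq_configure_nvmf_target() {
    local placement_id=$1 mode=${2:-run}
    local -a cmds=(
        # Placement-id based socket grouping plus zero-copy sends (posix impl)
        "sock_impl_set_options --enable-placement-id ${placement_id} --enable-zerocopy-send-server -i posix"
        # Finish framework init (target was started with --wait-for-rpc)
        "framework_start_init"
        # TCP transport; sock-priority mirrors the placement id in the log
        "nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority ${placement_id}"
        # 64 MiB malloc bdev with 512-byte blocks, exported as the namespace
        "bdev_malloc_create 64 512 -b Malloc1"
        "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001"
        "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1"
        "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420"
    )
    local c
    for c in "${cmds[@]}"; do
        if [[ $mode == dry ]]; then
            echo "rpc.py -s ${RPC_SOCK} ${c}"
        else
            rpc.py -s "${RPC_SOCK}" ${c}
        fi
    done
}

adq_configure_nvmf_target "${PLACEMENT_ID:-0}" "${1:-dry}"
```

The second run later in the log is identical except that the placement id and sock-priority are 1 instead of 0, which is what enables queue steering for the busy-poll configuration.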
00:25:38.808 EAL: No free 2048 kB hugepages reported on node 1 00:25:41.343 23:11:13 -- target/perf_adq.sh@76 -- # rpc_cmd nvmf_get_stats 00:25:41.343 23:11:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:41.343 23:11:13 -- common/autotest_common.sh@10 -- # set +x 00:25:41.343 23:11:13 -- target/perf_adq.sh@76 -- # wc -l 00:25:41.343 23:11:13 -- target/perf_adq.sh@76 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:25:41.343 23:11:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:41.343 23:11:13 -- target/perf_adq.sh@76 -- # count=4 00:25:41.343 23:11:13 -- target/perf_adq.sh@77 -- # [[ 4 -ne 4 ]] 00:25:41.343 23:11:13 -- target/perf_adq.sh@81 -- # wait 3312685 00:25:49.463 [2024-07-24 23:11:21.305642] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc34fa0 is same with the state(5) to be set 00:25:49.463 [2024-07-24 23:11:21.305738] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc34fa0 is same with the state(5) to be set 00:25:49.463 [2024-07-24 23:11:21.305751] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc34fa0 is same with the state(5) to be set 00:25:49.463 [2024-07-24 23:11:21.305760] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc34fa0 is same with the state(5) to be set 00:25:49.463 Initializing NVMe Controllers 00:25:49.463 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:49.463 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:25:49.463 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:25:49.463 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:25:49.463 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:25:49.463 Initialization complete. Launching workers. 
00:25:49.463 ======================================================== 00:25:49.463 Latency(us) 00:25:49.463 Device Information : IOPS MiB/s Average min max 00:25:49.463 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 11394.20 44.51 5635.29 1171.00 46254.64 00:25:49.463 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 11291.90 44.11 5667.87 1061.78 10271.59 00:25:49.463 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 11367.30 44.40 5631.19 1047.31 10369.87 00:25:49.463 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 11301.10 44.14 5662.95 1060.30 10050.83 00:25:49.463 ======================================================== 00:25:49.463 Total : 45354.50 177.17 5649.27 1047.31 46254.64 00:25:49.463 00:25:49.463 23:11:21 -- target/perf_adq.sh@82 -- # nvmftestfini 00:25:49.463 23:11:21 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:49.463 23:11:21 -- nvmf/common.sh@116 -- # sync 00:25:49.463 23:11:21 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:49.463 23:11:21 -- nvmf/common.sh@119 -- # set +e 00:25:49.463 23:11:21 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:49.463 23:11:21 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:49.463 rmmod nvme_tcp 00:25:49.463 rmmod nvme_fabrics 00:25:49.463 rmmod nvme_keyring 00:25:49.463 23:11:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:49.463 23:11:21 -- nvmf/common.sh@123 -- # set -e 00:25:49.463 23:11:21 -- nvmf/common.sh@124 -- # return 0 00:25:49.463 23:11:21 -- nvmf/common.sh@477 -- # '[' -n 3312468 ']' 00:25:49.463 23:11:21 -- nvmf/common.sh@478 -- # killprocess 3312468 00:25:49.463 23:11:21 -- common/autotest_common.sh@926 -- # '[' -z 3312468 ']' 00:25:49.463 23:11:21 -- common/autotest_common.sh@930 -- # kill -0 3312468 00:25:49.463 23:11:21 -- common/autotest_common.sh@931 -- # uname 00:25:49.463 23:11:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:49.463 23:11:21 -- 
common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3312468 00:25:49.463 23:11:21 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:49.463 23:11:21 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:49.463 23:11:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3312468' 00:25:49.463 killing process with pid 3312468 00:25:49.463 23:11:21 -- common/autotest_common.sh@945 -- # kill 3312468 00:25:49.463 23:11:21 -- common/autotest_common.sh@950 -- # wait 3312468 00:25:49.463 23:11:21 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:49.463 23:11:21 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:49.463 23:11:21 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:49.463 23:11:21 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:49.463 23:11:21 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:49.463 23:11:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:49.463 23:11:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:49.463 23:11:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:51.370 23:11:23 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:51.370 23:11:23 -- target/perf_adq.sh@84 -- # adq_reload_driver 00:25:51.370 23:11:23 -- target/perf_adq.sh@52 -- # rmmod ice 00:25:52.749 23:11:25 -- target/perf_adq.sh@53 -- # modprobe ice 00:25:55.280 23:11:27 -- target/perf_adq.sh@54 -- # sleep 5 00:26:00.557 23:11:32 -- target/perf_adq.sh@87 -- # nvmftestinit 00:26:00.557 23:11:32 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:00.557 23:11:32 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:00.557 23:11:32 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:00.557 23:11:32 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:00.557 23:11:32 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:00.557 23:11:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:00.557 
23:11:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:00.557 23:11:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:00.557 23:11:32 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:26:00.557 23:11:32 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:26:00.557 23:11:32 -- nvmf/common.sh@284 -- # xtrace_disable 00:26:00.557 23:11:32 -- common/autotest_common.sh@10 -- # set +x 00:26:00.557 23:11:32 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:00.557 23:11:32 -- nvmf/common.sh@290 -- # pci_devs=() 00:26:00.557 23:11:32 -- nvmf/common.sh@290 -- # local -a pci_devs 00:26:00.557 23:11:32 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:26:00.557 23:11:32 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:26:00.557 23:11:32 -- nvmf/common.sh@292 -- # pci_drivers=() 00:26:00.557 23:11:32 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:26:00.557 23:11:32 -- nvmf/common.sh@294 -- # net_devs=() 00:26:00.557 23:11:32 -- nvmf/common.sh@294 -- # local -ga net_devs 00:26:00.557 23:11:32 -- nvmf/common.sh@295 -- # e810=() 00:26:00.557 23:11:32 -- nvmf/common.sh@295 -- # local -ga e810 00:26:00.557 23:11:32 -- nvmf/common.sh@296 -- # x722=() 00:26:00.557 23:11:32 -- nvmf/common.sh@296 -- # local -ga x722 00:26:00.557 23:11:32 -- nvmf/common.sh@297 -- # mlx=() 00:26:00.557 23:11:32 -- nvmf/common.sh@297 -- # local -ga mlx 00:26:00.557 23:11:32 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:00.557 23:11:32 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:00.557 23:11:32 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:00.557 23:11:32 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:00.557 23:11:32 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:00.557 23:11:32 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:00.557 23:11:32 -- 
nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:00.557 23:11:32 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:00.557 23:11:32 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:00.557 23:11:32 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:00.557 23:11:32 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:00.557 23:11:32 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:26:00.557 23:11:32 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:26:00.557 23:11:32 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:26:00.557 23:11:32 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:26:00.557 23:11:32 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:26:00.557 23:11:32 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:26:00.557 23:11:32 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:00.557 23:11:32 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:00.557 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:00.557 23:11:32 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:00.557 23:11:32 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:00.557 23:11:32 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:00.557 23:11:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:00.557 23:11:32 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:00.557 23:11:32 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:00.557 23:11:32 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:00.557 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:00.557 23:11:32 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:00.557 23:11:32 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:00.557 23:11:32 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:00.557 23:11:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:00.557 23:11:32 -- nvmf/common.sh@351 -- # 
[[ tcp == rdma ]] 00:26:00.557 23:11:32 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:26:00.557 23:11:32 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:26:00.557 23:11:32 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:26:00.557 23:11:32 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:00.557 23:11:32 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:00.557 23:11:32 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:00.557 23:11:32 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:00.557 23:11:32 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:00.557 Found net devices under 0000:af:00.0: cvl_0_0 00:26:00.557 23:11:32 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:00.557 23:11:32 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:00.557 23:11:32 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:00.557 23:11:32 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:00.557 23:11:32 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:00.557 23:11:32 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:00.557 Found net devices under 0000:af:00.1: cvl_0_1 00:26:00.557 23:11:32 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:00.558 23:11:32 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:26:00.558 23:11:32 -- nvmf/common.sh@402 -- # is_hw=yes 00:26:00.558 23:11:32 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:26:00.558 23:11:32 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:26:00.558 23:11:32 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:26:00.558 23:11:32 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:00.558 23:11:32 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:00.558 23:11:32 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:00.558 23:11:32 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:26:00.558 23:11:32 -- 
nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:00.558 23:11:32 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:00.558 23:11:32 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:26:00.558 23:11:32 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:00.558 23:11:32 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:00.558 23:11:32 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:26:00.558 23:11:32 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:26:00.558 23:11:32 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:26:00.558 23:11:32 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:00.558 23:11:32 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:00.558 23:11:32 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:00.558 23:11:32 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:26:00.558 23:11:32 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:00.558 23:11:32 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:00.558 23:11:32 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:00.558 23:11:32 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:26:00.558 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:00.558 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:26:00.558 00:26:00.558 --- 10.0.0.2 ping statistics --- 00:26:00.558 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:00.558 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:26:00.558 23:11:32 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:00.558 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:00.558 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:26:00.558 00:26:00.558 --- 10.0.0.1 ping statistics --- 00:26:00.558 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:00.558 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:26:00.558 23:11:32 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:00.558 23:11:32 -- nvmf/common.sh@410 -- # return 0 00:26:00.558 23:11:32 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:26:00.558 23:11:32 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:00.558 23:11:32 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:00.558 23:11:32 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:00.558 23:11:32 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:00.558 23:11:32 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:00.558 23:11:32 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:00.558 23:11:32 -- target/perf_adq.sh@88 -- # adq_configure_driver 00:26:00.558 23:11:32 -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:26:00.558 23:11:32 -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:26:00.558 23:11:32 -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:26:00.558 net.core.busy_poll = 1 00:26:00.558 23:11:32 -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:26:00.558 net.core.busy_read = 1 00:26:00.558 23:11:32 -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:26:00.558 23:11:32 -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:26:00.558 23:11:32 -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:26:00.558 23:11:32 -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev 
cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:26:00.558 23:11:32 -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:26:00.558 23:11:32 -- target/perf_adq.sh@89 -- # nvmfappstart -m 0xF --wait-for-rpc 00:26:00.558 23:11:32 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:00.558 23:11:32 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:00.558 23:11:32 -- common/autotest_common.sh@10 -- # set +x 00:26:00.558 23:11:32 -- nvmf/common.sh@469 -- # nvmfpid=3316616 00:26:00.558 23:11:32 -- nvmf/common.sh@470 -- # waitforlisten 3316616 00:26:00.558 23:11:32 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:26:00.558 23:11:32 -- common/autotest_common.sh@819 -- # '[' -z 3316616 ']' 00:26:00.558 23:11:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:00.558 23:11:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:00.558 23:11:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:00.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:00.558 23:11:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:00.558 23:11:32 -- common/autotest_common.sh@10 -- # set +x 00:26:00.558 [2024-07-24 23:11:32.939113] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:26:00.558 [2024-07-24 23:11:32.939170] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:00.558 EAL: No free 2048 kB hugepages reported on node 1 00:26:00.821 [2024-07-24 23:11:33.016475] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:00.821 [2024-07-24 23:11:33.055432] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:00.821 [2024-07-24 23:11:33.055540] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:00.821 [2024-07-24 23:11:33.055550] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:00.821 [2024-07-24 23:11:33.055559] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:00.821 [2024-07-24 23:11:33.055603] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:00.821 [2024-07-24 23:11:33.055699] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:00.821 [2024-07-24 23:11:33.055760] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:00.821 [2024-07-24 23:11:33.055763] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:01.388 23:11:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:01.388 23:11:33 -- common/autotest_common.sh@852 -- # return 0 00:26:01.388 23:11:33 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:01.388 23:11:33 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:01.388 23:11:33 -- common/autotest_common.sh@10 -- # set +x 00:26:01.388 23:11:33 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:01.388 23:11:33 -- target/perf_adq.sh@90 -- # adq_configure_nvmf_target 1 00:26:01.388 23:11:33 -- 
target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:26:01.388 23:11:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:01.388 23:11:33 -- common/autotest_common.sh@10 -- # set +x 00:26:01.388 23:11:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:01.388 23:11:33 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:26:01.388 23:11:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:01.388 23:11:33 -- common/autotest_common.sh@10 -- # set +x 00:26:01.646 23:11:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:01.646 23:11:33 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:26:01.646 23:11:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:01.646 23:11:33 -- common/autotest_common.sh@10 -- # set +x 00:26:01.646 [2024-07-24 23:11:33.882393] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:01.646 23:11:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:01.646 23:11:33 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:01.646 23:11:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:01.646 23:11:33 -- common/autotest_common.sh@10 -- # set +x 00:26:01.646 Malloc1 00:26:01.646 23:11:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:01.646 23:11:33 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:01.646 23:11:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:01.646 23:11:33 -- common/autotest_common.sh@10 -- # set +x 00:26:01.646 23:11:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:01.646 23:11:33 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:01.646 23:11:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:01.646 23:11:33 -- 
common/autotest_common.sh@10 -- # set +x 00:26:01.646 23:11:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:01.646 23:11:33 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:01.647 23:11:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:01.647 23:11:33 -- common/autotest_common.sh@10 -- # set +x 00:26:01.647 [2024-07-24 23:11:33.928721] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:01.647 23:11:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:01.647 23:11:33 -- target/perf_adq.sh@94 -- # perfpid=3316903 00:26:01.647 23:11:33 -- target/perf_adq.sh@95 -- # sleep 2 00:26:01.647 23:11:33 -- target/perf_adq.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:01.647 EAL: No free 2048 kB hugepages reported on node 1 00:26:03.550 23:11:35 -- target/perf_adq.sh@97 -- # rpc_cmd nvmf_get_stats 00:26:03.550 23:11:35 -- target/perf_adq.sh@97 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:26:03.550 23:11:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:03.550 23:11:35 -- common/autotest_common.sh@10 -- # set +x 00:26:03.550 23:11:35 -- target/perf_adq.sh@97 -- # wc -l 00:26:03.550 23:11:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:03.809 23:11:35 -- target/perf_adq.sh@97 -- # count=2 00:26:03.809 23:11:35 -- target/perf_adq.sh@98 -- # [[ 2 -lt 2 ]] 00:26:03.809 23:11:35 -- target/perf_adq.sh@103 -- # wait 3316903 00:26:11.930 Initializing NVMe Controllers 00:26:11.930 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:11.930 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:26:11.930 Associating TCP (addr:10.0.0.2 
subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:26:11.930 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:26:11.930 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:26:11.930 Initialization complete. Launching workers. 00:26:11.930 ======================================================== 00:26:11.930 Latency(us) 00:26:11.930 Device Information : IOPS MiB/s Average min max 00:26:11.930 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 7936.00 31.00 8088.58 1631.94 51831.35 00:26:11.930 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 9157.30 35.77 6988.55 1394.11 50726.44 00:26:11.930 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 9672.40 37.78 6616.25 1155.86 50296.50 00:26:11.930 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 7616.60 29.75 8418.25 1402.06 50906.55 00:26:11.930 ======================================================== 00:26:11.930 Total : 34382.29 134.31 7454.44 1155.86 51831.35 00:26:11.930 00:26:11.930 23:11:44 -- target/perf_adq.sh@104 -- # nvmftestfini 00:26:11.930 23:11:44 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:11.930 23:11:44 -- nvmf/common.sh@116 -- # sync 00:26:11.930 23:11:44 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:11.930 23:11:44 -- nvmf/common.sh@119 -- # set +e 00:26:11.930 23:11:44 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:11.930 23:11:44 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:11.930 rmmod nvme_tcp 00:26:11.930 rmmod nvme_fabrics 00:26:11.930 rmmod nvme_keyring 00:26:11.930 23:11:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:11.930 23:11:44 -- nvmf/common.sh@123 -- # set -e 00:26:11.930 23:11:44 -- nvmf/common.sh@124 -- # return 0 00:26:11.930 23:11:44 -- nvmf/common.sh@477 -- # '[' -n 3316616 ']' 00:26:11.930 23:11:44 -- nvmf/common.sh@478 -- # killprocess 3316616 00:26:11.930 
23:11:44 -- common/autotest_common.sh@926 -- # '[' -z 3316616 ']' 00:26:11.930 23:11:44 -- common/autotest_common.sh@930 -- # kill -0 3316616 00:26:11.930 23:11:44 -- common/autotest_common.sh@931 -- # uname 00:26:11.930 23:11:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:11.930 23:11:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3316616 00:26:11.931 23:11:44 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:11.931 23:11:44 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:11.931 23:11:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3316616' 00:26:11.931 killing process with pid 3316616 00:26:11.931 23:11:44 -- common/autotest_common.sh@945 -- # kill 3316616 00:26:11.931 23:11:44 -- common/autotest_common.sh@950 -- # wait 3316616 00:26:12.189 23:11:44 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:26:12.189 23:11:44 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:12.189 23:11:44 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:12.189 23:11:44 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:12.189 23:11:44 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:12.189 23:11:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:12.189 23:11:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:12.190 23:11:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:15.482 23:11:47 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:26:15.482 23:11:47 -- target/perf_adq.sh@106 -- # trap - SIGINT SIGTERM EXIT 00:26:15.482 00:26:15.482 real 0m52.509s 00:26:15.482 user 2m46.047s 00:26:15.482 sys 0m13.996s 00:26:15.482 23:11:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:15.482 23:11:47 -- common/autotest_common.sh@10 -- # set +x 00:26:15.482 ************************************ 00:26:15.482 END TEST nvmf_perf_adq 00:26:15.482 ************************************ 
00:26:15.482 23:11:47 -- nvmf/nvmf.sh@81 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:26:15.482 23:11:47 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:26:15.482 23:11:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:15.482 23:11:47 -- common/autotest_common.sh@10 -- # set +x 00:26:15.482 ************************************ 00:26:15.482 START TEST nvmf_shutdown 00:26:15.482 ************************************ 00:26:15.482 23:11:47 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:26:15.482 * Looking for test storage... 00:26:15.482 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:15.482 23:11:47 -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:15.482 23:11:47 -- nvmf/common.sh@7 -- # uname -s 00:26:15.482 23:11:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:15.482 23:11:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:15.482 23:11:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:15.482 23:11:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:15.482 23:11:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:15.482 23:11:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:15.482 23:11:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:15.482 23:11:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:15.482 23:11:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:15.482 23:11:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:15.482 23:11:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:26:15.482 23:11:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:26:15.482 23:11:47 -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:15.482 23:11:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:15.482 23:11:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:15.482 23:11:47 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:15.482 23:11:47 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:15.482 23:11:47 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:15.482 23:11:47 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:15.482 23:11:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:15.482 23:11:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:15.482 23:11:47 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:15.482 23:11:47 -- paths/export.sh@5 -- # export PATH 00:26:15.482 23:11:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:15.482 23:11:47 -- nvmf/common.sh@46 -- # : 0 00:26:15.482 23:11:47 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:15.482 23:11:47 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:15.482 23:11:47 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:15.482 23:11:47 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:15.482 23:11:47 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:15.483 23:11:47 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:15.483 23:11:47 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:15.483 23:11:47 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:15.483 23:11:47 -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:15.483 23:11:47 -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:15.483 23:11:47 -- target/shutdown.sh@146 
-- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:26:15.483 23:11:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:15.483 23:11:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:15.483 23:11:47 -- common/autotest_common.sh@10 -- # set +x 00:26:15.483 ************************************ 00:26:15.483 START TEST nvmf_shutdown_tc1 00:26:15.483 ************************************ 00:26:15.483 23:11:47 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc1 00:26:15.483 23:11:47 -- target/shutdown.sh@74 -- # starttarget 00:26:15.483 23:11:47 -- target/shutdown.sh@15 -- # nvmftestinit 00:26:15.483 23:11:47 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:15.483 23:11:47 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:15.483 23:11:47 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:15.483 23:11:47 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:15.483 23:11:47 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:15.483 23:11:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:15.483 23:11:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:15.483 23:11:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:15.483 23:11:47 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:26:15.483 23:11:47 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:26:15.483 23:11:47 -- nvmf/common.sh@284 -- # xtrace_disable 00:26:15.483 23:11:47 -- common/autotest_common.sh@10 -- # set +x 00:26:22.057 23:11:54 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:22.057 23:11:54 -- nvmf/common.sh@290 -- # pci_devs=() 00:26:22.057 23:11:54 -- nvmf/common.sh@290 -- # local -a pci_devs 00:26:22.057 23:11:54 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:26:22.057 23:11:54 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:26:22.057 23:11:54 -- nvmf/common.sh@292 -- # pci_drivers=() 00:26:22.057 23:11:54 -- nvmf/common.sh@292 -- # local -A pci_drivers 
00:26:22.057 23:11:54 -- nvmf/common.sh@294 -- # net_devs=() 00:26:22.057 23:11:54 -- nvmf/common.sh@294 -- # local -ga net_devs 00:26:22.057 23:11:54 -- nvmf/common.sh@295 -- # e810=() 00:26:22.057 23:11:54 -- nvmf/common.sh@295 -- # local -ga e810 00:26:22.057 23:11:54 -- nvmf/common.sh@296 -- # x722=() 00:26:22.057 23:11:54 -- nvmf/common.sh@296 -- # local -ga x722 00:26:22.057 23:11:54 -- nvmf/common.sh@297 -- # mlx=() 00:26:22.057 23:11:54 -- nvmf/common.sh@297 -- # local -ga mlx 00:26:22.057 23:11:54 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:22.057 23:11:54 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:22.057 23:11:54 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:22.057 23:11:54 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:22.057 23:11:54 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:22.057 23:11:54 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:22.057 23:11:54 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:22.057 23:11:54 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:22.057 23:11:54 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:22.057 23:11:54 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:22.057 23:11:54 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:22.057 23:11:54 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:26:22.057 23:11:54 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:26:22.057 23:11:54 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:26:22.057 23:11:54 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:26:22.057 23:11:54 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:26:22.057 23:11:54 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:26:22.057 23:11:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 
00:26:22.057 23:11:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:22.057 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:22.057 23:11:54 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:22.058 23:11:54 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:22.058 23:11:54 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:22.058 23:11:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:22.058 23:11:54 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:22.058 23:11:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:22.058 23:11:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:22.058 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:22.058 23:11:54 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:22.058 23:11:54 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:22.058 23:11:54 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:22.058 23:11:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:22.058 23:11:54 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:22.058 23:11:54 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:26:22.058 23:11:54 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:26:22.058 23:11:54 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:26:22.058 23:11:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:22.058 23:11:54 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:22.058 23:11:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:22.058 23:11:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:22.058 23:11:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:22.058 Found net devices under 0000:af:00.0: cvl_0_0 00:26:22.058 23:11:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:22.058 23:11:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:22.058 23:11:54 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:22.058 23:11:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:22.058 23:11:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:22.058 23:11:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:22.058 Found net devices under 0000:af:00.1: cvl_0_1 00:26:22.058 23:11:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:22.058 23:11:54 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:26:22.058 23:11:54 -- nvmf/common.sh@402 -- # is_hw=yes 00:26:22.058 23:11:54 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:26:22.058 23:11:54 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:26:22.058 23:11:54 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:26:22.058 23:11:54 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:22.058 23:11:54 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:22.058 23:11:54 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:22.058 23:11:54 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:26:22.058 23:11:54 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:22.058 23:11:54 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:22.058 23:11:54 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:26:22.058 23:11:54 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:22.058 23:11:54 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:22.058 23:11:54 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:26:22.058 23:11:54 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:26:22.058 23:11:54 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:26:22.058 23:11:54 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:22.317 23:11:54 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:22.317 23:11:54 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:26:22.317 23:11:54 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:26:22.317 23:11:54 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:22.317 23:11:54 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:22.317 23:11:54 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:22.575 23:11:54 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:26:22.575 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:22.575 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.149 ms 00:26:22.575 00:26:22.575 --- 10.0.0.2 ping statistics --- 00:26:22.575 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:22.575 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:26:22.575 23:11:54 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:22.575 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:22.575 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.226 ms 00:26:22.575 00:26:22.575 --- 10.0.0.1 ping statistics --- 00:26:22.575 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:22.575 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:26:22.575 23:11:54 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:22.575 23:11:54 -- nvmf/common.sh@410 -- # return 0 00:26:22.575 23:11:54 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:26:22.575 23:11:54 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:22.575 23:11:54 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:22.575 23:11:54 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:22.575 23:11:54 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:22.575 23:11:54 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:22.575 23:11:54 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:22.575 23:11:54 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:26:22.575 23:11:54 -- nvmf/common.sh@467 -- # 
timing_enter start_nvmf_tgt 00:26:22.575 23:11:54 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:22.575 23:11:54 -- common/autotest_common.sh@10 -- # set +x 00:26:22.575 23:11:54 -- nvmf/common.sh@469 -- # nvmfpid=3322567 00:26:22.575 23:11:54 -- nvmf/common.sh@470 -- # waitforlisten 3322567 00:26:22.575 23:11:54 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:26:22.575 23:11:54 -- common/autotest_common.sh@819 -- # '[' -z 3322567 ']' 00:26:22.575 23:11:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:22.575 23:11:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:22.575 23:11:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:22.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:22.575 23:11:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:22.575 23:11:54 -- common/autotest_common.sh@10 -- # set +x 00:26:22.575 [2024-07-24 23:11:54.858381] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:26:22.575 [2024-07-24 23:11:54.858427] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:22.575 EAL: No free 2048 kB hugepages reported on node 1 00:26:22.575 [2024-07-24 23:11:54.933950] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:22.575 [2024-07-24 23:11:54.970951] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:22.575 [2024-07-24 23:11:54.971076] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:26:22.575 [2024-07-24 23:11:54.971085] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:22.575 [2024-07-24 23:11:54.971094] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:22.576 [2024-07-24 23:11:54.971192] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:22.576 [2024-07-24 23:11:54.971280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:22.576 [2024-07-24 23:11:54.971387] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:22.576 [2024-07-24 23:11:54.971389] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:26:23.512 23:11:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:23.512 23:11:55 -- common/autotest_common.sh@852 -- # return 0 00:26:23.512 23:11:55 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:23.512 23:11:55 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:23.512 23:11:55 -- common/autotest_common.sh@10 -- # set +x 00:26:23.512 23:11:55 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:23.512 23:11:55 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:23.512 23:11:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:23.512 23:11:55 -- common/autotest_common.sh@10 -- # set +x 00:26:23.512 [2024-07-24 23:11:55.699048] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:23.512 23:11:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:23.512 23:11:55 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:26:23.512 23:11:55 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:26:23.512 23:11:55 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:23.512 23:11:55 -- common/autotest_common.sh@10 -- # set +x 00:26:23.512 23:11:55 -- target/shutdown.sh@26 -- # rm -rf 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:23.512 23:11:55 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:23.512 23:11:55 -- target/shutdown.sh@28 -- # cat 00:26:23.512 23:11:55 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:23.512 23:11:55 -- target/shutdown.sh@28 -- # cat 00:26:23.512 23:11:55 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:23.512 23:11:55 -- target/shutdown.sh@28 -- # cat 00:26:23.512 23:11:55 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:23.512 23:11:55 -- target/shutdown.sh@28 -- # cat 00:26:23.512 23:11:55 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:23.512 23:11:55 -- target/shutdown.sh@28 -- # cat 00:26:23.512 23:11:55 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:23.512 23:11:55 -- target/shutdown.sh@28 -- # cat 00:26:23.512 23:11:55 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:23.512 23:11:55 -- target/shutdown.sh@28 -- # cat 00:26:23.512 23:11:55 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:23.512 23:11:55 -- target/shutdown.sh@28 -- # cat 00:26:23.512 23:11:55 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:23.512 23:11:55 -- target/shutdown.sh@28 -- # cat 00:26:23.512 23:11:55 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:23.512 23:11:55 -- target/shutdown.sh@28 -- # cat 00:26:23.512 23:11:55 -- target/shutdown.sh@35 -- # rpc_cmd 00:26:23.512 23:11:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:23.512 23:11:55 -- common/autotest_common.sh@10 -- # set +x 00:26:23.512 Malloc1 00:26:23.512 [2024-07-24 23:11:55.814029] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:23.512 Malloc2 00:26:23.512 Malloc3 00:26:23.512 Malloc4 00:26:23.771 Malloc5 00:26:23.771 Malloc6 00:26:23.771 Malloc7 00:26:23.771 Malloc8 00:26:23.771 
Malloc9 00:26:23.771 Malloc10 00:26:24.030 23:11:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:24.030 23:11:56 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:26:24.030 23:11:56 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:24.030 23:11:56 -- common/autotest_common.sh@10 -- # set +x 00:26:24.030 23:11:56 -- target/shutdown.sh@78 -- # perfpid=3322882 00:26:24.030 23:11:56 -- target/shutdown.sh@79 -- # waitforlisten 3322882 /var/tmp/bdevperf.sock 00:26:24.030 23:11:56 -- common/autotest_common.sh@819 -- # '[' -z 3322882 ']' 00:26:24.030 23:11:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:24.030 23:11:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:24.030 23:11:56 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:26:24.030 23:11:56 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:24.030 23:11:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:24.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:26:24.030 23:11:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:24.030 23:11:56 -- nvmf/common.sh@520 -- # config=() 00:26:24.030 23:11:56 -- common/autotest_common.sh@10 -- # set +x 00:26:24.030 23:11:56 -- nvmf/common.sh@520 -- # local subsystem config 00:26:24.030 23:11:56 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:24.030 23:11:56 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:24.030 { 00:26:24.030 "params": { 00:26:24.030 "name": "Nvme$subsystem", 00:26:24.030 "trtype": "$TEST_TRANSPORT", 00:26:24.030 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:24.030 "adrfam": "ipv4", 00:26:24.030 "trsvcid": "$NVMF_PORT", 00:26:24.030 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:24.030 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:24.030 "hdgst": ${hdgst:-false}, 00:26:24.030 "ddgst": ${ddgst:-false} 00:26:24.030 }, 00:26:24.030 "method": "bdev_nvme_attach_controller" 00:26:24.030 } 00:26:24.030 EOF 00:26:24.030 )") 00:26:24.030 23:11:56 -- nvmf/common.sh@542 -- # cat 00:26:24.030 23:11:56 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:24.030 23:11:56 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:24.030 { 00:26:24.030 "params": { 00:26:24.030 "name": "Nvme$subsystem", 00:26:24.030 "trtype": "$TEST_TRANSPORT", 00:26:24.030 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:24.030 "adrfam": "ipv4", 00:26:24.030 "trsvcid": "$NVMF_PORT", 00:26:24.030 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:24.030 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:24.030 "hdgst": ${hdgst:-false}, 00:26:24.030 "ddgst": ${ddgst:-false} 00:26:24.030 }, 00:26:24.030 "method": "bdev_nvme_attach_controller" 00:26:24.030 } 00:26:24.030 EOF 00:26:24.030 )") 00:26:24.030 23:11:56 -- nvmf/common.sh@542 -- # cat 00:26:24.030 23:11:56 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:24.030 23:11:56 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:24.030 { 00:26:24.030 "params": { 00:26:24.030 "name": 
"Nvme$subsystem", 00:26:24.030 "trtype": "$TEST_TRANSPORT", 00:26:24.030 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:24.030 "adrfam": "ipv4", 00:26:24.030 "trsvcid": "$NVMF_PORT", 00:26:24.030 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:24.030 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:24.030 "hdgst": ${hdgst:-false}, 00:26:24.030 "ddgst": ${ddgst:-false} 00:26:24.030 }, 00:26:24.030 "method": "bdev_nvme_attach_controller" 00:26:24.030 } 00:26:24.030 EOF 00:26:24.030 )") 00:26:24.030 23:11:56 -- nvmf/common.sh@542 -- # cat 00:26:24.030 23:11:56 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:24.030 23:11:56 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:24.030 { 00:26:24.030 "params": { 00:26:24.030 "name": "Nvme$subsystem", 00:26:24.030 "trtype": "$TEST_TRANSPORT", 00:26:24.030 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:24.030 "adrfam": "ipv4", 00:26:24.030 "trsvcid": "$NVMF_PORT", 00:26:24.030 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:24.030 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:24.030 "hdgst": ${hdgst:-false}, 00:26:24.030 "ddgst": ${ddgst:-false} 00:26:24.030 }, 00:26:24.030 "method": "bdev_nvme_attach_controller" 00:26:24.030 } 00:26:24.030 EOF 00:26:24.030 )") 00:26:24.030 23:11:56 -- nvmf/common.sh@542 -- # cat 00:26:24.030 23:11:56 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:24.030 23:11:56 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:24.030 { 00:26:24.030 "params": { 00:26:24.030 "name": "Nvme$subsystem", 00:26:24.030 "trtype": "$TEST_TRANSPORT", 00:26:24.030 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:24.030 "adrfam": "ipv4", 00:26:24.030 "trsvcid": "$NVMF_PORT", 00:26:24.030 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:24.030 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:24.030 "hdgst": ${hdgst:-false}, 00:26:24.030 "ddgst": ${ddgst:-false} 00:26:24.030 }, 00:26:24.030 "method": "bdev_nvme_attach_controller" 00:26:24.030 } 00:26:24.030 EOF 
00:26:24.030 )") 00:26:24.030 23:11:56 -- nvmf/common.sh@542 -- # cat 00:26:24.030 [2024-07-24 23:11:56.292557] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:26:24.030 [2024-07-24 23:11:56.292610] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:26:24.030 23:11:56 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:24.030 23:11:56 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:24.030 { 00:26:24.030 "params": { 00:26:24.030 "name": "Nvme$subsystem", 00:26:24.030 "trtype": "$TEST_TRANSPORT", 00:26:24.031 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:24.031 "adrfam": "ipv4", 00:26:24.031 "trsvcid": "$NVMF_PORT", 00:26:24.031 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:24.031 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:24.031 "hdgst": ${hdgst:-false}, 00:26:24.031 "ddgst": ${ddgst:-false} 00:26:24.031 }, 00:26:24.031 "method": "bdev_nvme_attach_controller" 00:26:24.031 } 00:26:24.031 EOF 00:26:24.031 )") 00:26:24.031 23:11:56 -- nvmf/common.sh@542 -- # cat 00:26:24.031 23:11:56 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:24.031 23:11:56 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:24.031 { 00:26:24.031 "params": { 00:26:24.031 "name": "Nvme$subsystem", 00:26:24.031 "trtype": "$TEST_TRANSPORT", 00:26:24.031 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:24.031 "adrfam": "ipv4", 00:26:24.031 "trsvcid": "$NVMF_PORT", 00:26:24.031 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:24.031 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:24.031 "hdgst": ${hdgst:-false}, 00:26:24.031 "ddgst": ${ddgst:-false} 00:26:24.031 }, 00:26:24.031 "method": "bdev_nvme_attach_controller" 00:26:24.031 } 00:26:24.031 EOF 00:26:24.031 )") 00:26:24.031 23:11:56 -- nvmf/common.sh@542 -- # cat 00:26:24.031 23:11:56 -- 
nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:24.031 23:11:56 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:24.031 { 00:26:24.031 "params": { 00:26:24.031 "name": "Nvme$subsystem", 00:26:24.031 "trtype": "$TEST_TRANSPORT", 00:26:24.031 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:24.031 "adrfam": "ipv4", 00:26:24.031 "trsvcid": "$NVMF_PORT", 00:26:24.031 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:24.031 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:24.031 "hdgst": ${hdgst:-false}, 00:26:24.031 "ddgst": ${ddgst:-false} 00:26:24.031 }, 00:26:24.031 "method": "bdev_nvme_attach_controller" 00:26:24.031 } 00:26:24.031 EOF 00:26:24.031 )") 00:26:24.031 23:11:56 -- nvmf/common.sh@542 -- # cat 00:26:24.031 23:11:56 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:24.031 23:11:56 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:24.031 { 00:26:24.031 "params": { 00:26:24.031 "name": "Nvme$subsystem", 00:26:24.031 "trtype": "$TEST_TRANSPORT", 00:26:24.031 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:24.031 "adrfam": "ipv4", 00:26:24.031 "trsvcid": "$NVMF_PORT", 00:26:24.031 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:24.031 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:24.031 "hdgst": ${hdgst:-false}, 00:26:24.031 "ddgst": ${ddgst:-false} 00:26:24.031 }, 00:26:24.031 "method": "bdev_nvme_attach_controller" 00:26:24.031 } 00:26:24.031 EOF 00:26:24.031 )") 00:26:24.031 23:11:56 -- nvmf/common.sh@542 -- # cat 00:26:24.031 23:11:56 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:24.031 23:11:56 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:24.031 { 00:26:24.031 "params": { 00:26:24.031 "name": "Nvme$subsystem", 00:26:24.031 "trtype": "$TEST_TRANSPORT", 00:26:24.031 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:24.031 "adrfam": "ipv4", 00:26:24.031 "trsvcid": "$NVMF_PORT", 00:26:24.031 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:24.031 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:26:24.031 "hdgst": ${hdgst:-false}, 00:26:24.031 "ddgst": ${ddgst:-false} 00:26:24.031 }, 00:26:24.031 "method": "bdev_nvme_attach_controller" 00:26:24.031 } 00:26:24.031 EOF 00:26:24.031 )") 00:26:24.031 EAL: No free 2048 kB hugepages reported on node 1 00:26:24.031 23:11:56 -- nvmf/common.sh@542 -- # cat 00:26:24.031 23:11:56 -- nvmf/common.sh@544 -- # jq . 00:26:24.031 23:11:56 -- nvmf/common.sh@545 -- # IFS=, 00:26:24.031 23:11:56 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:24.031 "params": { 00:26:24.031 "name": "Nvme1", 00:26:24.031 "trtype": "tcp", 00:26:24.031 "traddr": "10.0.0.2", 00:26:24.031 "adrfam": "ipv4", 00:26:24.031 "trsvcid": "4420", 00:26:24.031 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:24.031 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:24.031 "hdgst": false, 00:26:24.031 "ddgst": false 00:26:24.031 }, 00:26:24.031 "method": "bdev_nvme_attach_controller" 00:26:24.031 },{ 00:26:24.031 "params": { 00:26:24.031 "name": "Nvme2", 00:26:24.031 "trtype": "tcp", 00:26:24.031 "traddr": "10.0.0.2", 00:26:24.031 "adrfam": "ipv4", 00:26:24.031 "trsvcid": "4420", 00:26:24.031 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:24.031 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:24.031 "hdgst": false, 00:26:24.031 "ddgst": false 00:26:24.031 }, 00:26:24.031 "method": "bdev_nvme_attach_controller" 00:26:24.031 },{ 00:26:24.031 "params": { 00:26:24.031 "name": "Nvme3", 00:26:24.031 "trtype": "tcp", 00:26:24.031 "traddr": "10.0.0.2", 00:26:24.031 "adrfam": "ipv4", 00:26:24.031 "trsvcid": "4420", 00:26:24.031 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:24.031 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:24.031 "hdgst": false, 00:26:24.031 "ddgst": false 00:26:24.031 }, 00:26:24.031 "method": "bdev_nvme_attach_controller" 00:26:24.031 },{ 00:26:24.031 "params": { 00:26:24.031 "name": "Nvme4", 00:26:24.031 "trtype": "tcp", 00:26:24.031 "traddr": "10.0.0.2", 00:26:24.031 "adrfam": "ipv4", 00:26:24.031 "trsvcid": 
"4420", 00:26:24.031 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:24.031 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:24.031 "hdgst": false, 00:26:24.031 "ddgst": false 00:26:24.031 }, 00:26:24.031 "method": "bdev_nvme_attach_controller" 00:26:24.031 },{ 00:26:24.031 "params": { 00:26:24.031 "name": "Nvme5", 00:26:24.031 "trtype": "tcp", 00:26:24.031 "traddr": "10.0.0.2", 00:26:24.031 "adrfam": "ipv4", 00:26:24.031 "trsvcid": "4420", 00:26:24.031 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:24.031 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:24.031 "hdgst": false, 00:26:24.031 "ddgst": false 00:26:24.031 }, 00:26:24.031 "method": "bdev_nvme_attach_controller" 00:26:24.031 },{ 00:26:24.031 "params": { 00:26:24.031 "name": "Nvme6", 00:26:24.031 "trtype": "tcp", 00:26:24.031 "traddr": "10.0.0.2", 00:26:24.031 "adrfam": "ipv4", 00:26:24.031 "trsvcid": "4420", 00:26:24.031 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:24.031 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:24.031 "hdgst": false, 00:26:24.031 "ddgst": false 00:26:24.031 }, 00:26:24.031 "method": "bdev_nvme_attach_controller" 00:26:24.031 },{ 00:26:24.031 "params": { 00:26:24.031 "name": "Nvme7", 00:26:24.031 "trtype": "tcp", 00:26:24.031 "traddr": "10.0.0.2", 00:26:24.031 "adrfam": "ipv4", 00:26:24.031 "trsvcid": "4420", 00:26:24.031 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:24.031 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:24.031 "hdgst": false, 00:26:24.031 "ddgst": false 00:26:24.031 }, 00:26:24.031 "method": "bdev_nvme_attach_controller" 00:26:24.031 },{ 00:26:24.031 "params": { 00:26:24.031 "name": "Nvme8", 00:26:24.031 "trtype": "tcp", 00:26:24.031 "traddr": "10.0.0.2", 00:26:24.031 "adrfam": "ipv4", 00:26:24.031 "trsvcid": "4420", 00:26:24.031 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:26:24.031 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:26:24.031 "hdgst": false, 00:26:24.031 "ddgst": false 00:26:24.031 }, 00:26:24.031 "method": "bdev_nvme_attach_controller" 00:26:24.031 },{ 00:26:24.031 
"params": { 00:26:24.031 "name": "Nvme9", 00:26:24.031 "trtype": "tcp", 00:26:24.031 "traddr": "10.0.0.2", 00:26:24.031 "adrfam": "ipv4", 00:26:24.031 "trsvcid": "4420", 00:26:24.031 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:24.031 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:26:24.031 "hdgst": false, 00:26:24.031 "ddgst": false 00:26:24.031 }, 00:26:24.031 "method": "bdev_nvme_attach_controller" 00:26:24.031 },{ 00:26:24.031 "params": { 00:26:24.031 "name": "Nvme10", 00:26:24.031 "trtype": "tcp", 00:26:24.031 "traddr": "10.0.0.2", 00:26:24.031 "adrfam": "ipv4", 00:26:24.031 "trsvcid": "4420", 00:26:24.031 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:24.031 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:24.031 "hdgst": false, 00:26:24.031 "ddgst": false 00:26:24.031 }, 00:26:24.031 "method": "bdev_nvme_attach_controller" 00:26:24.031 }' 00:26:24.031 [2024-07-24 23:11:56.366296] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:24.031 [2024-07-24 23:11:56.401911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:26.569 23:11:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:26.569 23:11:58 -- common/autotest_common.sh@852 -- # return 0 00:26:26.569 23:11:58 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:26:26.569 23:11:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:26.569 23:11:58 -- common/autotest_common.sh@10 -- # set +x 00:26:26.569 23:11:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:26.569 23:11:58 -- target/shutdown.sh@83 -- # kill -9 3322882 00:26:26.569 23:11:58 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:26:26.569 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 3322882 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:26:26.569 23:11:58 -- target/shutdown.sh@87 -- # sleep 1 00:26:27.181 23:11:59 
-- target/shutdown.sh@88 -- # kill -0 3322567 00:26:27.181 23:11:59 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:26:27.181 23:11:59 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:27.181 23:11:59 -- nvmf/common.sh@520 -- # config=() 00:26:27.181 23:11:59 -- nvmf/common.sh@520 -- # local subsystem config 00:26:27.181 23:11:59 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:27.181 23:11:59 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:27.181 { 00:26:27.181 "params": { 00:26:27.181 "name": "Nvme$subsystem", 00:26:27.181 "trtype": "$TEST_TRANSPORT", 00:26:27.181 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:27.181 "adrfam": "ipv4", 00:26:27.181 "trsvcid": "$NVMF_PORT", 00:26:27.181 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:27.181 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:27.181 "hdgst": ${hdgst:-false}, 00:26:27.181 "ddgst": ${ddgst:-false} 00:26:27.181 }, 00:26:27.181 "method": "bdev_nvme_attach_controller" 00:26:27.181 } 00:26:27.181 EOF 00:26:27.181 )") 00:26:27.181 23:11:59 -- nvmf/common.sh@542 -- # cat 00:26:27.181 23:11:59 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:27.181 23:11:59 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:27.181 { 00:26:27.181 "params": { 00:26:27.181 "name": "Nvme$subsystem", 00:26:27.181 "trtype": "$TEST_TRANSPORT", 00:26:27.181 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:27.181 "adrfam": "ipv4", 00:26:27.181 "trsvcid": "$NVMF_PORT", 00:26:27.181 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:27.181 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:27.181 "hdgst": ${hdgst:-false}, 00:26:27.181 "ddgst": ${ddgst:-false} 00:26:27.181 }, 00:26:27.181 "method": "bdev_nvme_attach_controller" 00:26:27.181 } 00:26:27.181 EOF 00:26:27.181 )") 00:26:27.181 23:11:59 -- nvmf/common.sh@542 -- # cat 00:26:27.181 
23:11:59 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:27.181 23:11:59 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:27.181 { 00:26:27.181 "params": { 00:26:27.181 "name": "Nvme$subsystem", 00:26:27.181 "trtype": "$TEST_TRANSPORT", 00:26:27.181 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:27.181 "adrfam": "ipv4", 00:26:27.181 "trsvcid": "$NVMF_PORT", 00:26:27.181 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:27.181 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:27.181 "hdgst": ${hdgst:-false}, 00:26:27.181 "ddgst": ${ddgst:-false} 00:26:27.181 }, 00:26:27.181 "method": "bdev_nvme_attach_controller" 00:26:27.181 } 00:26:27.181 EOF 00:26:27.181 )") 00:26:27.181 23:11:59 -- nvmf/common.sh@542 -- # cat 00:26:27.181 23:11:59 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:27.181 23:11:59 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:27.181 { 00:26:27.181 "params": { 00:26:27.181 "name": "Nvme$subsystem", 00:26:27.181 "trtype": "$TEST_TRANSPORT", 00:26:27.181 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:27.181 "adrfam": "ipv4", 00:26:27.181 "trsvcid": "$NVMF_PORT", 00:26:27.181 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:27.181 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:27.181 "hdgst": ${hdgst:-false}, 00:26:27.181 "ddgst": ${ddgst:-false} 00:26:27.181 }, 00:26:27.181 "method": "bdev_nvme_attach_controller" 00:26:27.181 } 00:26:27.181 EOF 00:26:27.181 )") 00:26:27.181 23:11:59 -- nvmf/common.sh@542 -- # cat 00:26:27.181 23:11:59 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:27.181 23:11:59 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:27.181 { 00:26:27.181 "params": { 00:26:27.181 "name": "Nvme$subsystem", 00:26:27.181 "trtype": "$TEST_TRANSPORT", 00:26:27.181 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:27.181 "adrfam": "ipv4", 00:26:27.181 "trsvcid": "$NVMF_PORT", 00:26:27.181 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:27.181 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:26:27.181 "hdgst": ${hdgst:-false}, 00:26:27.181 "ddgst": ${ddgst:-false} 00:26:27.181 }, 00:26:27.181 "method": "bdev_nvme_attach_controller" 00:26:27.181 } 00:26:27.181 EOF 00:26:27.181 )") 00:26:27.181 23:11:59 -- nvmf/common.sh@542 -- # cat 00:26:27.181 23:11:59 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:27.181 23:11:59 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:27.181 { 00:26:27.181 "params": { 00:26:27.181 "name": "Nvme$subsystem", 00:26:27.181 "trtype": "$TEST_TRANSPORT", 00:26:27.181 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:27.181 "adrfam": "ipv4", 00:26:27.181 "trsvcid": "$NVMF_PORT", 00:26:27.181 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:27.181 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:27.181 "hdgst": ${hdgst:-false}, 00:26:27.181 "ddgst": ${ddgst:-false} 00:26:27.181 }, 00:26:27.181 "method": "bdev_nvme_attach_controller" 00:26:27.181 } 00:26:27.181 EOF 00:26:27.181 )") 00:26:27.181 23:11:59 -- nvmf/common.sh@542 -- # cat 00:26:27.181 [2024-07-24 23:11:59.486234] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:26:27.181 [2024-07-24 23:11:59.486289] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3323431 ] 00:26:27.181 23:11:59 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:27.181 23:11:59 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:27.181 { 00:26:27.181 "params": { 00:26:27.181 "name": "Nvme$subsystem", 00:26:27.181 "trtype": "$TEST_TRANSPORT", 00:26:27.181 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:27.181 "adrfam": "ipv4", 00:26:27.181 "trsvcid": "$NVMF_PORT", 00:26:27.181 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:27.181 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:27.181 "hdgst": ${hdgst:-false}, 00:26:27.181 "ddgst": ${ddgst:-false} 00:26:27.181 }, 00:26:27.181 "method": "bdev_nvme_attach_controller" 00:26:27.181 } 00:26:27.181 EOF 00:26:27.181 )") 00:26:27.181 23:11:59 -- nvmf/common.sh@542 -- # cat 00:26:27.181 23:11:59 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:27.181 23:11:59 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:27.181 { 00:26:27.181 "params": { 00:26:27.181 "name": "Nvme$subsystem", 00:26:27.181 "trtype": "$TEST_TRANSPORT", 00:26:27.181 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:27.181 "adrfam": "ipv4", 00:26:27.181 "trsvcid": "$NVMF_PORT", 00:26:27.181 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:27.181 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:27.181 "hdgst": ${hdgst:-false}, 00:26:27.181 "ddgst": ${ddgst:-false} 00:26:27.181 }, 00:26:27.181 "method": "bdev_nvme_attach_controller" 00:26:27.181 } 00:26:27.181 EOF 00:26:27.181 )") 00:26:27.181 23:11:59 -- nvmf/common.sh@542 -- # cat 00:26:27.181 23:11:59 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:27.181 23:11:59 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:27.181 { 00:26:27.181 "params": { 00:26:27.181 "name": 
"Nvme$subsystem", 00:26:27.181 "trtype": "$TEST_TRANSPORT", 00:26:27.181 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:27.181 "adrfam": "ipv4", 00:26:27.181 "trsvcid": "$NVMF_PORT", 00:26:27.181 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:27.181 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:27.181 "hdgst": ${hdgst:-false}, 00:26:27.181 "ddgst": ${ddgst:-false} 00:26:27.181 }, 00:26:27.181 "method": "bdev_nvme_attach_controller" 00:26:27.181 } 00:26:27.181 EOF 00:26:27.181 )") 00:26:27.182 23:11:59 -- nvmf/common.sh@542 -- # cat 00:26:27.182 23:11:59 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:27.182 23:11:59 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:27.182 { 00:26:27.182 "params": { 00:26:27.182 "name": "Nvme$subsystem", 00:26:27.182 "trtype": "$TEST_TRANSPORT", 00:26:27.182 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:27.182 "adrfam": "ipv4", 00:26:27.182 "trsvcid": "$NVMF_PORT", 00:26:27.182 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:27.182 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:27.182 "hdgst": ${hdgst:-false}, 00:26:27.182 "ddgst": ${ddgst:-false} 00:26:27.182 }, 00:26:27.182 "method": "bdev_nvme_attach_controller" 00:26:27.182 } 00:26:27.182 EOF 00:26:27.182 )") 00:26:27.182 23:11:59 -- nvmf/common.sh@542 -- # cat 00:26:27.182 23:11:59 -- nvmf/common.sh@544 -- # jq . 
00:26:27.182 23:11:59 -- nvmf/common.sh@545 -- # IFS=, 00:26:27.182 23:11:59 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:27.182 "params": { 00:26:27.182 "name": "Nvme1", 00:26:27.182 "trtype": "tcp", 00:26:27.182 "traddr": "10.0.0.2", 00:26:27.182 "adrfam": "ipv4", 00:26:27.182 "trsvcid": "4420", 00:26:27.182 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:27.182 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:27.182 "hdgst": false, 00:26:27.182 "ddgst": false 00:26:27.182 }, 00:26:27.182 "method": "bdev_nvme_attach_controller" 00:26:27.182 },{ 00:26:27.182 "params": { 00:26:27.182 "name": "Nvme2", 00:26:27.182 "trtype": "tcp", 00:26:27.182 "traddr": "10.0.0.2", 00:26:27.182 "adrfam": "ipv4", 00:26:27.182 "trsvcid": "4420", 00:26:27.182 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:27.182 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:27.182 "hdgst": false, 00:26:27.182 "ddgst": false 00:26:27.182 }, 00:26:27.182 "method": "bdev_nvme_attach_controller" 00:26:27.182 },{ 00:26:27.182 "params": { 00:26:27.182 "name": "Nvme3", 00:26:27.182 "trtype": "tcp", 00:26:27.182 "traddr": "10.0.0.2", 00:26:27.182 "adrfam": "ipv4", 00:26:27.182 "trsvcid": "4420", 00:26:27.182 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:27.182 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:27.182 "hdgst": false, 00:26:27.182 "ddgst": false 00:26:27.182 }, 00:26:27.182 "method": "bdev_nvme_attach_controller" 00:26:27.182 },{ 00:26:27.182 "params": { 00:26:27.182 "name": "Nvme4", 00:26:27.182 "trtype": "tcp", 00:26:27.182 "traddr": "10.0.0.2", 00:26:27.182 "adrfam": "ipv4", 00:26:27.182 "trsvcid": "4420", 00:26:27.182 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:27.182 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:27.182 "hdgst": false, 00:26:27.182 "ddgst": false 00:26:27.182 }, 00:26:27.182 "method": "bdev_nvme_attach_controller" 00:26:27.182 },{ 00:26:27.182 "params": { 00:26:27.182 "name": "Nvme5", 00:26:27.182 "trtype": "tcp", 00:26:27.182 "traddr": "10.0.0.2", 00:26:27.182 "adrfam": "ipv4", 
00:26:27.182 "trsvcid": "4420", 00:26:27.182 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:27.182 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:27.182 "hdgst": false, 00:26:27.182 "ddgst": false 00:26:27.182 }, 00:26:27.182 "method": "bdev_nvme_attach_controller" 00:26:27.182 },{ 00:26:27.182 "params": { 00:26:27.182 "name": "Nvme6", 00:26:27.182 "trtype": "tcp", 00:26:27.182 "traddr": "10.0.0.2", 00:26:27.182 "adrfam": "ipv4", 00:26:27.182 "trsvcid": "4420", 00:26:27.182 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:27.182 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:27.182 "hdgst": false, 00:26:27.182 "ddgst": false 00:26:27.182 }, 00:26:27.182 "method": "bdev_nvme_attach_controller" 00:26:27.182 },{ 00:26:27.182 "params": { 00:26:27.182 "name": "Nvme7", 00:26:27.182 "trtype": "tcp", 00:26:27.182 "traddr": "10.0.0.2", 00:26:27.182 "adrfam": "ipv4", 00:26:27.182 "trsvcid": "4420", 00:26:27.182 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:27.182 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:27.182 "hdgst": false, 00:26:27.182 "ddgst": false 00:26:27.182 }, 00:26:27.182 "method": "bdev_nvme_attach_controller" 00:26:27.182 },{ 00:26:27.182 "params": { 00:26:27.182 "name": "Nvme8", 00:26:27.182 "trtype": "tcp", 00:26:27.182 "traddr": "10.0.0.2", 00:26:27.182 "adrfam": "ipv4", 00:26:27.182 "trsvcid": "4420", 00:26:27.182 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:26:27.182 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:26:27.182 "hdgst": false, 00:26:27.182 "ddgst": false 00:26:27.182 }, 00:26:27.182 "method": "bdev_nvme_attach_controller" 00:26:27.182 },{ 00:26:27.182 "params": { 00:26:27.182 "name": "Nvme9", 00:26:27.182 "trtype": "tcp", 00:26:27.182 "traddr": "10.0.0.2", 00:26:27.182 "adrfam": "ipv4", 00:26:27.182 "trsvcid": "4420", 00:26:27.182 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:27.182 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:26:27.182 "hdgst": false, 00:26:27.182 "ddgst": false 00:26:27.182 }, 00:26:27.182 "method": "bdev_nvme_attach_controller" 
00:26:27.182 },{ 00:26:27.182 "params": { 00:26:27.182 "name": "Nvme10", 00:26:27.182 "trtype": "tcp", 00:26:27.182 "traddr": "10.0.0.2", 00:26:27.182 "adrfam": "ipv4", 00:26:27.182 "trsvcid": "4420", 00:26:27.182 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:27.182 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:27.182 "hdgst": false, 00:26:27.182 "ddgst": false 00:26:27.182 }, 00:26:27.182 "method": "bdev_nvme_attach_controller" 00:26:27.182 }' 00:26:27.182 EAL: No free 2048 kB hugepages reported on node 1 00:26:27.182 [2024-07-24 23:11:59.568250] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:27.471 [2024-07-24 23:11:59.604185] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:28.848 Running I/O for 1 seconds...
00:26:29.786
00:26:29.786 Latency(us)
00:26:29.786 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:29.786 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:29.786 Verification LBA range: start 0x0 length 0x400
00:26:29.786 Nvme1n1 : 1.05 494.81 30.93 0.00 0.00 126719.10 24326.96 104857.60
00:26:29.786 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:29.786 Verification LBA range: start 0x0 length 0x400
00:26:29.786 Nvme2n1 : 1.07 526.47 32.90 0.00 0.00 119499.39 9856.61 102760.45
00:26:29.786 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:29.786 Verification LBA range: start 0x0 length 0x400
00:26:29.786 Nvme3n1 : 1.11 508.63 31.79 0.00 0.00 118520.03 13002.34 94371.84
00:26:29.786 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:29.786 Verification LBA range: start 0x0 length 0x400
00:26:29.786 Nvme4n1 : 1.08 531.12 33.19 0.00 0.00 117021.63 12320.77 93532.98
00:26:29.786 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:29.786 Verification LBA range: start 0x0 length 0x400
00:26:29.786 Nvme5n1 : 1.11 509.95 31.87 0.00 0.00 117003.93 12635.34 97727.28
00:26:29.786 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:29.786 Verification LBA range: start 0x0 length 0x400
00:26:29.786 Nvme6n1 : 1.06 504.45 31.53 0.00 0.00 120409.40 4115.66 98985.57
00:26:29.786 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:29.786 Verification LBA range: start 0x0 length 0x400
00:26:29.786 Nvme7n1 : 1.11 434.14 27.13 0.00 0.00 135239.06 17930.65 120795.96
00:26:29.786 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:29.786 Verification LBA range: start 0x0 length 0x400
00:26:29.786 Nvme8n1 : 1.12 502.78 31.42 0.00 0.00 116785.62 7235.17 99405.00
00:26:29.786 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:29.786 Verification LBA range: start 0x0 length 0x400
00:26:29.786 Nvme9n1 : 1.08 448.11 28.01 0.00 0.00 134638.73 13054.77 124990.26
00:26:29.786 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:29.786 Verification LBA range: start 0x0 length 0x400
00:26:29.786 Nvme10n1 : 1.08 528.32 33.02 0.00 0.00 113913.60 7392.46 97307.85
00:26:29.786 ===================================================================================================================
00:26:29.786 Total : 4988.78 311.80 0.00 0.00 121533.67 4115.66 124990.26
00:26:30.045 23:12:02 -- target/shutdown.sh@93 -- # stoptarget 00:26:30.045 23:12:02 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:26:30.045 23:12:02 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:26:30.045 23:12:02 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:30.045 23:12:02 -- target/shutdown.sh@45 -- # nvmftestfini 00:26:30.045 23:12:02 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:30.045 23:12:02 -- nvmf/common.sh@116 -- # sync 00:26:30.045 23:12:02 --
nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:30.045 23:12:02 -- nvmf/common.sh@119 -- # set +e 00:26:30.045 23:12:02 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:30.045 23:12:02 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:30.045 rmmod nvme_tcp 00:26:30.045 rmmod nvme_fabrics 00:26:30.045 rmmod nvme_keyring 00:26:30.045 23:12:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:30.045 23:12:02 -- nvmf/common.sh@123 -- # set -e 00:26:30.045 23:12:02 -- nvmf/common.sh@124 -- # return 0 00:26:30.045 23:12:02 -- nvmf/common.sh@477 -- # '[' -n 3322567 ']' 00:26:30.045 23:12:02 -- nvmf/common.sh@478 -- # killprocess 3322567 00:26:30.045 23:12:02 -- common/autotest_common.sh@926 -- # '[' -z 3322567 ']' 00:26:30.045 23:12:02 -- common/autotest_common.sh@930 -- # kill -0 3322567 00:26:30.045 23:12:02 -- common/autotest_common.sh@931 -- # uname 00:26:30.045 23:12:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:30.045 23:12:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3322567 00:26:30.045 23:12:02 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:26:30.045 23:12:02 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:26:30.045 23:12:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3322567' 00:26:30.045 killing process with pid 3322567 00:26:30.045 23:12:02 -- common/autotest_common.sh@945 -- # kill 3322567 00:26:30.045 23:12:02 -- common/autotest_common.sh@950 -- # wait 3322567 00:26:30.613 23:12:02 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:26:30.613 23:12:02 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:30.613 23:12:02 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:30.613 23:12:02 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:30.613 23:12:02 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:30.613 23:12:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:30.613 23:12:02 -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:30.613 23:12:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:32.516 23:12:04 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:26:32.517 00:26:32.517 real 0m17.123s 00:26:32.517 user 0m37.255s 00:26:32.517 sys 0m7.162s 00:26:32.517 23:12:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:32.517 23:12:04 -- common/autotest_common.sh@10 -- # set +x 00:26:32.517 ************************************ 00:26:32.517 END TEST nvmf_shutdown_tc1 00:26:32.517 ************************************ 00:26:32.517 23:12:04 -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:26:32.517 23:12:04 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:32.517 23:12:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:32.517 23:12:04 -- common/autotest_common.sh@10 -- # set +x 00:26:32.517 ************************************ 00:26:32.517 START TEST nvmf_shutdown_tc2 00:26:32.517 ************************************ 00:26:32.517 23:12:04 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc2 00:26:32.517 23:12:04 -- target/shutdown.sh@98 -- # starttarget 00:26:32.517 23:12:04 -- target/shutdown.sh@15 -- # nvmftestinit 00:26:32.517 23:12:04 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:32.517 23:12:04 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:32.517 23:12:04 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:32.517 23:12:04 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:32.517 23:12:04 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:32.517 23:12:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:32.517 23:12:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:32.517 23:12:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:32.517 23:12:04 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:26:32.517 23:12:04 -- nvmf/common.sh@402 -- # 
gather_supported_nvmf_pci_devs 00:26:32.517 23:12:04 -- nvmf/common.sh@284 -- # xtrace_disable 00:26:32.517 23:12:04 -- common/autotest_common.sh@10 -- # set +x 00:26:32.517 23:12:04 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:32.517 23:12:04 -- nvmf/common.sh@290 -- # pci_devs=() 00:26:32.517 23:12:04 -- nvmf/common.sh@290 -- # local -a pci_devs 00:26:32.517 23:12:04 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:26:32.517 23:12:04 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:26:32.517 23:12:04 -- nvmf/common.sh@292 -- # pci_drivers=() 00:26:32.517 23:12:04 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:26:32.517 23:12:04 -- nvmf/common.sh@294 -- # net_devs=() 00:26:32.517 23:12:04 -- nvmf/common.sh@294 -- # local -ga net_devs 00:26:32.517 23:12:04 -- nvmf/common.sh@295 -- # e810=() 00:26:32.517 23:12:04 -- nvmf/common.sh@295 -- # local -ga e810 00:26:32.517 23:12:04 -- nvmf/common.sh@296 -- # x722=() 00:26:32.517 23:12:04 -- nvmf/common.sh@296 -- # local -ga x722 00:26:32.517 23:12:04 -- nvmf/common.sh@297 -- # mlx=() 00:26:32.517 23:12:04 -- nvmf/common.sh@297 -- # local -ga mlx 00:26:32.517 23:12:04 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:32.517 23:12:04 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:32.517 23:12:04 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:32.517 23:12:04 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:32.517 23:12:04 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:32.517 23:12:04 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:32.517 23:12:04 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:32.517 23:12:04 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:32.517 23:12:04 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:32.517 23:12:04 -- 
nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:32.517 23:12:04 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:32.517 23:12:04 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:26:32.517 23:12:04 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:26:32.517 23:12:04 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:26:32.517 23:12:04 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:26:32.517 23:12:04 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:26:32.517 23:12:04 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:26:32.517 23:12:04 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:32.517 23:12:04 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:32.517 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:32.517 23:12:04 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:32.517 23:12:04 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:32.517 23:12:04 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:32.517 23:12:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:32.517 23:12:04 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:32.517 23:12:04 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:32.517 23:12:04 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:32.517 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:32.517 23:12:04 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:32.517 23:12:04 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:32.517 23:12:04 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:32.517 23:12:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:32.517 23:12:04 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:32.517 23:12:04 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:26:32.517 23:12:04 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:26:32.517 23:12:04 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:26:32.517 23:12:04 -- nvmf/common.sh@381 -- # for pci in 
"${pci_devs[@]}" 00:26:32.517 23:12:04 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:32.517 23:12:04 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:32.517 23:12:04 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:32.517 23:12:04 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:32.517 Found net devices under 0000:af:00.0: cvl_0_0 00:26:32.517 23:12:04 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:32.517 23:12:04 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:32.517 23:12:04 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:32.517 23:12:04 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:32.517 23:12:04 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:32.517 23:12:04 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:32.517 Found net devices under 0000:af:00.1: cvl_0_1 00:26:32.517 23:12:04 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:32.517 23:12:04 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:26:32.517 23:12:04 -- nvmf/common.sh@402 -- # is_hw=yes 00:26:32.517 23:12:04 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:26:32.517 23:12:04 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:26:32.517 23:12:04 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:26:32.517 23:12:04 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:32.517 23:12:04 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:32.517 23:12:04 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:32.517 23:12:04 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:26:32.517 23:12:04 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:32.517 23:12:04 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:32.517 23:12:04 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:26:32.517 23:12:04 -- nvmf/common.sh@241 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:32.517 23:12:04 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:32.517 23:12:04 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:26:32.517 23:12:04 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:26:32.517 23:12:04 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:26:32.517 23:12:04 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:32.776 23:12:05 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:32.776 23:12:05 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:32.776 23:12:05 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:26:32.776 23:12:05 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:32.776 23:12:05 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:32.776 23:12:05 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:32.776 23:12:05 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:26:32.776 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:32.776 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:26:32.776 00:26:32.776 --- 10.0.0.2 ping statistics --- 00:26:32.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:32.776 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:26:32.776 23:12:05 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:33.034 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:33.034 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.264 ms 00:26:33.034 00:26:33.034 --- 10.0.0.1 ping statistics --- 00:26:33.034 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:33.034 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:26:33.034 23:12:05 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:33.034 23:12:05 -- nvmf/common.sh@410 -- # return 0 00:26:33.034 23:12:05 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:26:33.034 23:12:05 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:33.034 23:12:05 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:33.034 23:12:05 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:33.034 23:12:05 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:33.034 23:12:05 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:33.034 23:12:05 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:33.034 23:12:05 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:26:33.034 23:12:05 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:33.034 23:12:05 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:33.034 23:12:05 -- common/autotest_common.sh@10 -- # set +x 00:26:33.034 23:12:05 -- nvmf/common.sh@469 -- # nvmfpid=3324697 00:26:33.034 23:12:05 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:26:33.034 23:12:05 -- nvmf/common.sh@470 -- # waitforlisten 3324697 00:26:33.034 23:12:05 -- common/autotest_common.sh@819 -- # '[' -z 3324697 ']' 00:26:33.034 23:12:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:33.034 23:12:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:33.034 23:12:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:26:33.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:33.034 23:12:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:33.034 23:12:05 -- common/autotest_common.sh@10 -- # set +x 00:26:33.034 [2024-07-24 23:12:05.311662] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:26:33.034 [2024-07-24 23:12:05.311722] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:33.034 EAL: No free 2048 kB hugepages reported on node 1 00:26:33.034 [2024-07-24 23:12:05.387443] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:33.034 [2024-07-24 23:12:05.427286] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:33.034 [2024-07-24 23:12:05.427395] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:33.034 [2024-07-24 23:12:05.427405] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:33.034 [2024-07-24 23:12:05.427414] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:33.034 [2024-07-24 23:12:05.427453] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:33.034 [2024-07-24 23:12:05.429732] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:33.034 [2024-07-24 23:12:05.429829] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:33.034 [2024-07-24 23:12:05.429831] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:26:33.968 23:12:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:33.968 23:12:06 -- common/autotest_common.sh@852 -- # return 0 00:26:33.968 23:12:06 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:33.968 23:12:06 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:33.968 23:12:06 -- common/autotest_common.sh@10 -- # set +x 00:26:33.968 23:12:06 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:33.968 23:12:06 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:33.968 23:12:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:33.968 23:12:06 -- common/autotest_common.sh@10 -- # set +x 00:26:33.968 [2024-07-24 23:12:06.162058] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:33.968 23:12:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:33.968 23:12:06 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:26:33.968 23:12:06 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:26:33.968 23:12:06 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:33.968 23:12:06 -- common/autotest_common.sh@10 -- # set +x 00:26:33.968 23:12:06 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:33.968 23:12:06 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:33.968 23:12:06 -- target/shutdown.sh@28 -- # cat 00:26:33.968 23:12:06 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 
00:26:33.968 23:12:06 -- target/shutdown.sh@28 -- # cat 00:26:33.968 23:12:06 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:33.968 23:12:06 -- target/shutdown.sh@28 -- # cat 00:26:33.968 23:12:06 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:33.968 23:12:06 -- target/shutdown.sh@28 -- # cat 00:26:33.968 23:12:06 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:33.968 23:12:06 -- target/shutdown.sh@28 -- # cat 00:26:33.968 23:12:06 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:33.968 23:12:06 -- target/shutdown.sh@28 -- # cat 00:26:33.968 23:12:06 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:33.968 23:12:06 -- target/shutdown.sh@28 -- # cat 00:26:33.968 23:12:06 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:33.968 23:12:06 -- target/shutdown.sh@28 -- # cat 00:26:33.968 23:12:06 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:33.968 23:12:06 -- target/shutdown.sh@28 -- # cat 00:26:33.968 23:12:06 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:33.968 23:12:06 -- target/shutdown.sh@28 -- # cat 00:26:33.968 23:12:06 -- target/shutdown.sh@35 -- # rpc_cmd 00:26:33.968 23:12:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:33.968 23:12:06 -- common/autotest_common.sh@10 -- # set +x 00:26:33.968 Malloc1 00:26:33.968 [2024-07-24 23:12:06.276940] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:33.968 Malloc2 00:26:33.968 Malloc3 00:26:33.968 Malloc4 00:26:34.227 Malloc5 00:26:34.227 Malloc6 00:26:34.227 Malloc7 00:26:34.227 Malloc8 00:26:34.227 Malloc9 00:26:34.227 Malloc10 00:26:34.487 23:12:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:34.487 23:12:06 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:26:34.487 23:12:06 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:34.487 23:12:06 -- 
common/autotest_common.sh@10 -- # set +x 00:26:34.487 23:12:06 -- target/shutdown.sh@102 -- # perfpid=3325227 00:26:34.487 23:12:06 -- target/shutdown.sh@103 -- # waitforlisten 3325227 /var/tmp/bdevperf.sock 00:26:34.487 23:12:06 -- common/autotest_common.sh@819 -- # '[' -z 3325227 ']' 00:26:34.487 23:12:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:34.487 23:12:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:34.487 23:12:06 -- target/shutdown.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:26:34.487 23:12:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:34.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:34.487 23:12:06 -- target/shutdown.sh@101 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:34.487 23:12:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:34.487 23:12:06 -- common/autotest_common.sh@10 -- # set +x 00:26:34.487 23:12:06 -- nvmf/common.sh@520 -- # config=() 00:26:34.487 23:12:06 -- nvmf/common.sh@520 -- # local subsystem config 00:26:34.487 23:12:06 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:34.487 23:12:06 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:34.487 { 00:26:34.487 "params": { 00:26:34.487 "name": "Nvme$subsystem", 00:26:34.487 "trtype": "$TEST_TRANSPORT", 00:26:34.487 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:34.487 "adrfam": "ipv4", 00:26:34.487 "trsvcid": "$NVMF_PORT", 00:26:34.487 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:34.487 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:34.487 "hdgst": ${hdgst:-false}, 00:26:34.487 "ddgst": ${ddgst:-false} 00:26:34.487 }, 00:26:34.487 "method": "bdev_nvme_attach_controller" 00:26:34.487 } 00:26:34.487 EOF 00:26:34.487 
)") 00:26:34.487 23:12:06 -- nvmf/common.sh@542 -- # cat 00:26:34.487 23:12:06 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:34.487 23:12:06 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:34.487 { 00:26:34.487 "params": { 00:26:34.487 "name": "Nvme$subsystem", 00:26:34.487 "trtype": "$TEST_TRANSPORT", 00:26:34.487 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:34.487 "adrfam": "ipv4", 00:26:34.487 "trsvcid": "$NVMF_PORT", 00:26:34.487 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:34.487 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:34.487 "hdgst": ${hdgst:-false}, 00:26:34.487 "ddgst": ${ddgst:-false} 00:26:34.487 }, 00:26:34.487 "method": "bdev_nvme_attach_controller" 00:26:34.487 } 00:26:34.487 EOF 00:26:34.487 )") 00:26:34.487 23:12:06 -- nvmf/common.sh@542 -- # cat 00:26:34.487 23:12:06 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:34.488 23:12:06 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:34.488 { 00:26:34.488 "params": { 00:26:34.488 "name": "Nvme$subsystem", 00:26:34.488 "trtype": "$TEST_TRANSPORT", 00:26:34.488 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:34.488 "adrfam": "ipv4", 00:26:34.488 "trsvcid": "$NVMF_PORT", 00:26:34.488 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:34.488 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:34.488 "hdgst": ${hdgst:-false}, 00:26:34.488 "ddgst": ${ddgst:-false} 00:26:34.488 }, 00:26:34.488 "method": "bdev_nvme_attach_controller" 00:26:34.488 } 00:26:34.488 EOF 00:26:34.488 )") 00:26:34.488 23:12:06 -- nvmf/common.sh@542 -- # cat 00:26:34.488 23:12:06 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:34.488 23:12:06 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:34.488 { 00:26:34.488 "params": { 00:26:34.488 "name": "Nvme$subsystem", 00:26:34.488 "trtype": "$TEST_TRANSPORT", 00:26:34.488 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:34.488 "adrfam": "ipv4", 00:26:34.488 "trsvcid": "$NVMF_PORT", 00:26:34.488 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:26:34.488 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:34.488 "hdgst": ${hdgst:-false}, 00:26:34.488 "ddgst": ${ddgst:-false} 00:26:34.488 }, 00:26:34.488 "method": "bdev_nvme_attach_controller" 00:26:34.488 } 00:26:34.488 EOF 00:26:34.488 )") 00:26:34.488 23:12:06 -- nvmf/common.sh@542 -- # cat 00:26:34.488 23:12:06 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:34.488 23:12:06 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:34.488 { 00:26:34.488 "params": { 00:26:34.488 "name": "Nvme$subsystem", 00:26:34.488 "trtype": "$TEST_TRANSPORT", 00:26:34.488 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:34.488 "adrfam": "ipv4", 00:26:34.488 "trsvcid": "$NVMF_PORT", 00:26:34.488 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:34.488 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:34.488 "hdgst": ${hdgst:-false}, 00:26:34.488 "ddgst": ${ddgst:-false} 00:26:34.488 }, 00:26:34.488 "method": "bdev_nvme_attach_controller" 00:26:34.488 } 00:26:34.488 EOF 00:26:34.488 )") 00:26:34.488 23:12:06 -- nvmf/common.sh@542 -- # cat 00:26:34.488 23:12:06 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:34.488 23:12:06 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:34.488 { 00:26:34.488 "params": { 00:26:34.488 "name": "Nvme$subsystem", 00:26:34.488 "trtype": "$TEST_TRANSPORT", 00:26:34.488 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:34.488 "adrfam": "ipv4", 00:26:34.488 "trsvcid": "$NVMF_PORT", 00:26:34.488 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:34.488 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:34.488 "hdgst": ${hdgst:-false}, 00:26:34.488 "ddgst": ${ddgst:-false} 00:26:34.488 }, 00:26:34.488 "method": "bdev_nvme_attach_controller" 00:26:34.488 } 00:26:34.488 EOF 00:26:34.488 )") 00:26:34.488 [2024-07-24 23:12:06.755533] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:26:34.488 [2024-07-24 23:12:06.755586] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3325227 ] 00:26:34.488 23:12:06 -- nvmf/common.sh@542 -- # cat 00:26:34.488 23:12:06 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:34.488 23:12:06 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:34.488 { 00:26:34.488 "params": { 00:26:34.488 "name": "Nvme$subsystem", 00:26:34.488 "trtype": "$TEST_TRANSPORT", 00:26:34.488 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:34.488 "adrfam": "ipv4", 00:26:34.488 "trsvcid": "$NVMF_PORT", 00:26:34.488 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:34.488 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:34.488 "hdgst": ${hdgst:-false}, 00:26:34.488 "ddgst": ${ddgst:-false} 00:26:34.488 }, 00:26:34.488 "method": "bdev_nvme_attach_controller" 00:26:34.488 } 00:26:34.488 EOF 00:26:34.488 )") 00:26:34.488 23:12:06 -- nvmf/common.sh@542 -- # cat 00:26:34.488 23:12:06 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:34.488 23:12:06 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:34.488 { 00:26:34.488 "params": { 00:26:34.488 "name": "Nvme$subsystem", 00:26:34.488 "trtype": "$TEST_TRANSPORT", 00:26:34.488 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:34.488 "adrfam": "ipv4", 00:26:34.488 "trsvcid": "$NVMF_PORT", 00:26:34.488 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:34.488 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:34.488 "hdgst": ${hdgst:-false}, 00:26:34.488 "ddgst": ${ddgst:-false} 00:26:34.488 }, 00:26:34.488 "method": "bdev_nvme_attach_controller" 00:26:34.488 } 00:26:34.488 EOF 00:26:34.488 )") 00:26:34.488 23:12:06 -- nvmf/common.sh@542 -- # cat 00:26:34.488 23:12:06 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:34.488 23:12:06 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 
00:26:34.488 { 00:26:34.488 "params": { 00:26:34.488 "name": "Nvme$subsystem", 00:26:34.488 "trtype": "$TEST_TRANSPORT", 00:26:34.488 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:34.488 "adrfam": "ipv4", 00:26:34.488 "trsvcid": "$NVMF_PORT", 00:26:34.488 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:34.488 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:34.488 "hdgst": ${hdgst:-false}, 00:26:34.488 "ddgst": ${ddgst:-false} 00:26:34.488 }, 00:26:34.488 "method": "bdev_nvme_attach_controller" 00:26:34.488 } 00:26:34.488 EOF 00:26:34.488 )") 00:26:34.488 23:12:06 -- nvmf/common.sh@542 -- # cat 00:26:34.488 23:12:06 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:34.488 23:12:06 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:34.488 { 00:26:34.488 "params": { 00:26:34.488 "name": "Nvme$subsystem", 00:26:34.488 "trtype": "$TEST_TRANSPORT", 00:26:34.488 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:34.488 "adrfam": "ipv4", 00:26:34.488 "trsvcid": "$NVMF_PORT", 00:26:34.488 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:34.488 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:34.488 "hdgst": ${hdgst:-false}, 00:26:34.488 "ddgst": ${ddgst:-false} 00:26:34.488 }, 00:26:34.488 "method": "bdev_nvme_attach_controller" 00:26:34.488 } 00:26:34.488 EOF 00:26:34.488 )") 00:26:34.488 23:12:06 -- nvmf/common.sh@542 -- # cat 00:26:34.488 EAL: No free 2048 kB hugepages reported on node 1 00:26:34.488 23:12:06 -- nvmf/common.sh@544 -- # jq . 
00:26:34.488 23:12:06 -- nvmf/common.sh@545 -- # IFS=, 00:26:34.488 23:12:06 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:34.488 "params": { 00:26:34.488 "name": "Nvme1", 00:26:34.488 "trtype": "tcp", 00:26:34.488 "traddr": "10.0.0.2", 00:26:34.488 "adrfam": "ipv4", 00:26:34.488 "trsvcid": "4420", 00:26:34.488 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:34.488 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:34.488 "hdgst": false, 00:26:34.488 "ddgst": false 00:26:34.488 }, 00:26:34.488 "method": "bdev_nvme_attach_controller" 00:26:34.488 },{ 00:26:34.488 "params": { 00:26:34.488 "name": "Nvme2", 00:26:34.488 "trtype": "tcp", 00:26:34.488 "traddr": "10.0.0.2", 00:26:34.488 "adrfam": "ipv4", 00:26:34.488 "trsvcid": "4420", 00:26:34.488 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:34.488 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:34.488 "hdgst": false, 00:26:34.488 "ddgst": false 00:26:34.488 }, 00:26:34.488 "method": "bdev_nvme_attach_controller" 00:26:34.488 },{ 00:26:34.488 "params": { 00:26:34.488 "name": "Nvme3", 00:26:34.488 "trtype": "tcp", 00:26:34.488 "traddr": "10.0.0.2", 00:26:34.488 "adrfam": "ipv4", 00:26:34.488 "trsvcid": "4420", 00:26:34.488 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:34.488 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:34.488 "hdgst": false, 00:26:34.488 "ddgst": false 00:26:34.488 }, 00:26:34.488 "method": "bdev_nvme_attach_controller" 00:26:34.488 },{ 00:26:34.488 "params": { 00:26:34.488 "name": "Nvme4", 00:26:34.488 "trtype": "tcp", 00:26:34.488 "traddr": "10.0.0.2", 00:26:34.488 "adrfam": "ipv4", 00:26:34.488 "trsvcid": "4420", 00:26:34.488 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:34.488 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:34.488 "hdgst": false, 00:26:34.488 "ddgst": false 00:26:34.488 }, 00:26:34.488 "method": "bdev_nvme_attach_controller" 00:26:34.488 },{ 00:26:34.488 "params": { 00:26:34.488 "name": "Nvme5", 00:26:34.488 "trtype": "tcp", 00:26:34.488 "traddr": "10.0.0.2", 00:26:34.488 "adrfam": "ipv4", 
00:26:34.488 "trsvcid": "4420", 00:26:34.489 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:34.489 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:34.489 "hdgst": false, 00:26:34.489 "ddgst": false 00:26:34.489 }, 00:26:34.489 "method": "bdev_nvme_attach_controller" 00:26:34.489 },{ 00:26:34.489 "params": { 00:26:34.489 "name": "Nvme6", 00:26:34.489 "trtype": "tcp", 00:26:34.489 "traddr": "10.0.0.2", 00:26:34.489 "adrfam": "ipv4", 00:26:34.489 "trsvcid": "4420", 00:26:34.489 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:34.489 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:34.489 "hdgst": false, 00:26:34.489 "ddgst": false 00:26:34.489 }, 00:26:34.489 "method": "bdev_nvme_attach_controller" 00:26:34.489 },{ 00:26:34.489 "params": { 00:26:34.489 "name": "Nvme7", 00:26:34.489 "trtype": "tcp", 00:26:34.489 "traddr": "10.0.0.2", 00:26:34.489 "adrfam": "ipv4", 00:26:34.489 "trsvcid": "4420", 00:26:34.489 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:34.489 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:34.489 "hdgst": false, 00:26:34.489 "ddgst": false 00:26:34.489 }, 00:26:34.489 "method": "bdev_nvme_attach_controller" 00:26:34.489 },{ 00:26:34.489 "params": { 00:26:34.489 "name": "Nvme8", 00:26:34.489 "trtype": "tcp", 00:26:34.489 "traddr": "10.0.0.2", 00:26:34.489 "adrfam": "ipv4", 00:26:34.489 "trsvcid": "4420", 00:26:34.489 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:26:34.489 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:26:34.489 "hdgst": false, 00:26:34.489 "ddgst": false 00:26:34.489 }, 00:26:34.489 "method": "bdev_nvme_attach_controller" 00:26:34.489 },{ 00:26:34.489 "params": { 00:26:34.489 "name": "Nvme9", 00:26:34.489 "trtype": "tcp", 00:26:34.489 "traddr": "10.0.0.2", 00:26:34.489 "adrfam": "ipv4", 00:26:34.489 "trsvcid": "4420", 00:26:34.489 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:34.489 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:26:34.489 "hdgst": false, 00:26:34.489 "ddgst": false 00:26:34.489 }, 00:26:34.489 "method": "bdev_nvme_attach_controller" 
00:26:34.489 },{ 00:26:34.489 "params": { 00:26:34.489 "name": "Nvme10", 00:26:34.489 "trtype": "tcp", 00:26:34.489 "traddr": "10.0.0.2", 00:26:34.489 "adrfam": "ipv4", 00:26:34.489 "trsvcid": "4420", 00:26:34.489 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:34.489 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:34.489 "hdgst": false, 00:26:34.489 "ddgst": false 00:26:34.489 }, 00:26:34.489 "method": "bdev_nvme_attach_controller" 00:26:34.489 }' 00:26:34.489 [2024-07-24 23:12:06.827662] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:34.489 [2024-07-24 23:12:06.863164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:35.867 Running I/O for 10 seconds... 00:26:35.867 23:12:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:35.867 23:12:08 -- common/autotest_common.sh@852 -- # return 0 00:26:35.867 23:12:08 -- target/shutdown.sh@104 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:26:35.867 23:12:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:35.867 23:12:08 -- common/autotest_common.sh@10 -- # set +x 00:26:35.867 23:12:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:35.867 23:12:08 -- target/shutdown.sh@106 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:26:35.867 23:12:08 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:26:35.867 23:12:08 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:26:35.867 23:12:08 -- target/shutdown.sh@57 -- # local ret=1 00:26:35.867 23:12:08 -- target/shutdown.sh@58 -- # local i 00:26:35.867 23:12:08 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:26:35.867 23:12:08 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:26:35.867 23:12:08 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:35.867 23:12:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:35.867 23:12:08 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:26:35.867 23:12:08 -- common/autotest_common.sh@10 -- 
# set +x 00:26:35.867 23:12:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:35.867 23:12:08 -- target/shutdown.sh@60 -- # read_io_count=42 00:26:36.127 23:12:08 -- target/shutdown.sh@63 -- # '[' 42 -ge 100 ']' 00:26:36.127 23:12:08 -- target/shutdown.sh@67 -- # sleep 0.25 00:26:36.127 23:12:08 -- target/shutdown.sh@59 -- # (( i-- )) 00:26:36.127 23:12:08 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:26:36.127 23:12:08 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:36.127 23:12:08 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:26:36.127 23:12:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:36.386 23:12:08 -- common/autotest_common.sh@10 -- # set +x 00:26:36.386 23:12:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:36.386 23:12:08 -- target/shutdown.sh@60 -- # read_io_count=168 00:26:36.386 23:12:08 -- target/shutdown.sh@63 -- # '[' 168 -ge 100 ']' 00:26:36.386 23:12:08 -- target/shutdown.sh@64 -- # ret=0 00:26:36.386 23:12:08 -- target/shutdown.sh@65 -- # break 00:26:36.386 23:12:08 -- target/shutdown.sh@69 -- # return 0 00:26:36.386 23:12:08 -- target/shutdown.sh@109 -- # killprocess 3325227 00:26:36.386 23:12:08 -- common/autotest_common.sh@926 -- # '[' -z 3325227 ']' 00:26:36.386 23:12:08 -- common/autotest_common.sh@930 -- # kill -0 3325227 00:26:36.386 23:12:08 -- common/autotest_common.sh@931 -- # uname 00:26:36.386 23:12:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:36.386 23:12:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3325227 00:26:36.386 23:12:08 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:36.386 23:12:08 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:36.386 23:12:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3325227' 00:26:36.386 killing process with pid 3325227 00:26:36.386 23:12:08 -- common/autotest_common.sh@945 -- # kill 3325227 
00:26:36.386 23:12:08 -- common/autotest_common.sh@950 -- # wait 3325227 00:26:36.386 Received shutdown signal, test time was about 0.598238 seconds 00:26:36.386 00:26:36.386 Latency(us) 00:26:36.386 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:36.386 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:36.386 Verification LBA range: start 0x0 length 0x400 00:26:36.386 Nvme1n1 : 0.56 493.86 30.87 0.00 0.00 125693.09 8074.04 106535.32 00:26:36.386 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:36.386 Verification LBA range: start 0x0 length 0x400 00:26:36.386 Nvme2n1 : 0.56 483.29 30.21 0.00 0.00 127176.27 18035.51 116601.65 00:26:36.386 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:36.386 Verification LBA range: start 0x0 length 0x400 00:26:36.386 Nvme3n1 : 0.58 546.27 34.14 0.00 0.00 111804.17 15414.07 98985.57 00:26:36.386 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:36.386 Verification LBA range: start 0x0 length 0x400 00:26:36.386 Nvme4n1 : 0.60 455.15 28.45 0.00 0.00 124121.91 18874.37 98146.71 00:26:36.386 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:36.386 Verification LBA range: start 0x0 length 0x400 00:26:36.386 Nvme5n1 : 0.58 544.27 34.02 0.00 0.00 109923.72 13841.20 111568.49 00:26:36.386 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:36.386 Verification LBA range: start 0x0 length 0x400 00:26:36.386 Nvme6n1 : 0.58 547.59 34.22 0.00 0.00 107608.20 16567.50 96049.56 00:26:36.386 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:36.386 Verification LBA range: start 0x0 length 0x400 00:26:36.386 Nvme7n1 : 0.57 549.06 34.32 0.00 0.00 105785.67 18454.94 88080.38 00:26:36.387 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:36.387 Verification LBA range: start 0x0 length 
0x400 00:26:36.387 Nvme8n1 : 0.57 478.87 29.93 0.00 0.00 119524.75 19188.94 100663.30 00:26:36.387 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:36.387 Verification LBA range: start 0x0 length 0x400 00:26:36.387 Nvme9n1 : 0.57 482.71 30.17 0.00 0.00 118247.93 12215.91 114085.07 00:26:36.387 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:36.387 Verification LBA range: start 0x0 length 0x400 00:26:36.387 Nvme10n1 : 0.57 485.76 30.36 0.00 0.00 115028.22 10013.90 97307.85 00:26:36.387 =================================================================================================================== 00:26:36.387 Total : 5066.84 316.68 0.00 0.00 116044.79 8074.04 116601.65 00:26:36.646 23:12:08 -- target/shutdown.sh@112 -- # sleep 1 00:26:37.582 23:12:09 -- target/shutdown.sh@113 -- # kill -0 3324697 00:26:37.582 23:12:09 -- target/shutdown.sh@115 -- # stoptarget 00:26:37.582 23:12:09 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:26:37.582 23:12:09 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:26:37.582 23:12:09 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:37.582 23:12:09 -- target/shutdown.sh@45 -- # nvmftestfini 00:26:37.582 23:12:09 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:37.582 23:12:09 -- nvmf/common.sh@116 -- # sync 00:26:37.582 23:12:09 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:37.582 23:12:09 -- nvmf/common.sh@119 -- # set +e 00:26:37.582 23:12:09 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:37.582 23:12:09 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:37.582 rmmod nvme_tcp 00:26:37.582 rmmod nvme_fabrics 00:26:37.582 rmmod nvme_keyring 00:26:37.582 23:12:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:37.582 23:12:09 -- nvmf/common.sh@123 -- # set -e 00:26:37.582 23:12:09 -- 
nvmf/common.sh@124 -- # return 0 00:26:37.583 23:12:09 -- nvmf/common.sh@477 -- # '[' -n 3324697 ']' 00:26:37.583 23:12:09 -- nvmf/common.sh@478 -- # killprocess 3324697 00:26:37.583 23:12:09 -- common/autotest_common.sh@926 -- # '[' -z 3324697 ']' 00:26:37.583 23:12:09 -- common/autotest_common.sh@930 -- # kill -0 3324697 00:26:37.583 23:12:09 -- common/autotest_common.sh@931 -- # uname 00:26:37.583 23:12:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:37.583 23:12:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3324697 00:26:37.841 23:12:10 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:26:37.841 23:12:10 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:26:37.841 23:12:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3324697' 00:26:37.841 killing process with pid 3324697 00:26:37.841 23:12:10 -- common/autotest_common.sh@945 -- # kill 3324697 00:26:37.841 23:12:10 -- common/autotest_common.sh@950 -- # wait 3324697 00:26:38.100 23:12:10 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:26:38.100 23:12:10 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:38.100 23:12:10 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:38.100 23:12:10 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:38.100 23:12:10 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:38.100 23:12:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:38.100 23:12:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:38.100 23:12:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:40.636 23:12:12 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:26:40.636 00:26:40.636 real 0m7.617s 00:26:40.636 user 0m22.067s 00:26:40.636 sys 0m1.531s 00:26:40.636 23:12:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:40.636 23:12:12 -- common/autotest_common.sh@10 -- # set +x 00:26:40.636 
************************************ 00:26:40.636 END TEST nvmf_shutdown_tc2 00:26:40.636 ************************************ 00:26:40.636 23:12:12 -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:26:40.636 23:12:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:40.636 23:12:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:40.636 23:12:12 -- common/autotest_common.sh@10 -- # set +x 00:26:40.636 ************************************ 00:26:40.636 START TEST nvmf_shutdown_tc3 00:26:40.636 ************************************ 00:26:40.636 23:12:12 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc3 00:26:40.636 23:12:12 -- target/shutdown.sh@120 -- # starttarget 00:26:40.636 23:12:12 -- target/shutdown.sh@15 -- # nvmftestinit 00:26:40.636 23:12:12 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:40.636 23:12:12 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:40.636 23:12:12 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:40.636 23:12:12 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:40.636 23:12:12 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:40.636 23:12:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:40.636 23:12:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:40.636 23:12:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:40.636 23:12:12 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:26:40.636 23:12:12 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:26:40.636 23:12:12 -- nvmf/common.sh@284 -- # xtrace_disable 00:26:40.636 23:12:12 -- common/autotest_common.sh@10 -- # set +x 00:26:40.636 23:12:12 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:40.636 23:12:12 -- nvmf/common.sh@290 -- # pci_devs=() 00:26:40.636 23:12:12 -- nvmf/common.sh@290 -- # local -a pci_devs 00:26:40.636 23:12:12 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:26:40.636 23:12:12 -- 
nvmf/common.sh@291 -- # local -a pci_net_devs 00:26:40.636 23:12:12 -- nvmf/common.sh@292 -- # pci_drivers=() 00:26:40.636 23:12:12 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:26:40.636 23:12:12 -- nvmf/common.sh@294 -- # net_devs=() 00:26:40.636 23:12:12 -- nvmf/common.sh@294 -- # local -ga net_devs 00:26:40.636 23:12:12 -- nvmf/common.sh@295 -- # e810=() 00:26:40.636 23:12:12 -- nvmf/common.sh@295 -- # local -ga e810 00:26:40.636 23:12:12 -- nvmf/common.sh@296 -- # x722=() 00:26:40.636 23:12:12 -- nvmf/common.sh@296 -- # local -ga x722 00:26:40.636 23:12:12 -- nvmf/common.sh@297 -- # mlx=() 00:26:40.636 23:12:12 -- nvmf/common.sh@297 -- # local -ga mlx 00:26:40.636 23:12:12 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:40.636 23:12:12 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:40.637 23:12:12 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:40.637 23:12:12 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:40.637 23:12:12 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:40.637 23:12:12 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:40.637 23:12:12 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:40.637 23:12:12 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:40.637 23:12:12 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:40.637 23:12:12 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:40.637 23:12:12 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:40.637 23:12:12 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:26:40.637 23:12:12 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:26:40.637 23:12:12 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:26:40.637 23:12:12 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:26:40.637 23:12:12 -- 
nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:26:40.637 23:12:12 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:26:40.637 23:12:12 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:40.637 23:12:12 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:40.637 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:40.637 23:12:12 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:40.637 23:12:12 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:40.637 23:12:12 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:40.637 23:12:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:40.637 23:12:12 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:40.637 23:12:12 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:40.637 23:12:12 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:40.637 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:40.637 23:12:12 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:40.637 23:12:12 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:40.637 23:12:12 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:40.637 23:12:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:40.637 23:12:12 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:40.637 23:12:12 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:26:40.637 23:12:12 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:26:40.637 23:12:12 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:26:40.637 23:12:12 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:40.637 23:12:12 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:40.637 23:12:12 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:40.637 23:12:12 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:40.637 23:12:12 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:40.637 Found net devices under 0000:af:00.0: cvl_0_0 00:26:40.637 23:12:12 -- 
nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:40.637 23:12:12 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:40.637 23:12:12 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:40.637 23:12:12 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:40.637 23:12:12 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:40.637 23:12:12 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:40.637 Found net devices under 0000:af:00.1: cvl_0_1 00:26:40.637 23:12:12 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:40.637 23:12:12 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:26:40.637 23:12:12 -- nvmf/common.sh@402 -- # is_hw=yes 00:26:40.637 23:12:12 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:26:40.637 23:12:12 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:26:40.637 23:12:12 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:26:40.637 23:12:12 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:40.637 23:12:12 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:40.637 23:12:12 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:40.637 23:12:12 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:26:40.637 23:12:12 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:40.637 23:12:12 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:40.637 23:12:12 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:26:40.637 23:12:12 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:40.637 23:12:12 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:40.637 23:12:12 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:26:40.637 23:12:12 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:26:40.637 23:12:12 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:26:40.637 23:12:12 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:26:40.637 23:12:12 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:40.637 23:12:12 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:40.637 23:12:12 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:26:40.637 23:12:12 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:40.637 23:12:12 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:40.637 23:12:12 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:40.637 23:12:12 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:26:40.637 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:40.637 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.315 ms 00:26:40.637 00:26:40.637 --- 10.0.0.2 ping statistics --- 00:26:40.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:40.637 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:26:40.637 23:12:12 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:40.637 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:40.637 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.253 ms 00:26:40.637 00:26:40.637 --- 10.0.0.1 ping statistics --- 00:26:40.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:40.637 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:26:40.637 23:12:12 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:40.637 23:12:12 -- nvmf/common.sh@410 -- # return 0 00:26:40.637 23:12:12 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:26:40.637 23:12:12 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:40.637 23:12:12 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:40.637 23:12:12 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:40.637 23:12:12 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:40.637 23:12:12 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:40.637 23:12:12 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:40.637 23:12:12 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:26:40.637 23:12:12 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:40.637 23:12:12 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:40.637 23:12:12 -- common/autotest_common.sh@10 -- # set +x 00:26:40.637 23:12:12 -- nvmf/common.sh@469 -- # nvmfpid=3326419 00:26:40.637 23:12:12 -- nvmf/common.sh@470 -- # waitforlisten 3326419 00:26:40.637 23:12:12 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:26:40.637 23:12:12 -- common/autotest_common.sh@819 -- # '[' -z 3326419 ']' 00:26:40.637 23:12:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:40.637 23:12:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:40.637 23:12:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:26:40.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:40.637 23:12:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:40.637 23:12:12 -- common/autotest_common.sh@10 -- # set +x 00:26:40.637 [2024-07-24 23:12:12.991739] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:26:40.637 [2024-07-24 23:12:12.991791] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:40.637 EAL: No free 2048 kB hugepages reported on node 1 00:26:40.895 [2024-07-24 23:12:13.068391] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:40.895 [2024-07-24 23:12:13.107337] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:40.895 [2024-07-24 23:12:13.107446] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:40.895 [2024-07-24 23:12:13.107456] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:40.895 [2024-07-24 23:12:13.107464] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
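The nvmf_tcp_init sequence traced above moves one E810 port (cvl_0_0, 10.0.0.2) into a fresh network namespace so the other port (cvl_0_1, 10.0.0.1) can reach it as a real TCP initiator on the same host, then verifies reachability with two pings. A standalone sketch of that plumbing follows; it only *prints* the commands (the real ones need root and the cvl_* interfaces of this CI box), and the interface names and addresses are taken from the log, not guaranteed anywhere else.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the nvmf_tcp_init namespace setup seen in the trace.
# run_cmd only echoes each step; swap it for "$@" (as root) to configure
# for real. Interface names/addresses are assumptions lifted from the log.
set -euo pipefail

NS=cvl_0_0_ns_spdk   # namespace that owns the target-side port
TGT_IF=cvl_0_0       # target interface, gets 10.0.0.2/24 inside $NS
INI_IF=cvl_0_1       # initiator interface, gets 10.0.0.1/24 in the host

run_cmd() { echo "+ $*"; }

run_cmd ip -4 addr flush "$TGT_IF"
run_cmd ip -4 addr flush "$INI_IF"
run_cmd ip netns add "$NS"
run_cmd ip link set "$TGT_IF" netns "$NS"
run_cmd ip addr add 10.0.0.1/24 dev "$INI_IF"
run_cmd ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run_cmd ip link set "$INI_IF" up
run_cmd ip netns exec "$NS" ip link set "$TGT_IF" up
run_cmd ip netns exec "$NS" ip link set lo up
run_cmd iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
run_cmd ping -c 1 10.0.0.2                      # host -> namespaced target
run_cmd ip netns exec "$NS" ping -c 1 10.0.0.1  # namespace -> host
```

Everything after this point in the trace (nvmf_tgt, bdevperf) runs against these two addresses, which is why the target app is launched under `ip netns exec cvl_0_0_ns_spdk`.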
00:26:40.895 [2024-07-24 23:12:13.107563] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:40.896 [2024-07-24 23:12:13.107646] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:40.896 [2024-07-24 23:12:13.107754] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:40.896 [2024-07-24 23:12:13.107756] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:26:41.463 23:12:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:41.463 23:12:13 -- common/autotest_common.sh@852 -- # return 0 00:26:41.463 23:12:13 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:41.463 23:12:13 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:41.463 23:12:13 -- common/autotest_common.sh@10 -- # set +x 00:26:41.463 23:12:13 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:41.463 23:12:13 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:41.463 23:12:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:41.463 23:12:13 -- common/autotest_common.sh@10 -- # set +x 00:26:41.463 [2024-07-24 23:12:13.839039] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:41.463 23:12:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:41.463 23:12:13 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:26:41.463 23:12:13 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:26:41.463 23:12:13 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:41.463 23:12:13 -- common/autotest_common.sh@10 -- # set +x 00:26:41.463 23:12:13 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:41.463 23:12:13 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:41.463 23:12:13 -- target/shutdown.sh@28 -- # cat 00:26:41.463 23:12:13 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 
00:26:41.463 23:12:13 -- target/shutdown.sh@28 -- # cat 00:26:41.463 23:12:13 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:41.463 23:12:13 -- target/shutdown.sh@28 -- # cat 00:26:41.463 23:12:13 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:41.463 23:12:13 -- target/shutdown.sh@28 -- # cat 00:26:41.463 23:12:13 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:41.463 23:12:13 -- target/shutdown.sh@28 -- # cat 00:26:41.463 23:12:13 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:41.463 23:12:13 -- target/shutdown.sh@28 -- # cat 00:26:41.463 23:12:13 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:41.463 23:12:13 -- target/shutdown.sh@28 -- # cat 00:26:41.722 23:12:13 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:41.722 23:12:13 -- target/shutdown.sh@28 -- # cat 00:26:41.722 23:12:13 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:41.722 23:12:13 -- target/shutdown.sh@28 -- # cat 00:26:41.722 23:12:13 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:41.722 23:12:13 -- target/shutdown.sh@28 -- # cat 00:26:41.722 23:12:13 -- target/shutdown.sh@35 -- # rpc_cmd 00:26:41.722 23:12:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:41.722 23:12:13 -- common/autotest_common.sh@10 -- # set +x 00:26:41.722 Malloc1 00:26:41.722 [2024-07-24 23:12:13.949630] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:41.722 Malloc2 00:26:41.722 Malloc3 00:26:41.722 Malloc4 00:26:41.722 Malloc5 00:26:41.722 Malloc6 00:26:41.981 Malloc7 00:26:41.981 Malloc8 00:26:41.981 Malloc9 00:26:41.981 Malloc10 00:26:41.981 23:12:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:41.981 23:12:14 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:26:41.981 23:12:14 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:41.981 23:12:14 -- 
common/autotest_common.sh@10 -- # set +x 00:26:41.981 23:12:14 -- target/shutdown.sh@124 -- # perfpid=3326731 00:26:41.981 23:12:14 -- target/shutdown.sh@125 -- # waitforlisten 3326731 /var/tmp/bdevperf.sock 00:26:41.981 23:12:14 -- common/autotest_common.sh@819 -- # '[' -z 3326731 ']' 00:26:41.981 23:12:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:41.981 23:12:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:41.981 23:12:14 -- target/shutdown.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:26:41.981 23:12:14 -- target/shutdown.sh@123 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:41.981 23:12:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:41.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:26:41.981 23:12:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:41.981 23:12:14 -- nvmf/common.sh@520 -- # config=() 00:26:41.981 23:12:14 -- common/autotest_common.sh@10 -- # set +x 00:26:41.981 23:12:14 -- nvmf/common.sh@520 -- # local subsystem config 00:26:41.981 23:12:14 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:41.981 23:12:14 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:41.981 { 00:26:41.981 "params": { 00:26:41.981 "name": "Nvme$subsystem", 00:26:41.981 "trtype": "$TEST_TRANSPORT", 00:26:41.981 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:41.981 "adrfam": "ipv4", 00:26:41.981 "trsvcid": "$NVMF_PORT", 00:26:41.981 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:41.981 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:41.981 "hdgst": ${hdgst:-false}, 00:26:41.981 "ddgst": ${ddgst:-false} 00:26:41.981 }, 00:26:41.981 "method": "bdev_nvme_attach_controller" 00:26:41.981 } 00:26:41.981 EOF 00:26:41.981 )") 00:26:41.981 23:12:14 -- nvmf/common.sh@542 -- # cat 00:26:41.981 23:12:14 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:41.981 23:12:14 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:41.981 { 00:26:41.981 "params": { 00:26:41.981 "name": "Nvme$subsystem", 00:26:41.981 "trtype": "$TEST_TRANSPORT", 00:26:41.981 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:41.981 "adrfam": "ipv4", 00:26:41.981 "trsvcid": "$NVMF_PORT", 00:26:41.981 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:41.981 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:41.981 "hdgst": ${hdgst:-false}, 00:26:41.981 "ddgst": ${ddgst:-false} 00:26:41.981 }, 00:26:41.981 "method": "bdev_nvme_attach_controller" 00:26:41.981 } 00:26:41.981 EOF 00:26:41.981 )") 00:26:41.981 23:12:14 -- nvmf/common.sh@542 -- # cat 00:26:41.981 23:12:14 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:41.981 23:12:14 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:41.981 { 00:26:41.981 "params": { 00:26:41.981 "name": 
"Nvme$subsystem", 00:26:41.981 "trtype": "$TEST_TRANSPORT", 00:26:41.981 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:41.981 "adrfam": "ipv4", 00:26:41.981 "trsvcid": "$NVMF_PORT", 00:26:41.981 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:41.981 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:41.981 "hdgst": ${hdgst:-false}, 00:26:41.981 "ddgst": ${ddgst:-false} 00:26:41.981 }, 00:26:41.981 "method": "bdev_nvme_attach_controller" 00:26:41.981 } 00:26:41.981 EOF 00:26:41.981 )") 00:26:41.981 23:12:14 -- nvmf/common.sh@542 -- # cat 00:26:42.240 23:12:14 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:42.240 23:12:14 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:42.240 { 00:26:42.240 "params": { 00:26:42.240 "name": "Nvme$subsystem", 00:26:42.240 "trtype": "$TEST_TRANSPORT", 00:26:42.240 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:42.240 "adrfam": "ipv4", 00:26:42.240 "trsvcid": "$NVMF_PORT", 00:26:42.241 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:42.241 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:42.241 "hdgst": ${hdgst:-false}, 00:26:42.241 "ddgst": ${ddgst:-false} 00:26:42.241 }, 00:26:42.241 "method": "bdev_nvme_attach_controller" 00:26:42.241 } 00:26:42.241 EOF 00:26:42.241 )") 00:26:42.241 23:12:14 -- nvmf/common.sh@542 -- # cat 00:26:42.241 23:12:14 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:42.241 23:12:14 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:42.241 { 00:26:42.241 "params": { 00:26:42.241 "name": "Nvme$subsystem", 00:26:42.241 "trtype": "$TEST_TRANSPORT", 00:26:42.241 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:42.241 "adrfam": "ipv4", 00:26:42.241 "trsvcid": "$NVMF_PORT", 00:26:42.241 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:42.241 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:42.241 "hdgst": ${hdgst:-false}, 00:26:42.241 "ddgst": ${ddgst:-false} 00:26:42.241 }, 00:26:42.241 "method": "bdev_nvme_attach_controller" 00:26:42.241 } 00:26:42.241 EOF 
00:26:42.241 )") 00:26:42.241 23:12:14 -- nvmf/common.sh@542 -- # cat 00:26:42.241 [2024-07-24 23:12:14.428821] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:26:42.241 [2024-07-24 23:12:14.428874] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3326731 ] 00:26:42.241 23:12:14 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:42.241 23:12:14 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:42.241 { 00:26:42.241 "params": { 00:26:42.241 "name": "Nvme$subsystem", 00:26:42.241 "trtype": "$TEST_TRANSPORT", 00:26:42.241 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:42.241 "adrfam": "ipv4", 00:26:42.241 "trsvcid": "$NVMF_PORT", 00:26:42.241 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:42.241 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:42.241 "hdgst": ${hdgst:-false}, 00:26:42.241 "ddgst": ${ddgst:-false} 00:26:42.241 }, 00:26:42.241 "method": "bdev_nvme_attach_controller" 00:26:42.241 } 00:26:42.241 EOF 00:26:42.241 )") 00:26:42.241 23:12:14 -- nvmf/common.sh@542 -- # cat 00:26:42.241 23:12:14 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:42.241 23:12:14 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:42.241 { 00:26:42.241 "params": { 00:26:42.241 "name": "Nvme$subsystem", 00:26:42.241 "trtype": "$TEST_TRANSPORT", 00:26:42.241 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:42.241 "adrfam": "ipv4", 00:26:42.241 "trsvcid": "$NVMF_PORT", 00:26:42.241 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:42.241 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:42.241 "hdgst": ${hdgst:-false}, 00:26:42.241 "ddgst": ${ddgst:-false} 00:26:42.241 }, 00:26:42.241 "method": "bdev_nvme_attach_controller" 00:26:42.241 } 00:26:42.241 EOF 00:26:42.241 )") 00:26:42.241 23:12:14 -- nvmf/common.sh@542 -- # cat 00:26:42.241 
23:12:14 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:42.241 23:12:14 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:42.241 { 00:26:42.241 "params": { 00:26:42.241 "name": "Nvme$subsystem", 00:26:42.241 "trtype": "$TEST_TRANSPORT", 00:26:42.241 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:42.241 "adrfam": "ipv4", 00:26:42.241 "trsvcid": "$NVMF_PORT", 00:26:42.241 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:42.241 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:42.241 "hdgst": ${hdgst:-false}, 00:26:42.241 "ddgst": ${ddgst:-false} 00:26:42.241 }, 00:26:42.241 "method": "bdev_nvme_attach_controller" 00:26:42.241 } 00:26:42.241 EOF 00:26:42.241 )") 00:26:42.241 23:12:14 -- nvmf/common.sh@542 -- # cat 00:26:42.241 23:12:14 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:42.241 23:12:14 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:42.241 { 00:26:42.241 "params": { 00:26:42.241 "name": "Nvme$subsystem", 00:26:42.241 "trtype": "$TEST_TRANSPORT", 00:26:42.241 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:42.241 "adrfam": "ipv4", 00:26:42.241 "trsvcid": "$NVMF_PORT", 00:26:42.241 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:42.241 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:42.241 "hdgst": ${hdgst:-false}, 00:26:42.241 "ddgst": ${ddgst:-false} 00:26:42.241 }, 00:26:42.241 "method": "bdev_nvme_attach_controller" 00:26:42.241 } 00:26:42.241 EOF 00:26:42.241 )") 00:26:42.241 23:12:14 -- nvmf/common.sh@542 -- # cat 00:26:42.241 23:12:14 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:42.241 23:12:14 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:42.241 { 00:26:42.241 "params": { 00:26:42.241 "name": "Nvme$subsystem", 00:26:42.241 "trtype": "$TEST_TRANSPORT", 00:26:42.241 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:42.241 "adrfam": "ipv4", 00:26:42.241 "trsvcid": "$NVMF_PORT", 00:26:42.241 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:42.241 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:26:42.241 "hdgst": ${hdgst:-false}, 00:26:42.241 "ddgst": ${ddgst:-false} 00:26:42.241 }, 00:26:42.241 "method": "bdev_nvme_attach_controller" 00:26:42.241 } 00:26:42.241 EOF 00:26:42.241 )") 00:26:42.241 23:12:14 -- nvmf/common.sh@542 -- # cat 00:26:42.241 EAL: No free 2048 kB hugepages reported on node 1 00:26:42.241 23:12:14 -- nvmf/common.sh@544 -- # jq . 00:26:42.241 23:12:14 -- nvmf/common.sh@545 -- # IFS=, 00:26:42.241 23:12:14 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:42.241 "params": { 00:26:42.241 "name": "Nvme1", 00:26:42.241 "trtype": "tcp", 00:26:42.241 "traddr": "10.0.0.2", 00:26:42.241 "adrfam": "ipv4", 00:26:42.241 "trsvcid": "4420", 00:26:42.241 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:42.241 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:42.241 "hdgst": false, 00:26:42.241 "ddgst": false 00:26:42.241 }, 00:26:42.241 "method": "bdev_nvme_attach_controller" 00:26:42.241 },{ 00:26:42.241 "params": { 00:26:42.241 "name": "Nvme2", 00:26:42.241 "trtype": "tcp", 00:26:42.241 "traddr": "10.0.0.2", 00:26:42.241 "adrfam": "ipv4", 00:26:42.241 "trsvcid": "4420", 00:26:42.241 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:42.241 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:42.241 "hdgst": false, 00:26:42.241 "ddgst": false 00:26:42.241 }, 00:26:42.241 "method": "bdev_nvme_attach_controller" 00:26:42.241 },{ 00:26:42.241 "params": { 00:26:42.241 "name": "Nvme3", 00:26:42.241 "trtype": "tcp", 00:26:42.241 "traddr": "10.0.0.2", 00:26:42.241 "adrfam": "ipv4", 00:26:42.241 "trsvcid": "4420", 00:26:42.241 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:42.241 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:42.241 "hdgst": false, 00:26:42.241 "ddgst": false 00:26:42.241 }, 00:26:42.241 "method": "bdev_nvme_attach_controller" 00:26:42.241 },{ 00:26:42.241 "params": { 00:26:42.241 "name": "Nvme4", 00:26:42.241 "trtype": "tcp", 00:26:42.241 "traddr": "10.0.0.2", 00:26:42.241 "adrfam": "ipv4", 00:26:42.241 "trsvcid": 
"4420", 00:26:42.241 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:42.241 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:42.241 "hdgst": false, 00:26:42.241 "ddgst": false 00:26:42.241 }, 00:26:42.241 "method": "bdev_nvme_attach_controller" 00:26:42.241 },{ 00:26:42.241 "params": { 00:26:42.241 "name": "Nvme5", 00:26:42.241 "trtype": "tcp", 00:26:42.241 "traddr": "10.0.0.2", 00:26:42.241 "adrfam": "ipv4", 00:26:42.241 "trsvcid": "4420", 00:26:42.241 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:42.241 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:42.241 "hdgst": false, 00:26:42.241 "ddgst": false 00:26:42.241 }, 00:26:42.241 "method": "bdev_nvme_attach_controller" 00:26:42.241 },{ 00:26:42.241 "params": { 00:26:42.241 "name": "Nvme6", 00:26:42.241 "trtype": "tcp", 00:26:42.241 "traddr": "10.0.0.2", 00:26:42.241 "adrfam": "ipv4", 00:26:42.241 "trsvcid": "4420", 00:26:42.241 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:42.241 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:42.241 "hdgst": false, 00:26:42.241 "ddgst": false 00:26:42.241 }, 00:26:42.241 "method": "bdev_nvme_attach_controller" 00:26:42.241 },{ 00:26:42.241 "params": { 00:26:42.241 "name": "Nvme7", 00:26:42.241 "trtype": "tcp", 00:26:42.241 "traddr": "10.0.0.2", 00:26:42.241 "adrfam": "ipv4", 00:26:42.241 "trsvcid": "4420", 00:26:42.241 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:42.241 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:42.241 "hdgst": false, 00:26:42.241 "ddgst": false 00:26:42.241 }, 00:26:42.241 "method": "bdev_nvme_attach_controller" 00:26:42.241 },{ 00:26:42.241 "params": { 00:26:42.241 "name": "Nvme8", 00:26:42.241 "trtype": "tcp", 00:26:42.241 "traddr": "10.0.0.2", 00:26:42.241 "adrfam": "ipv4", 00:26:42.241 "trsvcid": "4420", 00:26:42.241 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:26:42.241 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:26:42.241 "hdgst": false, 00:26:42.241 "ddgst": false 00:26:42.241 }, 00:26:42.241 "method": "bdev_nvme_attach_controller" 00:26:42.241 },{ 00:26:42.241 
"params": { 00:26:42.242 "name": "Nvme9", 00:26:42.242 "trtype": "tcp", 00:26:42.242 "traddr": "10.0.0.2", 00:26:42.242 "adrfam": "ipv4", 00:26:42.242 "trsvcid": "4420", 00:26:42.242 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:42.242 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:26:42.242 "hdgst": false, 00:26:42.242 "ddgst": false 00:26:42.242 }, 00:26:42.242 "method": "bdev_nvme_attach_controller" 00:26:42.242 },{ 00:26:42.242 "params": { 00:26:42.242 "name": "Nvme10", 00:26:42.242 "trtype": "tcp", 00:26:42.242 "traddr": "10.0.0.2", 00:26:42.242 "adrfam": "ipv4", 00:26:42.242 "trsvcid": "4420", 00:26:42.242 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:42.242 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:42.242 "hdgst": false, 00:26:42.242 "ddgst": false 00:26:42.242 }, 00:26:42.242 "method": "bdev_nvme_attach_controller" 00:26:42.242 }' 00:26:42.242 [2024-07-24 23:12:14.503527] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:42.242 [2024-07-24 23:12:14.539285] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:44.179 Running I/O for 10 seconds... 
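The `--json /dev/fd/63` config that bdevperf receives above is built by `gen_nvmf_target_json`, which appends one `bdev_nvme_attach_controller` stanza per subsystem and joins them with `IFS=,`. A minimal re-sketch of that generator is below; it is simplified in that the transport, address, and port are hard-coded to the values visible in the expanded output (`tcp`, `10.0.0.2`, `4420`) instead of the `$TEST_TRANSPORT`/`$NVMF_FIRST_TARGET_IP`/`$NVMF_PORT` placeholders the real helper substitutes.

```shell
#!/usr/bin/env bash
# Sketch of gen_nvmf_target_json: one attach-controller stanza per
# subsystem 1..10, comma-joined the way the real helper prints them.
set -euo pipefail

config=()
for subsystem in {1..10}; do
    # Each stanza names the bdev NvmeN and points it at cnodeN/hostN,
    # mirroring the expanded JSON shown in the trace.
    printf -v stanza '{"params":{"name":"Nvme%d","trtype":"tcp","traddr":"10.0.0.2","adrfam":"ipv4","trsvcid":"4420","subnqn":"nqn.2016-06.io.spdk:cnode%d","hostnqn":"nqn.2016-06.io.spdk:host%d","hdgst":false,"ddgst":false},"method":"bdev_nvme_attach_controller"}' \
        "$subsystem" "$subsystem" "$subsystem"
    config+=("$stanza")
done

# Join with commas, as the real helper does via IFS=, before printf.
IFS=,
joined="${config[*]}"
unset IFS

echo "${joined:0:120}..."   # peek at the front of the generated config
```

This is why the bdevperf results later report ten jobs, Nvme1n1 through Nvme10n1: each stanza becomes one attached controller and hence one verify job.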
00:26:44.458 23:12:16 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:26:44.458 23:12:16 -- common/autotest_common.sh@852 -- # return 0
00:26:44.458 23:12:16 -- target/shutdown.sh@126 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init
00:26:44.458 23:12:16 -- common/autotest_common.sh@551 -- # xtrace_disable
00:26:44.458 23:12:16 -- common/autotest_common.sh@10 -- # set +x
00:26:44.458 23:12:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:26:44.458 23:12:16 -- target/shutdown.sh@129 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:26:44.458 23:12:16 -- target/shutdown.sh@131 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1
00:26:44.458 23:12:16 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']'
00:26:44.458 23:12:16 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']'
00:26:44.458 23:12:16 -- target/shutdown.sh@57 -- # local ret=1
00:26:44.458 23:12:16 -- target/shutdown.sh@58 -- # local i
00:26:44.458 23:12:16 -- target/shutdown.sh@59 -- # (( i = 10 ))
00:26:44.458 23:12:16 -- target/shutdown.sh@59 -- # (( i != 0 ))
00:26:44.458 23:12:16 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:26:44.458 23:12:16 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops'
00:26:44.458 23:12:16 -- common/autotest_common.sh@551 -- # xtrace_disable
00:26:44.458 23:12:16 -- common/autotest_common.sh@10 -- # set +x
00:26:44.458 23:12:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:26:44.458 23:12:16 -- target/shutdown.sh@60 -- # read_io_count=211
00:26:44.458 23:12:16 -- target/shutdown.sh@63 -- # '[' 211 -ge 100 ']'
00:26:44.458 23:12:16 -- target/shutdown.sh@64 -- # ret=0
00:26:44.458 23:12:16 -- target/shutdown.sh@65 -- # break
00:26:44.458 23:12:16 -- target/shutdown.sh@69 -- # return 0
00:26:44.458 23:12:16 -- target/shutdown.sh@134 -- # killprocess 3326419
00:26:44.458 23:12:16 -- common/autotest_common.sh@926 -- # '[' -z
3326419 ']'
00:26:44.458 23:12:16 -- common/autotest_common.sh@930 -- # kill -0 3326419
00:26:44.458 23:12:16 -- common/autotest_common.sh@931 -- # uname
00:26:44.458 23:12:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:26:44.458 23:12:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3326419
00:26:44.458 23:12:16 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:26:44.458 23:12:16 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:26:44.458 23:12:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3326419'
killing process with pid 3326419
00:26:44.458 23:12:16 -- common/autotest_common.sh@945 -- # kill 3326419
00:26:44.458 23:12:16 -- common/autotest_common.sh@950 -- # wait 3326419
00:26:44.458 [2024-07-24 23:12:16.712317] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9faed0 is same with the state(5) to be set
00:26:44.458 [2024-07-24 23:12:16.714304] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd860 is same with the state(5) to be set
00:26:44.459 [2024-07-24 23:12:16.715213] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fb380 is same with the state(5) to be set
00:26:44.459 [2024-07-24 23:12:16.715246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.459 [2024-07-24 23:12:16.715280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.459 [2024-07-24 23:12:16.715298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.459 [2024-07-24 23:12:16.715309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.459 [2024-07-24 23:12:16.715323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.459 [2024-07-24 23:12:16.715334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.459 [2024-07-24 23:12:16.715349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.459 [2024-07-24 23:12:16.715359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.459 [2024-07-24 23:12:16.715370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.459 [2024-07-24 23:12:16.715380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.459 [2024-07-24 23:12:16.715393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.459 [2024-07-24 23:12:16.715404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.459 [2024-07-24 23:12:16.715418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.459 [2024-07-24 23:12:16.715431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.459 [2024-07-24 23:12:16.715443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.459 [2024-07-24 23:12:16.715453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.459 [2024-07-24 23:12:16.715465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.459 [2024-07-24 23:12:16.715475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.459 [2024-07-24 23:12:16.715487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.459 [2024-07-24 23:12:16.715499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.459 [2024-07-24 23:12:16.715511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.459 [2024-07-24 23:12:16.715522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.459 [2024-07-24 23:12:16.715535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.460 [2024-07-24 23:12:16.715545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.460 [2024-07-24 23:12:16.715556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.460 [2024-07-24 23:12:16.715567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.460 [2024-07-24 23:12:16.715579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.460 [2024-07-24 23:12:16.715595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.460 [2024-07-24 23:12:16.715608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.460 [2024-07-24 23:12:16.715618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.460 [2024-07-24 23:12:16.715629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.460 [2024-07-24 23:12:16.715641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.460 [2024-07-24 23:12:16.715652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.460 [2024-07-24 23:12:16.715665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.460 [2024-07-24 23:12:16.715680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.460 [2024-07-24 23:12:16.715690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.460 [2024-07-24 23:12:16.715702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.460 [2024-07-24 23:12:16.715712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.460 [2024-07-24 23:12:16.715734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.460 [2024-07-24 23:12:16.715747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.460 [2024-07-24 23:12:16.715761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.460 [2024-07-24 23:12:16.715771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.460 [2024-07-24 23:12:16.715783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.460 [2024-07-24 23:12:16.715793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.460 [2024-07-24 23:12:16.715805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.460 [2024-07-24 23:12:16.715816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.460 [2024-07-24 23:12:16.715833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.460 [2024-07-24 23:12:16.715843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.460 [2024-07-24 23:12:16.715854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.460 [2024-07-24 23:12:16.715865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.460 [2024-07-24 23:12:16.715877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.460 [2024-07-24 23:12:16.715886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.460 [2024-07-24 23:12:16.715899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.460 [2024-07-24 23:12:16.715908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.460 [2024-07-24 23:12:16.715918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.460 [2024-07-24 23:12:16.715927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.460 [2024-07-24 23:12:16.715938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.460 [2024-07-24 23:12:16.715947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.460 [2024-07-24 23:12:16.715958] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.460 [2024-07-24 23:12:16.715967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.460 [2024-07-24 23:12:16.715978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.460 [2024-07-24 23:12:16.715987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.460 [2024-07-24 23:12:16.715998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.460 [2024-07-24 23:12:16.716007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.460 [2024-07-24 23:12:16.716017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.460 [2024-07-24 23:12:16.716028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.460 [2024-07-24 23:12:16.716038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.460 [2024-07-24 23:12:16.716048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.460 [2024-07-24 23:12:16.716058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.460 [2024-07-24 23:12:16.716067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.460 [2024-07-24 23:12:16.716077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.460 [2024-07-24 23:12:16.716086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.460 [2024-07-24 23:12:16.716097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.461 [2024-07-24 23:12:16.716106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.461 [2024-07-24 23:12:16.716116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.461 [2024-07-24 23:12:16.716126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.461 [2024-07-24 23:12:16.716136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.461 [2024-07-24 23:12:16.716150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.461 [2024-07-24 23:12:16.716161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.461 [2024-07-24 23:12:16.716170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.461 [2024-07-24 23:12:16.716180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:44.461 [2024-07-24 23:12:16.716190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.461 [2024-07-24 23:12:16.716200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.461 [2024-07-24 23:12:16.716209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.461 [2024-07-24 23:12:16.716220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.461 [2024-07-24 23:12:16.716229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.461 [2024-07-24 23:12:16.716240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.461 [2024-07-24 23:12:16.716249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.461 [2024-07-24 23:12:16.716259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.461 [2024-07-24 23:12:16.716268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.461 [2024-07-24 23:12:16.716279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.461 [2024-07-24 23:12:16.716288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.461 [2024-07-24 
23:12:16.716298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.461 [2024-07-24 23:12:16.716307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.461 [2024-07-24 23:12:16.716318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.461 [2024-07-24 23:12:16.716327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.461 [2024-07-24 23:12:16.716337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.461 [2024-07-24 23:12:16.716347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.461 [2024-07-24 23:12:16.716358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.461 [2024-07-24 23:12:16.716367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.461 [2024-07-24 23:12:16.716377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.461 [2024-07-24 23:12:16.716386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.461 [2024-07-24 23:12:16.716398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.461 [2024-07-24 23:12:16.716407] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.461 [2024-07-24 23:12:16.716418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.461 [2024-07-24 23:12:16.716427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.461 [2024-07-24 23:12:16.716437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.461 [2024-07-24 23:12:16.716446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.461 [2024-07-24 23:12:16.716456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.461 [2024-07-24 23:12:16.716466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.461 [2024-07-24 23:12:16.716476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.461 [2024-07-24 23:12:16.716485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.461 [2024-07-24 23:12:16.716495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.461 [2024-07-24 23:12:16.716506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.461 [2024-07-24 23:12:16.716517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:59 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.461 [2024-07-24 23:12:16.716526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.461 [2024-07-24 23:12:16.716537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.461 [2024-07-24 23:12:16.716545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.461 [2024-07-24 23:12:16.716556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.461 [2024-07-24 23:12:16.716565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.461 [2024-07-24 23:12:16.716576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.461 [2024-07-24 23:12:16.716584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.461 [2024-07-24 23:12:16.716595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.461 [2024-07-24 23:12:16.716604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.461 [2024-07-24 23:12:16.716615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.461 [2024-07-24 23:12:16.716624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:26:44.461 [2024-07-24 23:12:16.716634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.461 [2024-07-24 23:12:16.716645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.461 [2024-07-24 23:12:16.716742] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1c65df0 was disconnected and freed. reset controller. 00:26:44.461 [2024-07-24 23:12:16.717328] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fb830 is same with the state(5) to be set 00:26:44.461 [2024-07-24 23:12:16.717356] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fb830 is same with the state(5) to be set 00:26:44.461 [2024-07-24 23:12:16.717366] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fb830 is same with the state(5) to be set 00:26:44.461 [2024-07-24 23:12:16.717375] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fb830 is same with the state(5) to be set 00:26:44.461 [2024-07-24 23:12:16.717384] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fb830 is same with the state(5) to be set 00:26:44.461 [2024-07-24 23:12:16.717393] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fb830 is same with the state(5) to be set 00:26:44.461 [2024-07-24 23:12:16.717403] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fb830 is same with the state(5) to be set 00:26:44.461 [2024-07-24 23:12:16.717412] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fb830 is same with the state(5) to be set 00:26:44.461 [2024-07-24 23:12:16.717420] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fb830 is same with the state(5) to be set 
00:26:44.461 [2024-07-24 23:12:16.717429] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fb830 is same with the state(5) to be set 00:26:44.461 [2024-07-24 23:12:16.717438] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fb830 is same with the state(5) to be set 00:26:44.461 [2024-07-24 23:12:16.717450] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fb830 is same with the state(5) to be set 00:26:44.461 [2024-07-24 23:12:16.717459] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fb830 is same with the state(5) to be set 00:26:44.461 [2024-07-24 23:12:16.717468] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fb830 is same with the state(5) to be set 00:26:44.461 [2024-07-24 23:12:16.717476] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fb830 is same with the state(5) to be set 00:26:44.461 [2024-07-24 23:12:16.717485] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fb830 is same with the state(5) to be set 00:26:44.461 [2024-07-24 23:12:16.717494] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fb830 is same with the state(5) to be set 00:26:44.461 [2024-07-24 23:12:16.717502] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fb830 is same with the state(5) to be set 00:26:44.461 [2024-07-24 23:12:16.717511] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fb830 is same with the state(5) to be set 00:26:44.461 [2024-07-24 23:12:16.717520] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fb830 is same with the state(5) to be set 00:26:44.461 [2024-07-24 23:12:16.717529] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fb830 is same with the state(5) to be set 00:26:44.461 [2024-07-24 23:12:16.717537] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fb830 is same with the state(5) to be set 00:26:44.462 [2024-07-24 23:12:16.717546] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fb830 is same with the state(5) to be set 00:26:44.462 [2024-07-24 23:12:16.717554] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fb830 is same with the state(5) to be set 00:26:44.462 [2024-07-24 23:12:16.717563] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fb830 is same with the state(5) to be set 00:26:44.462 [2024-07-24 23:12:16.717575] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fb830 is same with the state(5) to be set 00:26:44.462 [2024-07-24 23:12:16.717585] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fb830 is same with the state(5) to be set 00:26:44.462 [2024-07-24 23:12:16.717593] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fb830 is same with the state(5) to be set 00:26:44.462 [2024-07-24 23:12:16.717602] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fb830 is same with the state(5) to be set 00:26:44.462 [2024-07-24 23:12:16.717610] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fb830 is same with the state(5) to be set 00:26:44.462 [2024-07-24 23:12:16.717619] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fb830 is same with the state(5) to be set 00:26:44.462 [2024-07-24 23:12:16.717628] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fb830 is same with the state(5) to be set 00:26:44.462 [2024-07-24 23:12:16.717637] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fb830 is same with the state(5) to be set 00:26:44.462 [2024-07-24 23:12:16.717646] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x9fb830 is same with the state(5) to be set 00:26:44.462 [2024-07-24 23:12:16.717655] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fb830 is same with the state(5) to be set 00:26:44.462 [2024-07-24 23:12:16.717664] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fb830 is same with the state(5) to be set 00:26:44.462 [2024-07-24 23:12:16.717672] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fb830 is same with the state(5) to be set 00:26:44.462 [2024-07-24 23:12:16.717681] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fb830 is same with the state(5) to be set 00:26:44.462 [2024-07-24 23:12:16.717690] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fb830 is same with the state(5) to be set 00:26:44.462 [2024-07-24 23:12:16.717699] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fb830 is same with the state(5) to be set 00:26:44.462 [2024-07-24 23:12:16.717708] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fb830 is same with the state(5) to be set 00:26:44.462 [2024-07-24 23:12:16.717721] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fb830 is same with the state(5) to be set 00:26:44.462 [2024-07-24 23:12:16.717731] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fb830 is same with the state(5) to be set 00:26:44.462 [2024-07-24 23:12:16.717740] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fb830 is same with the state(5) to be set 00:26:44.462 [2024-07-24 23:12:16.717750] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fb830 is same with the state(5) to be set 00:26:44.462 [2024-07-24 23:12:16.717759] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fb830 
is same with the state(5) to be set 00:26:44.462 [2024-07-24 23:12:16.717767] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fb830 is same with the state(5) to be set 00:26:44.462 [2024-07-24 23:12:16.717776] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fb830 is same with the state(5) to be set 00:26:44.462 [2024-07-24 23:12:16.717784] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fb830 is same with the state(5) to be set 00:26:44.462 [2024-07-24 23:12:16.717793] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fb830 is same with the state(5) to be set 00:26:44.462 [2024-07-24 23:12:16.717801] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fb830 is same with the state(5) to be set 00:26:44.462 [2024-07-24 23:12:16.717810] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fb830 is same with the state(5) to be set 00:26:44.462 [2024-07-24 23:12:16.717819] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fb830 is same with the state(5) to be set 00:26:44.462 [2024-07-24 23:12:16.717829] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fb830 is same with the state(5) to be set 00:26:44.462 [2024-07-24 23:12:16.717837] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fb830 is same with the state(5) to be set 00:26:44.462 [2024-07-24 23:12:16.717846] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fb830 is same with the state(5) to be set 00:26:44.462 [2024-07-24 23:12:16.717854] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fb830 is same with the state(5) to be set 00:26:44.462 [2024-07-24 23:12:16.717864] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fb830 is same with the state(5) to be set 
00:26:44.462 [2024-07-24 23:12:16.717873] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fb830 is same with the state(5) to be set 00:26:44.462 [2024-07-24 23:12:16.718695] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fbcc0 is same with the state(5) to be set 00:26:44.462 [2024-07-24 23:12:16.718724] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fbcc0 is same with the state(5) to be set 00:26:44.462 [2024-07-24 23:12:16.718734] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fbcc0 is same with the state(5) to be set 00:26:44.462 [2024-07-24 23:12:16.718743] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fbcc0 is same with the state(5) to be set 00:26:44.462 [2024-07-24 23:12:16.718752] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fbcc0 is same with the state(5) to be set 00:26:44.462 [2024-07-24 23:12:16.718761] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fbcc0 is same with the state(5) to be set 00:26:44.462 [2024-07-24 23:12:16.718770] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fbcc0 is same with the state(5) to be set 00:26:44.462 [2024-07-24 23:12:16.718779] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fbcc0 is same with the state(5) to be set 00:26:44.462 [2024-07-24 23:12:16.718788] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fbcc0 is same with the state(5) to be set 00:26:44.462 [2024-07-24 23:12:16.718796] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fbcc0 is same with the state(5) to be set 00:26:44.462 [2024-07-24 23:12:16.718805] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fbcc0 is same with the state(5) to be set 00:26:44.462 [2024-07-24 23:12:16.718814] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fbcc0 is same with the state(5) to be set 00:26:44.462 [2024-07-24 23:12:16.718823] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fbcc0 is same with the state(5) to be set 00:26:44.462 [2024-07-24 23:12:16.718832] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fbcc0 is same with the state(5) to be set 00:26:44.462 [2024-07-24 23:12:16.718841] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fbcc0 is same with the state(5) to be set 00:26:44.462 [2024-07-24 23:12:16.718849] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fbcc0 is same with the state(5) to be set 00:26:44.462 [2024-07-24 23:12:16.718858] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fbcc0 is same with the state(5) to be set 00:26:44.462 [2024-07-24 23:12:16.718867] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fbcc0 is same with the state(5) to be set 00:26:44.462 [2024-07-24 23:12:16.718875] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fbcc0 is same with the state(5) to be set 00:26:44.462 [2024-07-24 23:12:16.718883] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fbcc0 is same with the state(5) to be set 00:26:44.462 [2024-07-24 23:12:16.718892] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fbcc0 is same with the state(5) to be set 00:26:44.462 [2024-07-24 23:12:16.718905] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fbcc0 is same with the state(5) to be set 00:26:44.462 [2024-07-24 23:12:16.718914] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fbcc0 is same with the state(5) to be set 00:26:44.462 [2024-07-24 23:12:16.718923] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x9fbcc0 is same with the state(5) to be set 00:26:44.462 [2024-07-24 23:12:16.718932] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fbcc0 is same with the state(5) to be set 00:26:44.462 [2024-07-24 23:12:16.718941] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fbcc0 is same with the state(5) to be set 00:26:44.462 [2024-07-24 23:12:16.718949] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fbcc0 is same with the state(5) to be set 00:26:44.462 [2024-07-24 23:12:16.718958] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fbcc0 is same with the state(5) to be set 00:26:44.463 [2024-07-24 23:12:16.718966] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fbcc0 is same with the state(5) to be set 00:26:44.463 [2024-07-24 23:12:16.718975] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fbcc0 is same with the state(5) to be set 00:26:44.463 [2024-07-24 23:12:16.718984] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fbcc0 is same with the state(5) to be set 00:26:44.463 [2024-07-24 23:12:16.718992] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fbcc0 is same with the state(5) to be set 00:26:44.463 [2024-07-24 23:12:16.719002] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fbcc0 is same with the state(5) to be set 00:26:44.463 [2024-07-24 23:12:16.719011] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fbcc0 is same with the state(5) to be set 00:26:44.463 [2024-07-24 23:12:16.719019] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fbcc0 is same with the state(5) to be set 00:26:44.463 [2024-07-24 23:12:16.719028] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fbcc0 
is same with the state(5) to be set 00:26:44.463 [2024-07-24 23:12:16.719037] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fbcc0 is same with the state(5) to be set 00:26:44.463 [2024-07-24 23:12:16.719046] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fbcc0 is same with the state(5) to be set 00:26:44.463 [2024-07-24 23:12:16.719054] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fbcc0 is same with the state(5) to be set 00:26:44.463 [2024-07-24 23:12:16.719063] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fbcc0 is same with the state(5) to be set 00:26:44.463 [2024-07-24 23:12:16.719072] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fbcc0 is same with the state(5) to be set 00:26:44.463 [2024-07-24 23:12:16.719080] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fbcc0 is same with the state(5) to be set 00:26:44.463 [2024-07-24 23:12:16.719089] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fbcc0 is same with the state(5) to be set 00:26:44.463 [2024-07-24 23:12:16.719098] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fbcc0 is same with the state(5) to be set 00:26:44.463 [2024-07-24 23:12:16.719107] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fbcc0 is same with the state(5) to be set 00:26:44.463 [2024-07-24 23:12:16.719116] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fbcc0 is same with the state(5) to be set 00:26:44.463 [2024-07-24 23:12:16.719125] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fbcc0 is same with the state(5) to be set 00:26:44.463 [2024-07-24 23:12:16.719134] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fbcc0 is same with the state(5) to be set 
00:26:44.463 [2024-07-24 23:12:16.719144] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fbcc0 is same with the state(5) to be set 00:26:44.463 [2024-07-24 23:12:16.719153] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fbcc0 is same with the state(5) to be set 00:26:44.463 [2024-07-24 23:12:16.719162] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fbcc0 is same with the state(5) to be set 00:26:44.463 [2024-07-24 23:12:16.719171] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fbcc0 is same with the state(5) to be set 00:26:44.463 [2024-07-24 23:12:16.719179] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fbcc0 is same with the state(5) to be set 00:26:44.463 [2024-07-24 23:12:16.719188] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fbcc0 is same with the state(5) to be set 00:26:44.463 [2024-07-24 23:12:16.719197] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fbcc0 is same with the state(5) to be set 00:26:44.463 [2024-07-24 23:12:16.719206] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fbcc0 is same with the state(5) to be set 00:26:44.463 [2024-07-24 23:12:16.719215] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fbcc0 is same with the state(5) to be set 00:26:44.463 [2024-07-24 23:12:16.719224] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fbcc0 is same with the state(5) to be set 00:26:44.463 [2024-07-24 23:12:16.719233] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fbcc0 is same with the state(5) to be set 00:26:44.463 [2024-07-24 23:12:16.719241] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fbcc0 is same with the state(5) to be set 00:26:44.463 [2024-07-24 23:12:16.719250] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fbcc0 is same with the state(5) to be set 00:26:44.463 [2024-07-24 23:12:16.719258] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fbcc0 is same with the state(5) to be set 00:26:44.463 [2024-07-24 23:12:16.719267] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fbcc0 is same with the state(5) to be set 00:26:44.463 [2024-07-24 23:12:16.719623] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.463 [2024-07-24 23:12:16.719649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.463 [2024-07-24 23:12:16.719660] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.463 [2024-07-24 23:12:16.719669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.463 [2024-07-24 23:12:16.719679] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.463 [2024-07-24 23:12:16.719688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.463 [2024-07-24 23:12:16.719698] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.463 [2024-07-24 23:12:16.719707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.463 [2024-07-24 23:12:16.719723] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad5aa0 is same with the state(5) to be 
set 00:26:44.463 [2024-07-24 23:12:16.719754] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.463 [2024-07-24 23:12:16.719764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.463 [2024-07-24 23:12:16.719777] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.463 [2024-07-24 23:12:16.719789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.463 [2024-07-24 23:12:16.719785] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc170 is same with the state(5) to be set 00:26:44.463 [2024-07-24 23:12:16.719801] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.463 [2024-07-24 23:12:16.719804] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc170 is same with the state(5) to be set 00:26:44.463 [2024-07-24 23:12:16.719811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.463 [2024-07-24 23:12:16.719814] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc170 is same with the state(5) to be set 00:26:44.463 [2024-07-24 23:12:16.719821] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.463 [2024-07-24 23:12:16.719823] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc170 is same with the state(5) to be set 00:26:44.463 [2024-07-24 23:12:16.719830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.463 [2024-07-24 23:12:16.719832] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc170 is same with the state(5) to be set 00:26:44.463 [2024-07-24 23:12:16.719840] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aadc10 is same with the state(5) to be set 00:26:44.463 [2024-07-24 23:12:16.719842] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc170 is same with the state(5) to be set 00:26:44.463 [2024-07-24 23:12:16.719852] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc170 is same with the state(5) to be set 00:26:44.463 [2024-07-24 23:12:16.719860] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc170 is same with the state(5) to be set 00:26:44.463 [2024-07-24 23:12:16.719869] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc170 is same with the state(5) to be set 00:26:44.463 [2024-07-24 23:12:16.719878] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc170 is same with the state(5) to be set 00:26:44.463 [2024-07-24 23:12:16.719881] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.463 [2024-07-24 23:12:16.719886] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc170 is same with the state(5) to be set 00:26:44.463 [2024-07-24 23:12:16.719892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.463 [2024-07-24 23:12:16.719896] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc170 is same with the state(5) to be set 00:26:44.463 [2024-07-24 23:12:16.719903] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) 
qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.463 [2024-07-24 23:12:16.719905] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc170 is same with the state(5) to be set 00:26:44.463 [2024-07-24 23:12:16.719912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.463 [2024-07-24 23:12:16.719914] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc170 is same with the state(5) to be set 00:26:44.463 [2024-07-24 23:12:16.719923] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.463 [2024-07-24 23:12:16.719924] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc170 is same with the state(5) to be set 00:26:44.463 [2024-07-24 23:12:16.719936] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc170 is same with the state(5) to be set 00:26:44.463 [2024-07-24 23:12:16.719936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.463 [2024-07-24 23:12:16.719946] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc170 is same with the state(5) to be set 00:26:44.463 [2024-07-24 23:12:16.719949] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.463 [2024-07-24 23:12:16.719956] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc170 is same with the state(5) to be set 00:26:44.463 [2024-07-24 23:12:16.719958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.463 [2024-07-24 23:12:16.719965] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state 
of tqpair=0x9fc170 is same with the state(5) to be set 00:26:44.463 [2024-07-24 23:12:16.719968] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab8560 is same with the state(5) to be set 00:26:44.463 [2024-07-24 23:12:16.719974] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc170 is same with the state(5) to be set 00:26:44.463 [2024-07-24 23:12:16.719983] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc170 is same with the state(5) to be set 00:26:44.463 [2024-07-24 23:12:16.719992] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc170 is same with the state(5) to be set 00:26:44.464 [2024-07-24 23:12:16.720001] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc170 is same with the state(5) to be set 00:26:44.464 [2024-07-24 23:12:16.720002] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.464 [2024-07-24 23:12:16.720010] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc170 is same with the state(5) to be set 00:26:44.464 [2024-07-24 23:12:16.720013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.464 [2024-07-24 23:12:16.720019] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc170 is same with the state(5) to be set 00:26:44.464 [2024-07-24 23:12:16.720024] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.464 [2024-07-24 23:12:16.720029] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc170 is same with the state(5) to be set 00:26:44.464 [2024-07-24 23:12:16.720033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.464 [2024-07-24 23:12:16.720038] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc170 is same with the state(5) to be set 00:26:44.464 [2024-07-24 23:12:16.720043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.464 [2024-07-24 23:12:16.720047] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc170 is same with the state(5) to be set 00:26:44.464 [2024-07-24 23:12:16.720054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.464 [2024-07-24 23:12:16.720056] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc170 is same with the state(5) to be set 00:26:44.464 [2024-07-24 23:12:16.720064] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.464 [2024-07-24 23:12:16.720065] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc170 is same with the state(5) to be set 00:26:44.464 [2024-07-24 23:12:16.720075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.464 [2024-07-24 23:12:16.720076] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc170 is same with the state(5) to be set 00:26:44.464 [2024-07-24 23:12:16.720088] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aaaeb0 is same with the state(5) to be set 00:26:44.464 [2024-07-24 23:12:16.720097] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc170 is same with the state(5) to be set 00:26:44.464 [2024-07-24 23:12:16.720106] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x9fc170 is same with the state(5) to be set 00:26:44.464 [2024-07-24 23:12:16.720115] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc170 is same with the state(5) to be set 00:26:44.464 [2024-07-24 23:12:16.720124] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc170 is same with the state(5) to be set 00:26:44.464 [2024-07-24 23:12:16.720126] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.464 [2024-07-24 23:12:16.720133] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc170 is same with the state(5) to be set 00:26:44.464 [2024-07-24 23:12:16.720137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.464 [2024-07-24 23:12:16.720142] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc170 is same with the state(5) to be set 00:26:44.464 [2024-07-24 23:12:16.720147] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.464 [2024-07-24 23:12:16.720152] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc170 is same with the state(5) to be set 00:26:44.464 [2024-07-24 23:12:16.720157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.464 [2024-07-24 23:12:16.720161] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc170 is same with the state(5) to be set 00:26:44.464 [2024-07-24 23:12:16.720167] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.464 [2024-07-24 23:12:16.720171] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x9fc170 is same with the state(5) to be set 00:26:44.464 [2024-07-24 23:12:16.720177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.464 [2024-07-24 23:12:16.720180] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc170 is same with the state(5) to be set 00:26:44.464 [2024-07-24 23:12:16.720187] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.464 [2024-07-24 23:12:16.720190] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc170 is same with the state(5) to be set 00:26:44.464 [2024-07-24 23:12:16.720198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.464 [2024-07-24 23:12:16.720200] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc170 is same with the state(5) to be set 00:26:44.464 [2024-07-24 23:12:16.720209] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aff860 is same with the state(5) to be set 00:26:44.464 [2024-07-24 23:12:16.720210] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc170 is same with the state(5) to be set 00:26:44.464 [2024-07-24 23:12:16.720221] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc170 is same with the state(5) to be set 00:26:44.464 [2024-07-24 23:12:16.720230] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc170 is same with the state(5) to be set 00:26:44.464 [2024-07-24 23:12:16.720241] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc170 is same with the state(5) to be set 00:26:44.464 [2024-07-24 23:12:16.720250] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x9fc170 is same with the state(5) to be set 00:26:44.464 [2024-07-24 23:12:16.720259] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc170 is same with the state(5) to be set 00:26:44.464 [2024-07-24 23:12:16.720268] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc170 is same with the state(5) to be set 00:26:44.464 [2024-07-24 23:12:16.720276] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc170 is same with the state(5) to be set 00:26:44.464 [2024-07-24 23:12:16.720284] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc170 is same with the state(5) to be set 00:26:44.464 [2024-07-24 23:12:16.720293] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc170 is same with the state(5) to be set 00:26:44.464 [2024-07-24 23:12:16.720302] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc170 is same with the state(5) to be set 00:26:44.464 [2024-07-24 23:12:16.720310] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc170 is same with the state(5) to be set 00:26:44.464 [2024-07-24 23:12:16.720319] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc170 is same with the state(5) to be set 00:26:44.464 [2024-07-24 23:12:16.720327] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc170 is same with the state(5) to be set 00:26:44.464 [2024-07-24 23:12:16.720336] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc170 is same with the state(5) to be set 00:26:44.464 [2024-07-24 23:12:16.720345] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc170 is same with the state(5) to be set 00:26:44.464 [2024-07-24 23:12:16.720353] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc170 is same 
with the state(5) to be set 00:26:44.464 [2024-07-24 23:12:16.720362] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc170 is same with the state(5) to be set 00:26:44.464 [2024-07-24 23:12:16.720370] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc170 is same with the state(5) to be set 00:26:44.464 [2024-07-24 23:12:16.720379] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc170 is same with the state(5) to be set 00:26:44.464 [2024-07-24 23:12:16.721191] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc600 is same with the state(5) to be set 00:26:44.464 [2024-07-24 23:12:16.721211] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc600 is same with the state(5) to be set 00:26:44.464 [2024-07-24 23:12:16.721220] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc600 is same with the state(5) to be set 00:26:44.464 [2024-07-24 23:12:16.721229] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc600 is same with the state(5) to be set 00:26:44.464 [2024-07-24 23:12:16.721237] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc600 is same with the state(5) to be set 00:26:44.464 [2024-07-24 23:12:16.721246] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc600 is same with the state(5) to be set 00:26:44.464 [2024-07-24 23:12:16.721255] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc600 is same with the state(5) to be set 00:26:44.464 [2024-07-24 23:12:16.721264] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc600 is same with the state(5) to be set 00:26:44.464 [2024-07-24 23:12:16.721272] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc600 is same with the state(5) to be set 00:26:44.464 
[2024-07-24 23:12:16.721280] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc600 is same with the state(5) to be set 00:26:44.464 [2024-07-24 23:12:16.721289] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc600 is same with the state(5) to be set 00:26:44.464 [2024-07-24 23:12:16.721301] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc600 is same with the state(5) to be set 00:26:44.464 [2024-07-24 23:12:16.721310] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc600 is same with the state(5) to be set 00:26:44.464 [2024-07-24 23:12:16.721319] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc600 is same with the state(5) to be set 00:26:44.464 [2024-07-24 23:12:16.721328] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc600 is same with the state(5) to be set 00:26:44.464 [2024-07-24 23:12:16.721336] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc600 is same with the state(5) to be set 00:26:44.464 [2024-07-24 23:12:16.721345] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc600 is same with the state(5) to be set 00:26:44.464 [2024-07-24 23:12:16.721353] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc600 is same with the state(5) to be set 00:26:44.464 [2024-07-24 23:12:16.721362] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc600 is same with the state(5) to be set 00:26:44.464 [2024-07-24 23:12:16.721370] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc600 is same with the state(5) to be set 00:26:44.464 [2024-07-24 23:12:16.721379] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc600 is same with the state(5) to be set 00:26:44.464 [2024-07-24 23:12:16.721388] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc600 is same with the state(5) to be set 00:26:44.464 [2024-07-24 23:12:16.721396] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc600 is same with the state(5) to be set 00:26:44.464 [2024-07-24 23:12:16.721405] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc600 is same with the state(5) to be set 00:26:44.464 [2024-07-24 23:12:16.721413] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc600 is same with the state(5) to be set 00:26:44.465 [2024-07-24 23:12:16.721421] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc600 is same with the state(5) to be set 00:26:44.465 [2024-07-24 23:12:16.721430] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc600 is same with the state(5) to be set 00:26:44.465 [2024-07-24 23:12:16.721438] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc600 is same with the state(5) to be set 00:26:44.465 [2024-07-24 23:12:16.721447] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc600 is same with the state(5) to be set 00:26:44.465 [2024-07-24 23:12:16.721455] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc600 is same with the state(5) to be set 00:26:44.465 [2024-07-24 23:12:16.721464] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc600 is same with the state(5) to be set 00:26:44.465 [2024-07-24 23:12:16.721472] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc600 is same with the state(5) to be set 00:26:44.465 [2024-07-24 23:12:16.721483] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc600 is same with the state(5) to be set 00:26:44.465 [2024-07-24 23:12:16.721492] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x9fc600 is same with the state(5) to be set 00:26:44.465 [2024-07-24 23:12:16.721500] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc600 is same with the state(5) to be set 00:26:44.465 [2024-07-24 23:12:16.721509] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc600 is same with the state(5) to be set 00:26:44.465 [2024-07-24 23:12:16.721518] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc600 is same with the state(5) to be set 00:26:44.465 [2024-07-24 23:12:16.721526] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc600 is same with the state(5) to be set 00:26:44.465 [2024-07-24 23:12:16.721536] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc600 is same with the state(5) to be set 00:26:44.465 [2024-07-24 23:12:16.721545] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc600 is same with the state(5) to be set 00:26:44.465 [2024-07-24 23:12:16.721553] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc600 is same with the state(5) to be set 00:26:44.465 [2024-07-24 23:12:16.721562] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc600 is same with the state(5) to be set 00:26:44.465 [2024-07-24 23:12:16.721570] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc600 is same with the state(5) to be set 00:26:44.465 [2024-07-24 23:12:16.721579] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc600 is same with the state(5) to be set 00:26:44.465 [2024-07-24 23:12:16.721588] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc600 is same with the state(5) to be set 00:26:44.465 [2024-07-24 23:12:16.721596] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc600 
is same with the state(5) to be set 00:26:44.465 [2024-07-24 23:12:16.721605] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc600 is same with the state(5) to be set 00:26:44.465 [2024-07-24 23:12:16.721824] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.465 [2024-07-24 23:12:16.721859] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aaaeb0 (9): Bad file descriptor 00:26:44.465 [2024-07-24 23:12:16.722749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.465 [2024-07-24 23:12:16.722772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.465 [2024-07-24 23:12:16.722788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.465 [2024-07-24 23:12:16.722798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.465 [2024-07-24 23:12:16.722809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.465 [2024-07-24 23:12:16.722818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.465 [2024-07-24 23:12:16.722830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.465 [2024-07-24 23:12:16.722839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.465 [2024-07-24 23:12:16.722849] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.465 [2024-07-24 23:12:16.722858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
[... dozens of near-identical READ/WRITE command printouts (sqid:1, lba 34944-45312, len:128), each completed with "ABORTED - SQ DELETION (00/08)", elided ...]
00:26:44.466 [2024-07-24 23:12:16.724121] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1c67d20 was disconnected and freed. reset controller. 
00:26:44.466 [2024-07-24 23:12:16.724423] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fcab0 is same with the state(5) to be set 
[... the same "recv state of tqpair=0x9fcab0 is same with the state(5) to be set" error repeated many times, elided ...]
00:26:44.467 [2024-07-24 23:12:16.724551] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 
[... further repeats of the recv-state error interleaved with READ/WRITE command printouts (sqid:1, lba 29184-38528, len:128), each completed with "ABORTED - SQ DELETION (00/08)", elided ...]
00:26:44.468 [2024-07-24 23:12:16.730444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:31488 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.468 [2024-07-24 23:12:16.730478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.468 [2024-07-24 23:12:16.730515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.468 [2024-07-24 23:12:16.730553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.468 [2024-07-24 23:12:16.730591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.468 [2024-07-24 23:12:16.730626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.468 [2024-07-24 23:12:16.730662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.468 [2024-07-24 23:12:16.730696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.468 [2024-07-24 23:12:16.730761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.468 [2024-07-24 23:12:16.730846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.468 [2024-07-24 23:12:16.730904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.468 [2024-07-24 23:12:16.730940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.468 [2024-07-24 
23:12:16.730976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.468 [2024-07-24 23:12:16.731009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.468 [2024-07-24 23:12:16.731046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.468 [2024-07-24 23:12:16.731079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.468 [2024-07-24 23:12:16.731122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.468 [2024-07-24 23:12:16.731154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.468 [2024-07-24 23:12:16.731191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.468 [2024-07-24 23:12:16.731224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.468 [2024-07-24 23:12:16.731260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.468 [2024-07-24 23:12:16.731294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.468 [2024-07-24 23:12:16.731330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.468 [2024-07-24 23:12:16.731363] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.468 [2024-07-24 23:12:16.731400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.469 [2024-07-24 23:12:16.731433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.469 [2024-07-24 23:12:16.731470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.469 [2024-07-24 23:12:16.731502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.469 [2024-07-24 23:12:16.731537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.469 [2024-07-24 23:12:16.731576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.469 [2024-07-24 23:12:16.731614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.469 [2024-07-24 23:12:16.731647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.469 [2024-07-24 23:12:16.731685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.469 [2024-07-24 23:12:16.731720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.469 [2024-07-24 23:12:16.739512] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x9fcab0 is same with the state(5) to be set 00:26:44.469 [2024-07-24 23:12:16.739527] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fcab0 is same with the state(5) to be set 00:26:44.469 [2024-07-24 23:12:16.739539] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fcab0 is same with the state(5) to be set 00:26:44.469 [2024-07-24 23:12:16.739550] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fcab0 is same with the state(5) to be set 00:26:44.469 [2024-07-24 23:12:16.739562] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fcab0 is same with the state(5) to be set 00:26:44.469 [2024-07-24 23:12:16.739573] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fcab0 is same with the state(5) to be set 00:26:44.469 [2024-07-24 23:12:16.739585] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fcab0 is same with the state(5) to be set 00:26:44.469 [2024-07-24 23:12:16.739597] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fcab0 is same with the state(5) to be set 00:26:44.469 [2024-07-24 23:12:16.739608] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fcab0 is same with the state(5) to be set 00:26:44.469 [2024-07-24 23:12:16.739619] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fcab0 is same with the state(5) to be set 00:26:44.469 [2024-07-24 23:12:16.739631] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fcab0 is same with the state(5) to be set 00:26:44.469 [2024-07-24 23:12:16.739642] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fcab0 is same with the state(5) to be set 00:26:44.469 [2024-07-24 23:12:16.739654] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fcab0 is same with the state(5) 
to be set 00:26:44.469 [2024-07-24 23:12:16.739666] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fcab0 is same with the state(5) to be set 00:26:44.469 [2024-07-24 23:12:16.739677] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fcab0 is same with the state(5) to be set 00:26:44.469 [2024-07-24 23:12:16.739689] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fcab0 is same with the state(5) to be set 00:26:44.469 [2024-07-24 23:12:16.740748] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fcf40 is same with the state(5) to be set 00:26:44.469 [2024-07-24 23:12:16.740773] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fcf40 is same with the state(5) to be set 00:26:44.469 [2024-07-24 23:12:16.740782] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fcf40 is same with the state(5) to be set 00:26:44.469 [2024-07-24 23:12:16.740791] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fcf40 is same with the state(5) to be set 00:26:44.469 [2024-07-24 23:12:16.740799] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fcf40 is same with the state(5) to be set 00:26:44.469 [2024-07-24 23:12:16.740807] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fcf40 is same with the state(5) to be set 00:26:44.469 [2024-07-24 23:12:16.740816] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fcf40 is same with the state(5) to be set 00:26:44.469 [2024-07-24 23:12:16.740825] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fcf40 is same with the state(5) to be set 00:26:44.469 [2024-07-24 23:12:16.740833] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fcf40 is same with the state(5) to be set 00:26:44.469 [2024-07-24 
23:12:16.740841] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fcf40 is same with the state(5) to be set 00:26:44.469 [2024-07-24 23:12:16.740850] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fcf40 is same with the state(5) to be set 00:26:44.469 [2024-07-24 23:12:16.740858] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fcf40 is same with the state(5) to be set 00:26:44.469 [2024-07-24 23:12:16.740866] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fcf40 is same with the state(5) to be set 00:26:44.469 [2024-07-24 23:12:16.740875] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fcf40 is same with the state(5) to be set 00:26:44.469 [2024-07-24 23:12:16.740883] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fcf40 is same with the state(5) to be set 00:26:44.469 [2024-07-24 23:12:16.740891] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fcf40 is same with the state(5) to be set 00:26:44.469 [2024-07-24 23:12:16.740900] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fcf40 is same with the state(5) to be set 00:26:44.469 [2024-07-24 23:12:16.740908] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fcf40 is same with the state(5) to be set 00:26:44.469 [2024-07-24 23:12:16.740916] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fcf40 is same with the state(5) to be set 00:26:44.469 [2024-07-24 23:12:16.740925] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fcf40 is same with the state(5) to be set 00:26:44.469 [2024-07-24 23:12:16.740933] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fcf40 is same with the state(5) to be set 00:26:44.469 [2024-07-24 23:12:16.740941] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fcf40 is same with the state(5) to be set 00:26:44.469 [2024-07-24 23:12:16.740950] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fcf40 is same with the state(5) to be set 00:26:44.469 [2024-07-24 23:12:16.740958] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fcf40 is same with the state(5) to be set 00:26:44.469 [2024-07-24 23:12:16.740967] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fcf40 is same with the state(5) to be set 00:26:44.469 [2024-07-24 23:12:16.740976] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fcf40 is same with the state(5) to be set 00:26:44.469 [2024-07-24 23:12:16.740985] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fcf40 is same with the state(5) to be set 00:26:44.469 [2024-07-24 23:12:16.740994] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fcf40 is same with the state(5) to be set 00:26:44.469 [2024-07-24 23:12:16.741003] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fcf40 is same with the state(5) to be set 00:26:44.469 [2024-07-24 23:12:16.741013] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fcf40 is same with the state(5) to be set 00:26:44.469 [2024-07-24 23:12:16.741021] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fcf40 is same with the state(5) to be set 00:26:44.469 [2024-07-24 23:12:16.741029] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fcf40 is same with the state(5) to be set 00:26:44.469 [2024-07-24 23:12:16.741038] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fcf40 is same with the state(5) to be set 00:26:44.469 [2024-07-24 23:12:16.741047] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x9fcf40 is same with the state(5) to be set 00:26:44.469 [2024-07-24 23:12:16.741056] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fcf40 is same with the state(5) to be set 00:26:44.469 [2024-07-24 23:12:16.741064] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fcf40 is same with the state(5) to be set 00:26:44.469 [2024-07-24 23:12:16.741073] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fcf40 is same with the state(5) to be set 00:26:44.469 [2024-07-24 23:12:16.741464] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd3d0 is same with the state(5) to be set 00:26:44.469 [2024-07-24 23:12:16.741480] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd3d0 is same with the state(5) to be set 00:26:44.469 [2024-07-24 23:12:16.741488] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd3d0 is same with the state(5) to be set 00:26:44.469 [2024-07-24 23:12:16.741497] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd3d0 is same with the state(5) to be set 00:26:44.469 [2024-07-24 23:12:16.741505] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd3d0 is same with the state(5) to be set 00:26:44.469 [2024-07-24 23:12:16.741514] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd3d0 is same with the state(5) to be set 00:26:44.469 [2024-07-24 23:12:16.741523] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd3d0 is same with the state(5) to be set 00:26:44.469 [2024-07-24 23:12:16.741531] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd3d0 is same with the state(5) to be set 00:26:44.469 [2024-07-24 23:12:16.741540] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd3d0 
is same with the state(5) to be set 00:26:44.469 [2024-07-24 23:12:16.741548] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd3d0 is same with the state(5) to be set 00:26:44.469 [2024-07-24 23:12:16.741557] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd3d0 is same with the state(5) to be set 00:26:44.469 [2024-07-24 23:12:16.741565] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd3d0 is same with the state(5) to be set 00:26:44.469 [2024-07-24 23:12:16.741574] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd3d0 is same with the state(5) to be set 00:26:44.469 [2024-07-24 23:12:16.741582] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd3d0 is same with the state(5) to be set 00:26:44.469 [2024-07-24 23:12:16.741591] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd3d0 is same with the state(5) to be set 00:26:44.469 [2024-07-24 23:12:16.741600] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd3d0 is same with the state(5) to be set 00:26:44.469 [2024-07-24 23:12:16.741609] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd3d0 is same with the state(5) to be set 00:26:44.469 [2024-07-24 23:12:16.741617] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd3d0 is same with the state(5) to be set 00:26:44.469 [2024-07-24 23:12:16.741626] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd3d0 is same with the state(5) to be set 00:26:44.470 [2024-07-24 23:12:16.741637] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd3d0 is same with the state(5) to be set 00:26:44.470 [2024-07-24 23:12:16.741646] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd3d0 is same with the state(5) to be set 
00:26:44.470 [2024-07-24 23:12:16.741654] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd3d0 is same with the state(5) to be set 00:26:44.470 [2024-07-24 23:12:16.741663] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd3d0 is same with the state(5) to be set 00:26:44.470 [2024-07-24 23:12:16.741671] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd3d0 is same with the state(5) to be set 00:26:44.470 [2024-07-24 23:12:16.741679] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd3d0 is same with the state(5) to be set 00:26:44.470 [2024-07-24 23:12:16.741688] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd3d0 is same with the state(5) to be set 00:26:44.470 [2024-07-24 23:12:16.741696] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd3d0 is same with the state(5) to be set 00:26:44.470 [2024-07-24 23:12:16.741705] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd3d0 is same with the state(5) to be set 00:26:44.470 [2024-07-24 23:12:16.741718] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd3d0 is same with the state(5) to be set 00:26:44.470 [2024-07-24 23:12:16.741727] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd3d0 is same with the state(5) to be set 00:26:44.470 [2024-07-24 23:12:16.741735] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd3d0 is same with the state(5) to be set 00:26:44.470 [2024-07-24 23:12:16.741743] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd3d0 is same with the state(5) to be set 00:26:44.470 [2024-07-24 23:12:16.741752] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd3d0 is same with the state(5) to be set 00:26:44.470 [2024-07-24 23:12:16.741761] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd3d0 is same with the state(5) to be set 00:26:44.470 [2024-07-24 23:12:16.741769] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd3d0 is same with the state(5) to be set 00:26:44.470 [2024-07-24 23:12:16.741778] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd3d0 is same with the state(5) to be set 00:26:44.470 [2024-07-24 23:12:16.741787] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd3d0 is same with the state(5) to be set 00:26:44.470 [2024-07-24 23:12:16.741796] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd3d0 is same with the state(5) to be set 00:26:44.470 [2024-07-24 23:12:16.741804] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd3d0 is same with the state(5) to be set 00:26:44.470 [2024-07-24 23:12:16.741813] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd3d0 is same with the state(5) to be set 00:26:44.470 [2024-07-24 23:12:16.741821] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd3d0 is same with the state(5) to be set 00:26:44.470 [2024-07-24 23:12:16.741830] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd3d0 is same with the state(5) to be set 00:26:44.470 [2024-07-24 23:12:16.741839] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd3d0 is same with the state(5) to be set 00:26:44.470 [2024-07-24 23:12:16.741847] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd3d0 is same with the state(5) to be set 00:26:44.470 [2024-07-24 23:12:16.741855] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd3d0 is same with the state(5) to be set 00:26:44.470 [2024-07-24 23:12:16.741864] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x9fd3d0 is same with the state(5) to be set 00:26:44.470 [2024-07-24 23:12:16.741874] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd3d0 is same with the state(5) to be set 00:26:44.470 [2024-07-24 23:12:16.741883] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd3d0 is same with the state(5) to be set 00:26:44.470 [2024-07-24 23:12:16.741891] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd3d0 is same with the state(5) to be set 00:26:44.470 [2024-07-24 23:12:16.741900] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd3d0 is same with the state(5) to be set 00:26:44.470 [2024-07-24 23:12:16.741908] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd3d0 is same with the state(5) to be set 00:26:44.470 [2024-07-24 23:12:16.741917] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd3d0 is same with the state(5) to be set 00:26:44.470 [2024-07-24 23:12:16.741925] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd3d0 is same with the state(5) to be set 00:26:44.470 [2024-07-24 23:12:16.741934] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd3d0 is same with the state(5) to be set 00:26:44.470 [2024-07-24 23:12:16.741942] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd3d0 is same with the state(5) to be set 00:26:44.470 [2024-07-24 23:12:16.741951] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd3d0 is same with the state(5) to be set 00:26:44.470 [2024-07-24 23:12:16.741959] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd3d0 is same with the state(5) to be set 00:26:44.470 [2024-07-24 23:12:16.741968] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd3d0 
is same with the state(5) to be set 00:26:44.470 [2024-07-24 23:12:16.741977] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd3d0 is same with the state(5) to be set 00:26:44.470 [2024-07-24 23:12:16.741985] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd3d0 is same with the state(5) to be set 00:26:44.470 [2024-07-24 23:12:16.741993] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd3d0 is same with the state(5) to be set 00:26:44.470 [2024-07-24 23:12:16.742001] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd3d0 is same with the state(5) to be set 00:26:44.470 [2024-07-24 23:12:16.742010] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd3d0 is same with the state(5) to be set 00:26:44.470 [2024-07-24 23:12:16.745388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.470 [2024-07-24 23:12:16.745420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.470 [2024-07-24 23:12:16.745438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.470 [2024-07-24 23:12:16.745451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.470 [2024-07-24 23:12:16.745466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.470 [2024-07-24 23:12:16.745478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.470 [2024-07-24 23:12:16.745492] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.470 [2024-07-24 23:12:16.745504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.470 [2024-07-24 23:12:16.745519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.470 [2024-07-24 23:12:16.745531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.470 [2024-07-24 23:12:16.745550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.470 [2024-07-24 23:12:16.745562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.470 [2024-07-24 23:12:16.745577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.470 [2024-07-24 23:12:16.745589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.470 [2024-07-24 23:12:16.745603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.470 [2024-07-24 23:12:16.745615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.470 [2024-07-24 23:12:16.745629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.470 [2024-07-24 23:12:16.745642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.470 [2024-07-24 23:12:16.745656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.470 [2024-07-24 23:12:16.745668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.470 [2024-07-24 23:12:16.745682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.470 [2024-07-24 23:12:16.745694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.470 [2024-07-24 23:12:16.745708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.470 [2024-07-24 23:12:16.745742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.470 [2024-07-24 23:12:16.745756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.470 [2024-07-24 23:12:16.745768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.470 [2024-07-24 23:12:16.745782] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c6d460 is same with the state(5) to be set 00:26:44.470 [2024-07-24 23:12:16.746178] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1c6d460 was disconnected and freed. reset controller. 
00:26:44.470 [2024-07-24 23:12:16.746203] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:26:44.470 [2024-07-24 23:12:16.746263] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c70120 (9): Bad file descriptor 00:26:44.470 [2024-07-24 23:12:16.746325] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ad5aa0 (9): Bad file descriptor 00:26:44.471 [2024-07-24 23:12:16.746351] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aadc10 (9): Bad file descriptor 00:26:44.471 [2024-07-24 23:12:16.746388] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.471 [2024-07-24 23:12:16.746402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.471 [2024-07-24 23:12:16.746415] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.471 [2024-07-24 23:12:16.746431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.471 [2024-07-24 23:12:16.746444] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.471 [2024-07-24 23:12:16.746456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.471 [2024-07-24 23:12:16.746469] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.471 [2024-07-24 23:12:16.746481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:26:44.471 [2024-07-24 23:12:16.746493] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab9980 is same with the state(5) to be set 00:26:44.471 [2024-07-24 23:12:16.746524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.471 [2024-07-24 23:12:16.746538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.471 [2024-07-24 23:12:16.746551] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.471 [2024-07-24 23:12:16.746563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.471 [2024-07-24 23:12:16.746576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.471 [2024-07-24 23:12:16.746588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.471 [2024-07-24 23:12:16.746600] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.471 [2024-07-24 23:12:16.746613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.471 [2024-07-24 23:12:16.746624] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b07ee0 is same with the state(5) to be set 00:26:44.471 [2024-07-24 23:12:16.746647] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ab8560 (9): Bad file descriptor 00:26:44.471 [2024-07-24 23:12:16.746679] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.471 [2024-07-24 23:12:16.746693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.471 [2024-07-24 23:12:16.746706] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.471 [2024-07-24 23:12:16.746727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.471 [2024-07-24 23:12:16.746740] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.471 [2024-07-24 23:12:16.746752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.471 [2024-07-24 23:12:16.746765] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.471 [2024-07-24 23:12:16.746777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.471 [2024-07-24 23:12:16.746789] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b06340 is same with the state(5) to be set 00:26:44.471 [2024-07-24 23:12:16.746827] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.471 [2024-07-24 23:12:16.746843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.471 [2024-07-24 23:12:16.746856] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:26:44.471 [2024-07-24 23:12:16.746868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.471 [2024-07-24 23:12:16.746881] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.471 [2024-07-24 23:12:16.746893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.471 [2024-07-24 23:12:16.746906] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.471 [2024-07-24 23:12:16.746918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.471 [2024-07-24 23:12:16.746929] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a05a40 is same with the state(5) to be set 00:26:44.471 [2024-07-24 23:12:16.746952] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aff860 (9): Bad file descriptor 00:26:44.471 [2024-07-24 23:12:16.746971] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aaaeb0 (9): Bad file descriptor 00:26:44.471 [2024-07-24 23:12:16.747086] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:26:44.471 [2024-07-24 23:12:16.747146] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:26:44.471 [2024-07-24 23:12:16.748468] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:26:44.471 [2024-07-24 23:12:16.748546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.471 [2024-07-24 23:12:16.748562] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.471 [2024-07-24 23:12:16.748579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.471 [2024-07-24 23:12:16.748592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.471 [2024-07-24 23:12:16.748606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.471 [2024-07-24 23:12:16.748619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.471 [2024-07-24 23:12:16.748634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.471 [2024-07-24 23:12:16.748646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.471 [2024-07-24 23:12:16.748660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.471 [2024-07-24 23:12:16.748672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.471 [2024-07-24 23:12:16.748687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.471 [2024-07-24 23:12:16.748699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.471 [2024-07-24 23:12:16.748713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 
lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.471 [2024-07-24 23:12:16.748735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.471 [2024-07-24 23:12:16.748750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.471 [2024-07-24 23:12:16.748762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.471 [2024-07-24 23:12:16.748777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.471 [2024-07-24 23:12:16.748789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.471 [2024-07-24 23:12:16.748804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.471 [2024-07-24 23:12:16.748815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.471 [2024-07-24 23:12:16.748830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.471 [2024-07-24 23:12:16.748842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.471 [2024-07-24 23:12:16.748857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.471 [2024-07-24 23:12:16.748869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:44.471 [2024-07-24 23:12:16.748883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:40960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.471 [2024-07-24 23:12:16.748896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.471 [2024-07-24 23:12:16.748910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.472 [2024-07-24 23:12:16.748922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.472 [2024-07-24 23:12:16.748936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.472 [2024-07-24 23:12:16.748949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.472 [2024-07-24 23:12:16.748963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.472 [2024-07-24 23:12:16.748987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.472 [2024-07-24 23:12:16.749002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.472 [2024-07-24 23:12:16.749015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.472 [2024-07-24 23:12:16.749029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.472 [2024-07-24 23:12:16.749042] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.472 [2024-07-24 23:12:16.749057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.472 [2024-07-24 23:12:16.749069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.472 [2024-07-24 23:12:16.749087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:41088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.472 [2024-07-24 23:12:16.749099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.472 [2024-07-24 23:12:16.749114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:41216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.472 [2024-07-24 23:12:16.749127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.472 [2024-07-24 23:12:16.749142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.472 [2024-07-24 23:12:16.749154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.472 [2024-07-24 23:12:16.749169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.472 [2024-07-24 23:12:16.749182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.472 [2024-07-24 23:12:16.749196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:43 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.472 [2024-07-24 23:12:16.749209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.472 [2024-07-24 23:12:16.749224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:41344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.472 [2024-07-24 23:12:16.749237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.472 [2024-07-24 23:12:16.749252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:41472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.472 [2024-07-24 23:12:16.749265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.472 [2024-07-24 23:12:16.749279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.472 [2024-07-24 23:12:16.749292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.472 [2024-07-24 23:12:16.749307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.472 [2024-07-24 23:12:16.749319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.472 [2024-07-24 23:12:16.749334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.472 [2024-07-24 23:12:16.749347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:26:44.472 [2024-07-24 23:12:16.749361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:41600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.472 [2024-07-24 23:12:16.749374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.472 [2024-07-24 23:12:16.749389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:41728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.472 [2024-07-24 23:12:16.749401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.472 [2024-07-24 23:12:16.749416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.472 [2024-07-24 23:12:16.749430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.472 [2024-07-24 23:12:16.749445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.472 [2024-07-24 23:12:16.749458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.472 [2024-07-24 23:12:16.749472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.472 [2024-07-24 23:12:16.749485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.472 [2024-07-24 23:12:16.749500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.472 [2024-07-24 
23:12:16.749512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.472 [2024-07-24 23:12:16.749527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:41856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.472 [2024-07-24 23:12:16.749539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.472 [2024-07-24 23:12:16.749554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:41984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.472 [2024-07-24 23:12:16.749567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.472 [2024-07-24 23:12:16.749581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:42112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.472 [2024-07-24 23:12:16.749594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.472 [2024-07-24 23:12:16.749609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:42240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.472 [2024-07-24 23:12:16.749622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.472 [2024-07-24 23:12:16.749636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:42368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.472 [2024-07-24 23:12:16.749649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.472 [2024-07-24 23:12:16.749664] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:42496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.472 [2024-07-24 23:12:16.749676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.472 [2024-07-24 23:12:16.749691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:42624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.472 [2024-07-24 23:12:16.749704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.472 [2024-07-24 23:12:16.749723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:42752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.473 [2024-07-24 23:12:16.749736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.473 [2024-07-24 23:12:16.749751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:42880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.473 [2024-07-24 23:12:16.749763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.473 [2024-07-24 23:12:16.749781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:43008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.473 [2024-07-24 23:12:16.749794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.473 [2024-07-24 23:12:16.749809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:43136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.473 [2024-07-24 23:12:16.749821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.473 [2024-07-24 23:12:16.749836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:43264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.473 [2024-07-24 23:12:16.749849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.473 [2024-07-24 23:12:16.749864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:43392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.473 [2024-07-24 23:12:16.749877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.473 [2024-07-24 23:12:16.749892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:43520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.473 [2024-07-24 23:12:16.749905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.473 [2024-07-24 23:12:16.749919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:43648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.473 [2024-07-24 23:12:16.749932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.473 [2024-07-24 23:12:16.749947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:43776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.473 [2024-07-24 23:12:16.749960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.473 [2024-07-24 23:12:16.749974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:43904 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:26:44.473 [2024-07-24 23:12:16.749987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.473 [2024-07-24 23:12:16.750002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:44032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.473 [2024-07-24 23:12:16.750014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.473 [2024-07-24 23:12:16.750029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:44160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.473 [2024-07-24 23:12:16.750042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.473 [2024-07-24 23:12:16.750056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:44288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.473 [2024-07-24 23:12:16.750069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.473 [2024-07-24 23:12:16.750084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:44416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.473 [2024-07-24 23:12:16.750096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.473 [2024-07-24 23:12:16.750111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:44544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.473 [2024-07-24 23:12:16.750126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.473 [2024-07-24 
23:12:16.750141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:44672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.473 [2024-07-24 23:12:16.750153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.473 [2024-07-24 23:12:16.750168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:44800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.473 [2024-07-24 23:12:16.750181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.473 [2024-07-24 23:12:16.750195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:44928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.473 [2024-07-24 23:12:16.750208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.473 [2024-07-24 23:12:16.750223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:45056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.473 [2024-07-24 23:12:16.750236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.473 [2024-07-24 23:12:16.750250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:45184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.473 [2024-07-24 23:12:16.750263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.473 [2024-07-24 23:12:16.750278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:45312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.473 [2024-07-24 23:12:16.750291] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.473 [2024-07-24 23:12:16.750305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.473 [2024-07-24 23:12:16.750318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.473 [2024-07-24 23:12:16.750332] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bea8b0 is same with the state(5) to be set 00:26:44.473 [2024-07-24 23:12:16.750393] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1bea8b0 was disconnected and freed. reset controller. 00:26:44.473 [2024-07-24 23:12:16.751352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.473 [2024-07-24 23:12:16.751571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.473 [2024-07-24 23:12:16.751587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c70120 with addr=10.0.0.2, port=4420 00:26:44.473 [2024-07-24 23:12:16.751601] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c70120 is same with the state(5) to be set 00:26:44.473 [2024-07-24 23:12:16.751814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.473 [2024-07-24 23:12:16.752102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.473 [2024-07-24 23:12:16.752117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aff860 with addr=10.0.0.2, port=4420 00:26:44.473 [2024-07-24 23:12:16.752130] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aff860 is same with the state(5) to be set 00:26:44.473 [2024-07-24 23:12:16.752143] 
nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:44.473 [2024-07-24 23:12:16.752156] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:44.473 [2024-07-24 23:12:16.752170] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:44.473 [2024-07-24 23:12:16.753954] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:44.473 [2024-07-24 23:12:16.753982] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:26:44.473 [2024-07-24 23:12:16.754015] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c70120 (9): Bad file descriptor
00:26:44.473 [2024-07-24 23:12:16.754034] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aff860 (9): Bad file descriptor
00:26:44.473 [2024-07-24 23:12:16.754863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.473 [2024-07-24 23:12:16.755112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.473 [2024-07-24 23:12:16.755129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aadc10 with addr=10.0.0.2, port=4420
00:26:44.473 [2024-07-24 23:12:16.755144] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aadc10 is same with the state(5) to be set
00:26:44.473 [2024-07-24 23:12:16.755158] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:26:44.473 [2024-07-24 23:12:16.755171] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:26:44.473 [2024-07-24 23:12:16.755184] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:26:44.473 [2024-07-24 23:12:16.755203] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:26:44.473 [2024-07-24 23:12:16.755215] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:26:44.473 [2024-07-24 23:12:16.755228] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:26:44.473 [2024-07-24 23:12:16.755581] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:26:44.473 [2024-07-24 23:12:16.755637] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:26:44.473 [2024-07-24 23:12:16.755692] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:26:44.473 [2024-07-24 23:12:16.755727] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:44.473 [2024-07-24 23:12:16.755740] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:44.474 [2024-07-24 23:12:16.755756] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aadc10 (9): Bad file descriptor
00:26:44.474 [2024-07-24 23:12:16.755842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.474 [2024-07-24 23:12:16.755860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.474 [2024-07-24 23:12:16.755878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.474 [2024-07-24 23:12:16.755892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.474 [2024-07-24 23:12:16.755907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.474 [2024-07-24 23:12:16.755919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.474 [2024-07-24 23:12:16.755934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.474 [2024-07-24 23:12:16.755947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.474 [2024-07-24 23:12:16.755962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.474 [2024-07-24 23:12:16.755979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.474 [2024-07-24 23:12:16.755994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:40960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.474 [2024-07-24 23:12:16.756007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.474 [2024-07-24 23:12:16.756022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.474 [2024-07-24 23:12:16.756035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.474 [2024-07-24 23:12:16.756050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.474 [2024-07-24 23:12:16.756063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.474 [2024-07-24 23:12:16.756077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.474 [2024-07-24 23:12:16.756090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.474 [2024-07-24 23:12:16.756105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.474 [2024-07-24 23:12:16.756118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.474 [2024-07-24 23:12:16.756132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.474 [2024-07-24 23:12:16.756145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.474 [2024-07-24 23:12:16.756160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.474 [2024-07-24 23:12:16.756173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.474 [2024-07-24 23:12:16.756188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.474 [2024-07-24 23:12:16.756200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.474 [2024-07-24 23:12:16.756215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.474 [2024-07-24 23:12:16.756228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.474 [2024-07-24 23:12:16.756243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:41088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.474 [2024-07-24 23:12:16.756255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.474 [2024-07-24 23:12:16.756270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:41216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.474 [2024-07-24 23:12:16.756283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.474 [2024-07-24 23:12:16.756298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:41344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.474 [2024-07-24 23:12:16.756310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.474 [2024-07-24 23:12:16.756327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.474 [2024-07-24 23:12:16.756340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.474 [2024-07-24 23:12:16.756355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.474 [2024-07-24 23:12:16.756368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.474 [2024-07-24 23:12:16.756383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.474 [2024-07-24 23:12:16.756395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.474 [2024-07-24 23:12:16.756410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.474 [2024-07-24 23:12:16.756423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.474 [2024-07-24 23:12:16.756438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.474 [2024-07-24 23:12:16.756451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.474 [2024-07-24 23:12:16.756466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.474 [2024-07-24 23:12:16.756479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.474 [2024-07-24 23:12:16.756494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.474 [2024-07-24 23:12:16.756506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.474 [2024-07-24 23:12:16.756521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.474 [2024-07-24 23:12:16.756534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.474 [2024-07-24 23:12:16.756549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:41472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.474 [2024-07-24 23:12:16.756562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.474 [2024-07-24 23:12:16.756576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.474 [2024-07-24 23:12:16.756589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.474 [2024-07-24 23:12:16.756604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.474 [2024-07-24 23:12:16.756617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.474 [2024-07-24 23:12:16.756631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.474 [2024-07-24 23:12:16.756645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.474 [2024-07-24 23:12:16.756660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.474 [2024-07-24 23:12:16.756674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.474 [2024-07-24 23:12:16.756689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:41600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.474 [2024-07-24 23:12:16.756701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.474 [2024-07-24 23:12:16.756724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:41728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.474 [2024-07-24 23:12:16.756738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.474 [2024-07-24 23:12:16.756752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.474 [2024-07-24 23:12:16.756765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.474 [2024-07-24 23:12:16.756780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:41856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.474 [2024-07-24 23:12:16.756793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.474 [2024-07-24 23:12:16.756807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.474 [2024-07-24 23:12:16.756820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.474 [2024-07-24 23:12:16.756835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:41984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.474 [2024-07-24 23:12:16.756848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.474 [2024-07-24 23:12:16.756863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.474 [2024-07-24 23:12:16.756875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.474 [2024-07-24 23:12:16.756890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.474 [2024-07-24 23:12:16.756903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.475 [2024-07-24 23:12:16.756917] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c6a8e0 is same with the state(5) to be set
00:26:44.475 [2024-07-24 23:12:16.757000] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1c6a8e0 was disconnected and freed. reset controller.
00:26:44.475 [2024-07-24 23:12:16.757040] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:26:44.475 [2024-07-24 23:12:16.757053] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:26:44.475 [2024-07-24 23:12:16.757066] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:26:44.475 [2024-07-24 23:12:16.757095] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ab9980 (9): Bad file descriptor
00:26:44.475 [2024-07-24 23:12:16.757119] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b07ee0 (9): Bad file descriptor
00:26:44.475 [2024-07-24 23:12:16.757149] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b06340 (9): Bad file descriptor
00:26:44.475 [2024-07-24 23:12:16.757175] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a05a40 (9): Bad file descriptor
00:26:44.475 [2024-07-24 23:12:16.758315] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:44.475 [2024-07-24 23:12:16.758345] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:26:44.475 [2024-07-24 23:12:16.758416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.475 [2024-07-24 23:12:16.758432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.475 [2024-07-24 23:12:16.758449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.475 [2024-07-24 23:12:16.758462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.475 [2024-07-24 23:12:16.758476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.475 [2024-07-24 23:12:16.758489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.475 [2024-07-24 23:12:16.758504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.475 [2024-07-24 23:12:16.758517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.475 [2024-07-24 23:12:16.758532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.475 [2024-07-24 23:12:16.758545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.475 [2024-07-24 23:12:16.758559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.475 [2024-07-24 23:12:16.758572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.475 [2024-07-24 23:12:16.758587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.475 [2024-07-24 23:12:16.758600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.475 [2024-07-24 23:12:16.758614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.475 [2024-07-24 23:12:16.758627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.475 [2024-07-24 23:12:16.758642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.475 [2024-07-24 23:12:16.758655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.475 [2024-07-24 23:12:16.758669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.475 [2024-07-24 23:12:16.758682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.475 [2024-07-24 23:12:16.758697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.475 [2024-07-24 23:12:16.758709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.475 [2024-07-24 23:12:16.758730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.475 [2024-07-24 23:12:16.758743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.475 [2024-07-24 23:12:16.758762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.475 [2024-07-24 23:12:16.758774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.475 [2024-07-24 23:12:16.758789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.475 [2024-07-24 23:12:16.758801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.475 [2024-07-24 23:12:16.758816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.475 [2024-07-24 23:12:16.758829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.475 [2024-07-24 23:12:16.758844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.475 [2024-07-24 23:12:16.758857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.475 [2024-07-24 23:12:16.758871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.475 [2024-07-24 23:12:16.758884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.475 [2024-07-24 23:12:16.758899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.475 [2024-07-24 23:12:16.758912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.475 [2024-07-24 23:12:16.758926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.475 [2024-07-24 23:12:16.758939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.475 [2024-07-24 23:12:16.758954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.475 [2024-07-24 23:12:16.758966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.475 [2024-07-24 23:12:16.758981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.475 [2024-07-24 23:12:16.758994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.475 [2024-07-24 23:12:16.759019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.475 [2024-07-24 23:12:16.759030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.475 [2024-07-24 23:12:16.759042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.475 [2024-07-24 23:12:16.759052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.475 [2024-07-24 23:12:16.759064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.475 [2024-07-24 23:12:16.759074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.475 [2024-07-24 23:12:16.759086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.475 [2024-07-24 23:12:16.759098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.475 [2024-07-24 23:12:16.759111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.476 [2024-07-24 23:12:16.759121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.476 [2024-07-24 23:12:16.759133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.476 [2024-07-24 23:12:16.759143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.476 [2024-07-24 23:12:16.759155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.476 [2024-07-24 23:12:16.759166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.476 [2024-07-24 23:12:16.759178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.476 [2024-07-24 23:12:16.759188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.476 [2024-07-24 23:12:16.759200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.476 [2024-07-24 23:12:16.759210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.476 [2024-07-24 23:12:16.759222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.476 [2024-07-24 23:12:16.759232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.476 [2024-07-24 23:12:16.759245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.476 [2024-07-24 23:12:16.759255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.476 [2024-07-24 23:12:16.759266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.476 [2024-07-24 23:12:16.759277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.476 [2024-07-24 23:12:16.759289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.476 [2024-07-24 23:12:16.759299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.476 [2024-07-24 23:12:16.759311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.476 [2024-07-24 23:12:16.759321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.476 [2024-07-24 23:12:16.759333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.476 [2024-07-24 23:12:16.759343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.476 [2024-07-24 23:12:16.759355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.476 [2024-07-24 23:12:16.759365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.476 [2024-07-24 23:12:16.759382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.476 [2024-07-24 23:12:16.759392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.476 [2024-07-24 23:12:16.759405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.476 [2024-07-24 23:12:16.759415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.476 [2024-07-24 23:12:16.759427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.476 [2024-07-24 23:12:16.759437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.476 [2024-07-24 23:12:16.759449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.476 [2024-07-24 23:12:16.759461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.476 [2024-07-24 23:12:16.759473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.476 [2024-07-24 23:12:16.759484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.476 [2024-07-24 23:12:16.759496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.476 [2024-07-24 23:12:16.759507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.476 [2024-07-24 23:12:16.759519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.476 [2024-07-24 23:12:16.759530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.476 [2024-07-24 23:12:16.759543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.476 [2024-07-24 23:12:16.759553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.476 [2024-07-24 23:12:16.759565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.476 [2024-07-24 23:12:16.759575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.476 [2024-07-24 23:12:16.759587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.476 [2024-07-24 23:12:16.759597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.476 [2024-07-24 23:12:16.759609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.476 [2024-07-24 23:12:16.759619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.476 [2024-07-24 23:12:16.759632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.476 [2024-07-24 23:12:16.759642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.476 [2024-07-24 23:12:16.759654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.476 [2024-07-24 23:12:16.759666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.476 [2024-07-24 23:12:16.759678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.476 [2024-07-24 23:12:16.759688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.476 [2024-07-24 23:12:16.759700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.476 [2024-07-24 23:12:16.759711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.476 [2024-07-24 23:12:16.759727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.476 [2024-07-24 23:12:16.759739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.476 [2024-07-24 23:12:16.759751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.476 [2024-07-24 23:12:16.759761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.476 [2024-07-24 23:12:16.759774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.476 [2024-07-24 23:12:16.759784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.476 [2024-07-24 23:12:16.759796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.476 [2024-07-24 23:12:16.759807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.476 [2024-07-24 
23:12:16.759819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.476 [2024-07-24 23:12:16.759829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.476 [2024-07-24 23:12:16.759841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.476 [2024-07-24 23:12:16.759852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.476 [2024-07-24 23:12:16.759865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.476 [2024-07-24 23:12:16.759875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.476 [2024-07-24 23:12:16.759887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.476 [2024-07-24 23:12:16.759897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.476 [2024-07-24 23:12:16.759909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.476 [2024-07-24 23:12:16.759919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.476 [2024-07-24 23:12:16.759931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.476 [2024-07-24 23:12:16.759942] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.476 [2024-07-24 23:12:16.759955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.476 [2024-07-24 23:12:16.759966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.476 [2024-07-24 23:12:16.759977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.476 [2024-07-24 23:12:16.759988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.476 [2024-07-24 23:12:16.759999] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be9390 is same with the state(5) to be set 00:26:44.477 [2024-07-24 23:12:16.761041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:41088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.477 [2024-07-24 23:12:16.761056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.477 [2024-07-24 23:12:16.761070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:41216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.477 [2024-07-24 23:12:16.761081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.477 [2024-07-24 23:12:16.761094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:41344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.477 [2024-07-24 23:12:16.761104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.477 [2024-07-24 23:12:16.761117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:41472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.477 [2024-07-24 23:12:16.761127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.477 [2024-07-24 23:12:16.761139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.477 [2024-07-24 23:12:16.761150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.477 [2024-07-24 23:12:16.761162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.477 [2024-07-24 23:12:16.761172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.477 [2024-07-24 23:12:16.761184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.477 [2024-07-24 23:12:16.761195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.477 [2024-07-24 23:12:16.761207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.477 [2024-07-24 23:12:16.761217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.477 [2024-07-24 23:12:16.761229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:44.477 [2024-07-24 23:12:16.761239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.477 [2024-07-24 23:12:16.761251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.477 [2024-07-24 23:12:16.761262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.477 [2024-07-24 23:12:16.761276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.477 [2024-07-24 23:12:16.761286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.477 [2024-07-24 23:12:16.761298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.477 [2024-07-24 23:12:16.761309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.477 [2024-07-24 23:12:16.761321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.477 [2024-07-24 23:12:16.761331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.477 [2024-07-24 23:12:16.761343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.477 [2024-07-24 23:12:16.761354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.477 [2024-07-24 23:12:16.761366] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:41600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.477 [2024-07-24 23:12:16.761376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.477 [2024-07-24 23:12:16.761388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:41728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.477 [2024-07-24 23:12:16.761399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.477 [2024-07-24 23:12:16.761410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.477 [2024-07-24 23:12:16.761421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.477 [2024-07-24 23:12:16.761433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.477 [2024-07-24 23:12:16.761443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.477 [2024-07-24 23:12:16.761455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.477 [2024-07-24 23:12:16.761465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.477 [2024-07-24 23:12:16.761477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.477 [2024-07-24 23:12:16.761488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.477 [2024-07-24 23:12:16.761500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.477 [2024-07-24 23:12:16.761510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.477 [2024-07-24 23:12:16.761522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.477 [2024-07-24 23:12:16.761532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.477 [2024-07-24 23:12:16.761544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:41856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.477 [2024-07-24 23:12:16.761556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.477 [2024-07-24 23:12:16.761568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:41984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.477 [2024-07-24 23:12:16.761578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.477 [2024-07-24 23:12:16.761590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.477 [2024-07-24 23:12:16.761600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.477 [2024-07-24 23:12:16.761613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:44.477 [2024-07-24 23:12:16.761623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.477 [2024-07-24 23:12:16.761635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.477 [2024-07-24 23:12:16.761645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.477 [2024-07-24 23:12:16.761657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.477 [2024-07-24 23:12:16.761668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.477 [2024-07-24 23:12:16.761680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.477 [2024-07-24 23:12:16.761690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.477 [2024-07-24 23:12:16.761702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.477 [2024-07-24 23:12:16.761712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.477 [2024-07-24 23:12:16.761729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:40960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.477 [2024-07-24 23:12:16.761739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.477 [2024-07-24 23:12:16.761752] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:42112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.477 [2024-07-24 23:12:16.761762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.478 [2024-07-24 23:12:16.761773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:42240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.478 [2024-07-24 23:12:16.761784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.478 [2024-07-24 23:12:16.761796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:42368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.478 [2024-07-24 23:12:16.761806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.478 [2024-07-24 23:12:16.761818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:42496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.478 [2024-07-24 23:12:16.761828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.478 [2024-07-24 23:12:16.761842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:42624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.478 [2024-07-24 23:12:16.761853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.478 [2024-07-24 23:12:16.761865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:42752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.478 [2024-07-24 23:12:16.761875] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.478 [2024-07-24 23:12:16.761887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:42880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.478 [2024-07-24 23:12:16.761897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.478 [2024-07-24 23:12:16.761909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:43008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.478 [2024-07-24 23:12:16.761919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.478 [2024-07-24 23:12:16.761932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:43136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.478 [2024-07-24 23:12:16.761942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.478 [2024-07-24 23:12:16.761954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:43264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.478 [2024-07-24 23:12:16.761964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.478 [2024-07-24 23:12:16.761976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:43392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.478 [2024-07-24 23:12:16.761987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.478 [2024-07-24 23:12:16.761999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:43520 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.478 [2024-07-24 23:12:16.762009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.478 [2024-07-24 23:12:16.762021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:43648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.478 [2024-07-24 23:12:16.762031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.478 [2024-07-24 23:12:16.762043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:43776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.478 [2024-07-24 23:12:16.762054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.478 [2024-07-24 23:12:16.762066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:43904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.478 [2024-07-24 23:12:16.762076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.478 [2024-07-24 23:12:16.762089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:44032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.478 [2024-07-24 23:12:16.762099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.478 [2024-07-24 23:12:16.762111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:44160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.478 [2024-07-24 23:12:16.762123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.478 [2024-07-24 
23:12:16.762135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:44288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.478 [2024-07-24 23:12:16.762145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.478 [2024-07-24 23:12:16.762157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:44416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.478 [2024-07-24 23:12:16.762167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.478 [2024-07-24 23:12:16.762180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:44544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.478 [2024-07-24 23:12:16.762190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.478 [2024-07-24 23:12:16.762202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:44672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.478 [2024-07-24 23:12:16.762212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.478 [2024-07-24 23:12:16.762224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.478 [2024-07-24 23:12:16.762234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.478 [2024-07-24 23:12:16.762246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:44928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.478 [2024-07-24 23:12:16.762256] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.478 [2024-07-24 23:12:16.762268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:45056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.478 [2024-07-24 23:12:16.762278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.478 [2024-07-24 23:12:16.762290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:45184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.478 [2024-07-24 23:12:16.762301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.478 [2024-07-24 23:12:16.762312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:45312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.478 [2024-07-24 23:12:16.762323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.478 [2024-07-24 23:12:16.762335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:45440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.478 [2024-07-24 23:12:16.762345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.478 [2024-07-24 23:12:16.762357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:45568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.478 [2024-07-24 23:12:16.762367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.478 [2024-07-24 23:12:16.762379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:37 nsid:1 lba:45696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.478 [2024-07-24 23:12:16.762390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.478 [2024-07-24 23:12:16.762404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:45824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.478 [2024-07-24 23:12:16.762414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.478 [2024-07-24 23:12:16.762426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:45952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.478 [2024-07-24 23:12:16.762436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.478 [2024-07-24 23:12:16.762448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:46080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.478 [2024-07-24 23:12:16.762458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.478 [2024-07-24 23:12:16.762470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:46208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.478 [2024-07-24 23:12:16.762481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.478 [2024-07-24 23:12:16.762491] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1beb730 is same with the state(5) to be set 00:26:44.478 [2024-07-24 23:12:16.763529] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.478 [2024-07-24 
23:12:16.763548] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:26:44.478 [2024-07-24 23:12:16.763560] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:26:44.478 [2024-07-24 23:12:16.763843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.478 [2024-07-24 23:12:16.764095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.478 [2024-07-24 23:12:16.764110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a05a40 with addr=10.0.0.2, port=4420 00:26:44.478 [2024-07-24 23:12:16.764121] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a05a40 is same with the state(5) to be set 00:26:44.478 [2024-07-24 23:12:16.764707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.478 [2024-07-24 23:12:16.764961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.478 [2024-07-24 23:12:16.764975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aaaeb0 with addr=10.0.0.2, port=4420 00:26:44.478 [2024-07-24 23:12:16.764985] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aaaeb0 is same with the state(5) to be set 00:26:44.478 [2024-07-24 23:12:16.765178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.478 [2024-07-24 23:12:16.765287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.478 [2024-07-24 23:12:16.765300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ab8560 with addr=10.0.0.2, port=4420 00:26:44.478 [2024-07-24 23:12:16.765310] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab8560 is same with the state(5) to be set 00:26:44.478 [2024-07-24 23:12:16.765571] 
posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.478 [2024-07-24 23:12:16.765747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.479 [2024-07-24 23:12:16.765761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ad5aa0 with addr=10.0.0.2, port=4420 00:26:44.479 [2024-07-24 23:12:16.765771] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad5aa0 is same with the state(5) to be set 00:26:44.479 [2024-07-24 23:12:16.765785] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a05a40 (9): Bad file descriptor 00:26:44.479 [2024-07-24 23:12:16.766284] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:26:44.479 [2024-07-24 23:12:16.766305] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:26:44.479 [2024-07-24 23:12:16.766335] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aaaeb0 (9): Bad file descriptor 00:26:44.479 [2024-07-24 23:12:16.766349] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ab8560 (9): Bad file descriptor 00:26:44.479 [2024-07-24 23:12:16.766362] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ad5aa0 (9): Bad file descriptor 00:26:44.479 [2024-07-24 23:12:16.766373] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:26:44.479 [2024-07-24 23:12:16.766383] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:26:44.479 [2024-07-24 23:12:16.766394] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 
00:26:44.479 [2024-07-24 23:12:16.766445] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:26:44.479 [2024-07-24 23:12:16.766458] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:44.479 [2024-07-24 23:12:16.766778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.479 [2024-07-24 23:12:16.767019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.479 [2024-07-24 23:12:16.767034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aff860 with addr=10.0.0.2, port=4420
00:26:44.479 [2024-07-24 23:12:16.767044] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aff860 is same with the state(5) to be set
00:26:44.479 [2024-07-24 23:12:16.767228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.479 [2024-07-24 23:12:16.767470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.479 [2024-07-24 23:12:16.767484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c70120 with addr=10.0.0.2, port=4420
00:26:44.479 [2024-07-24 23:12:16.767494] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c70120 is same with the state(5) to be set
00:26:44.479 [2024-07-24 23:12:16.767505] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:44.479 [2024-07-24 23:12:16.767514] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:44.479 [2024-07-24 23:12:16.767525] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:44.479 [2024-07-24 23:12:16.767538] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:26:44.479 [2024-07-24 23:12:16.767548] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:26:44.479 [2024-07-24 23:12:16.767558] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:26:44.479 [2024-07-24 23:12:16.767570] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:26:44.479 [2024-07-24 23:12:16.767580] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:26:44.479 [2024-07-24 23:12:16.767589] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:26:44.479 [2024-07-24 23:12:16.767635] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:44.479 [2024-07-24 23:12:16.767645] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:44.479 [2024-07-24 23:12:16.767654] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:44.479 [2024-07-24 23:12:16.767941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.479 [2024-07-24 23:12:16.768111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.479 [2024-07-24 23:12:16.768129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aadc10 with addr=10.0.0.2, port=4420
00:26:44.479 [2024-07-24 23:12:16.768140] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aadc10 is same with the state(5) to be set
00:26:44.479 [2024-07-24 23:12:16.768153] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aff860 (9): Bad file descriptor
00:26:44.479 [2024-07-24 23:12:16.768166] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c70120 (9): Bad file descriptor
00:26:44.479 [2024-07-24 23:12:16.768235] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aadc10 (9): Bad file descriptor
00:26:44.479 [2024-07-24 23:12:16.768249] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:26:44.479 [2024-07-24 23:12:16.768259] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:26:44.479 [2024-07-24 23:12:16.768270] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:26:44.479 [2024-07-24 23:12:16.768282] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:26:44.479 [2024-07-24 23:12:16.768292] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:26:44.479 [2024-07-24 23:12:16.768302] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:26:44.479 [2024-07-24 23:12:16.768352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.479 [2024-07-24 23:12:16.768365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.479 [2024-07-24 23:12:16.768379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.479 [2024-07-24 23:12:16.768390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.479 [2024-07-24 23:12:16.768402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.479 [2024-07-24 23:12:16.768413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.479 [2024-07-24 23:12:16.768426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.479 [2024-07-24 23:12:16.768436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.479 [2024-07-24 23:12:16.768448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.479 [2024-07-24 23:12:16.768458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.479 [2024-07-24 23:12:16.768470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.479 [2024-07-24 23:12:16.768481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.479 [2024-07-24 23:12:16.768493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.479 [2024-07-24 23:12:16.768503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.479 [2024-07-24 23:12:16.768515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.479 [2024-07-24 23:12:16.768529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.479 [2024-07-24 23:12:16.768541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.479 [2024-07-24 23:12:16.768551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.479 [2024-07-24 23:12:16.768563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.479 [2024-07-24 23:12:16.768574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.479 [2024-07-24 23:12:16.768586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.479 [2024-07-24 23:12:16.768597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.479 [2024-07-24 23:12:16.768609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.479 [2024-07-24 23:12:16.768619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.479 [2024-07-24 23:12:16.768631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.479 [2024-07-24 23:12:16.768642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.479 [2024-07-24 23:12:16.768654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.479 [2024-07-24 23:12:16.768664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.479 [2024-07-24 23:12:16.768676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.479 [2024-07-24 23:12:16.768686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.479 [2024-07-24 23:12:16.768698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.479 [2024-07-24 23:12:16.768709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.479 [2024-07-24 23:12:16.768727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.479 [2024-07-24 23:12:16.768737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.479 [2024-07-24 23:12:16.768749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.479 [2024-07-24 23:12:16.768759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.479 [2024-07-24 23:12:16.768771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.479 [2024-07-24 23:12:16.768781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.479 [2024-07-24 23:12:16.768793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.480 [2024-07-24 23:12:16.768804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.480 [2024-07-24 23:12:16.768816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.480 [2024-07-24 23:12:16.768829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.480 [2024-07-24 23:12:16.768841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.480 [2024-07-24 23:12:16.768852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.480 [2024-07-24 23:12:16.768863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.480 [2024-07-24 23:12:16.768874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.480 [2024-07-24 23:12:16.768887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.480 [2024-07-24 23:12:16.768897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.480 [2024-07-24 23:12:16.768909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.480 [2024-07-24 23:12:16.768919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.480 [2024-07-24 23:12:16.768931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.480 [2024-07-24 23:12:16.768941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.480 [2024-07-24 23:12:16.768954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.480 [2024-07-24 23:12:16.768964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.480 [2024-07-24 23:12:16.768976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.480 [2024-07-24 23:12:16.768987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.480 [2024-07-24 23:12:16.769000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.480 [2024-07-24 23:12:16.769011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.480 [2024-07-24 23:12:16.769023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.480 [2024-07-24 23:12:16.769034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.480 [2024-07-24 23:12:16.769046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.480 [2024-07-24 23:12:16.769057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.480 [2024-07-24 23:12:16.769069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.480 [2024-07-24 23:12:16.769079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.480 [2024-07-24 23:12:16.769103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.480 [2024-07-24 23:12:16.769112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.480 [2024-07-24 23:12:16.769124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.480 [2024-07-24 23:12:16.769134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.480 [2024-07-24 23:12:16.769144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.480 [2024-07-24 23:12:16.769154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.480 [2024-07-24 23:12:16.769164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.480 [2024-07-24 23:12:16.769174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.480 [2024-07-24 23:12:16.769184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.480 [2024-07-24 23:12:16.769194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.480 [2024-07-24 23:12:16.769204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.480 [2024-07-24 23:12:16.769213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.480 [2024-07-24 23:12:16.769224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.480 [2024-07-24 23:12:16.769233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.480 [2024-07-24 23:12:16.769243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.480 [2024-07-24 23:12:16.769253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.480 [2024-07-24 23:12:16.769264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.480 [2024-07-24 23:12:16.769273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.480 [2024-07-24 23:12:16.769283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.480 [2024-07-24 23:12:16.769292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.480 [2024-07-24 23:12:16.769303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.480 [2024-07-24 23:12:16.769312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.480 [2024-07-24 23:12:16.769322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.480 [2024-07-24 23:12:16.769331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.480 [2024-07-24 23:12:16.769342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.480 [2024-07-24 23:12:16.769351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.480 [2024-07-24 23:12:16.769362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.480 [2024-07-24 23:12:16.769372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.480 [2024-07-24 23:12:16.769384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.480 [2024-07-24 23:12:16.769394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.480 [2024-07-24 23:12:16.769405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.480 [2024-07-24 23:12:16.769414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.480 [2024-07-24 23:12:16.769424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.480 [2024-07-24 23:12:16.769434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.480 [2024-07-24 23:12:16.769444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.480 [2024-07-24 23:12:16.769453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.480 [2024-07-24 23:12:16.769464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.480 [2024-07-24 23:12:16.769473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.480 [2024-07-24 23:12:16.769484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.480 [2024-07-24 23:12:16.769493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.480 [2024-07-24 23:12:16.769503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.480 [2024-07-24 23:12:16.769513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.480 [2024-07-24 23:12:16.769523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.480 [2024-07-24 23:12:16.769532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.480 [2024-07-24 23:12:16.769542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.480 [2024-07-24 23:12:16.769552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.480 [2024-07-24 23:12:16.769563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.480 [2024-07-24 23:12:16.769572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.480 [2024-07-24 23:12:16.769583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.480 [2024-07-24 23:12:16.769592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.480 [2024-07-24 23:12:16.769603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.480 [2024-07-24 23:12:16.769611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.480 [2024-07-24 23:12:16.769624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.481 [2024-07-24 23:12:16.769633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.481 [2024-07-24 23:12:16.769643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.481 [2024-07-24 23:12:16.769652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.481 [2024-07-24 23:12:16.769663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.481 [2024-07-24 23:12:16.769671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.481 [2024-07-24 23:12:16.769682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.481 [2024-07-24 23:12:16.769691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.481 [2024-07-24 23:12:16.769702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.481 [2024-07-24 23:12:16.769711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.481 [2024-07-24 23:12:16.769729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.481 [2024-07-24 23:12:16.769738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.481 [2024-07-24 23:12:16.769748] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c66740 is same with the state(5) to be set
00:26:44.481 [2024-07-24 23:12:16.770685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.481 [2024-07-24 23:12:16.770701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.481 [2024-07-24 23:12:16.770720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.481 [2024-07-24 23:12:16.770730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.481 [2024-07-24 23:12:16.770741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.481 [2024-07-24 23:12:16.770750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.481 [2024-07-24 23:12:16.770761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.481 [2024-07-24 23:12:16.770770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.481 [2024-07-24 23:12:16.770781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.481 [2024-07-24 23:12:16.770790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.481 [2024-07-24 23:12:16.770801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.481 [2024-07-24 23:12:16.770810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.481 [2024-07-24 23:12:16.770826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.481 [2024-07-24 23:12:16.770835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.481 [2024-07-24 23:12:16.770846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.481 [2024-07-24 23:12:16.770855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.481 [2024-07-24 23:12:16.770866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.481 [2024-07-24 23:12:16.770875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.481 [2024-07-24 23:12:16.770886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.481 [2024-07-24 23:12:16.770895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.481 [2024-07-24 23:12:16.770906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.481 [2024-07-24 23:12:16.770915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.481 [2024-07-24 23:12:16.770926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.481 [2024-07-24 23:12:16.770935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.481 [2024-07-24 23:12:16.770946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.481 [2024-07-24 23:12:16.770955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.481 [2024-07-24 23:12:16.770966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.481 [2024-07-24 23:12:16.770976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.481 [2024-07-24 23:12:16.770987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.481 [2024-07-24 23:12:16.770996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.481 [2024-07-24 23:12:16.771007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.481 [2024-07-24 23:12:16.771016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.481 [2024-07-24 23:12:16.771027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.481 [2024-07-24 23:12:16.771036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.481 [2024-07-24 23:12:16.771047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.481 [2024-07-24 23:12:16.771056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.481 [2024-07-24 23:12:16.771066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.481 [2024-07-24 23:12:16.771078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.481 [2024-07-24 23:12:16.771088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.481 [2024-07-24 23:12:16.771097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.481 [2024-07-24 23:12:16.771108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.481 [2024-07-24 23:12:16.771117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.481 [2024-07-24 23:12:16.771128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.481 [2024-07-24 23:12:16.771137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.481 [2024-07-24 23:12:16.771149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.481 [2024-07-24 23:12:16.771158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.481 [2024-07-24 23:12:16.771169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.481 [2024-07-24 23:12:16.771178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.481 [2024-07-24 23:12:16.771189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.481 [2024-07-24 23:12:16.771199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.481 [2024-07-24 23:12:16.771210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.481 [2024-07-24 23:12:16.771219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.481 [2024-07-24 23:12:16.771230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.482 [2024-07-24 23:12:16.771239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.482 [2024-07-24 23:12:16.771250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.482 [2024-07-24 23:12:16.771259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.482 [2024-07-24 23:12:16.771270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.482 [2024-07-24 23:12:16.771279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.482 [2024-07-24 23:12:16.771290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.482 [2024-07-24 23:12:16.771299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.482 [2024-07-24 23:12:16.771309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.482 [2024-07-24 23:12:16.771319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.482 [2024-07-24 23:12:16.771330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:44.482 [2024-07-24 23:12:16.771339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.482 [2024-07-24 23:12:16.771350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.482 [2024-07-24 23:12:16.771359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.482 [2024-07-24 23:12:16.771370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.482 [2024-07-24 23:12:16.771379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.482 [2024-07-24 23:12:16.771390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.482 [2024-07-24 23:12:16.771399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.482 [2024-07-24 23:12:16.771409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.482 [2024-07-24 23:12:16.771418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.482 [2024-07-24 23:12:16.771429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.482 [2024-07-24 23:12:16.771438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.482 [2024-07-24 23:12:16.771449] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.482 [2024-07-24 23:12:16.771458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.482 [2024-07-24 23:12:16.771469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.482 [2024-07-24 23:12:16.771477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.482 [2024-07-24 23:12:16.771488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.482 [2024-07-24 23:12:16.771498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.482 [2024-07-24 23:12:16.771509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.482 [2024-07-24 23:12:16.771518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.482 [2024-07-24 23:12:16.771528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.482 [2024-07-24 23:12:16.771538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.482 [2024-07-24 23:12:16.771548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.482 [2024-07-24 23:12:16.771557] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.482 [2024-07-24 23:12:16.771568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.482 [2024-07-24 23:12:16.771579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.482 [2024-07-24 23:12:16.771590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.482 [2024-07-24 23:12:16.771600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.482 [2024-07-24 23:12:16.771611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.482 [2024-07-24 23:12:16.771620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.482 [2024-07-24 23:12:16.771631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.482 [2024-07-24 23:12:16.771640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.482 [2024-07-24 23:12:16.771651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.482 [2024-07-24 23:12:16.771660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.482 [2024-07-24 23:12:16.771672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:38272 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.482 [2024-07-24 23:12:16.771681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.482 [2024-07-24 23:12:16.771692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.482 [2024-07-24 23:12:16.771701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.482 [2024-07-24 23:12:16.771712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.482 [2024-07-24 23:12:16.771726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.482 [2024-07-24 23:12:16.771736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.482 [2024-07-24 23:12:16.771746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.482 [2024-07-24 23:12:16.771756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.482 [2024-07-24 23:12:16.771765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.482 [2024-07-24 23:12:16.771776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.482 [2024-07-24 23:12:16.771785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.482 [2024-07-24 
23:12:16.771796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.482 [2024-07-24 23:12:16.771805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.482 [2024-07-24 23:12:16.771816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.482 [2024-07-24 23:12:16.771825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.482 [2024-07-24 23:12:16.771837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.482 [2024-07-24 23:12:16.771846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.482 [2024-07-24 23:12:16.771856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.482 [2024-07-24 23:12:16.771865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.482 [2024-07-24 23:12:16.771876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.482 [2024-07-24 23:12:16.771885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.482 [2024-07-24 23:12:16.771896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.482 [2024-07-24 23:12:16.771905] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.482 [2024-07-24 23:12:16.771916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.482 [2024-07-24 23:12:16.771925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.482 [2024-07-24 23:12:16.771936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.482 [2024-07-24 23:12:16.771945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.482 [2024-07-24 23:12:16.771955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.482 [2024-07-24 23:12:16.771965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.482 [2024-07-24 23:12:16.771975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.482 [2024-07-24 23:12:16.771984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.482 [2024-07-24 23:12:16.771994] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69300 is same with the state(5) to be set 00:26:44.482 [2024-07-24 23:12:16.772925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.482 [2024-07-24 23:12:16.772940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.482 [2024-07-24 23:12:16.772952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.483 [2024-07-24 23:12:16.772961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.483 [2024-07-24 23:12:16.772972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.483 [2024-07-24 23:12:16.772981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.483 [2024-07-24 23:12:16.772992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.483 [2024-07-24 23:12:16.773001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.483 [2024-07-24 23:12:16.773015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.483 [2024-07-24 23:12:16.773024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.483 [2024-07-24 23:12:16.773034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:40960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.483 [2024-07-24 23:12:16.773043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.483 [2024-07-24 23:12:16.773054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:44.483 [2024-07-24 23:12:16.773064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.483 [2024-07-24 23:12:16.773075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:41088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.483 [2024-07-24 23:12:16.773084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.483 [2024-07-24 23:12:16.773095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.483 [2024-07-24 23:12:16.773104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.483 [2024-07-24 23:12:16.773114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.483 [2024-07-24 23:12:16.773123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.483 [2024-07-24 23:12:16.773134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.483 [2024-07-24 23:12:16.773145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.483 [2024-07-24 23:12:16.773156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.483 [2024-07-24 23:12:16.773165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.483 [2024-07-24 23:12:16.773176] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.483 [2024-07-24 23:12:16.773185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.483 [2024-07-24 23:12:16.773196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.483 [2024-07-24 23:12:16.773205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.483 [2024-07-24 23:12:16.773215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.483 [2024-07-24 23:12:16.773224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.483 [2024-07-24 23:12:16.773235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.483 [2024-07-24 23:12:16.773244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.483 [2024-07-24 23:12:16.773255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.483 [2024-07-24 23:12:16.773266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.483 [2024-07-24 23:12:16.773277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.483 [2024-07-24 23:12:16.773285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.483 [2024-07-24 23:12:16.773296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.483 [2024-07-24 23:12:16.773305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.483 [2024-07-24 23:12:16.773316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.483 [2024-07-24 23:12:16.773325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.483 [2024-07-24 23:12:16.773335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.483 [2024-07-24 23:12:16.773344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.483 [2024-07-24 23:12:16.773356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.483 [2024-07-24 23:12:16.773365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.483 [2024-07-24 23:12:16.773375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.483 [2024-07-24 23:12:16.773384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.483 [2024-07-24 23:12:16.773395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:41216 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:44.483 [2024-07-24 23:12:16.773404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.483 [2024-07-24 23:12:16.773414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.483 [2024-07-24 23:12:16.773423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.483 [2024-07-24 23:12:16.773434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.483 [2024-07-24 23:12:16.773443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.483 [2024-07-24 23:12:16.773454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:41344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.483 [2024-07-24 23:12:16.773463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.483 [2024-07-24 23:12:16.773473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.483 [2024-07-24 23:12:16.773482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.483 [2024-07-24 23:12:16.773493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.483 [2024-07-24 23:12:16.773502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.483 [2024-07-24 23:12:16.773514] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.483 [2024-07-24 23:12:16.773523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.483 [2024-07-24 23:12:16.773534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.483 [2024-07-24 23:12:16.773543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.483 [2024-07-24 23:12:16.773554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.483 [2024-07-24 23:12:16.773563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.483 [2024-07-24 23:12:16.773574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.483 [2024-07-24 23:12:16.773582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.483 [2024-07-24 23:12:16.773593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:41472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.483 [2024-07-24 23:12:16.773602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.483 [2024-07-24 23:12:16.773613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:41600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.483 [2024-07-24 23:12:16.773622] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.483 [2024-07-24 23:12:16.773633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:41728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.483 [2024-07-24 23:12:16.773642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.483 [2024-07-24 23:12:16.773653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:41856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.483 [2024-07-24 23:12:16.773662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.483 [2024-07-24 23:12:16.773672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:41984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.483 [2024-07-24 23:12:16.773681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.483 [2024-07-24 23:12:16.773692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:42112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.483 [2024-07-24 23:12:16.773701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.483 [2024-07-24 23:12:16.773712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:42240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.483 [2024-07-24 23:12:16.773724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.483 [2024-07-24 23:12:16.773735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:42368 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.483 [2024-07-24 23:12:16.773744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.484 [2024-07-24 23:12:16.773754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:42496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.484 [2024-07-24 23:12:16.773764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.484 [2024-07-24 23:12:16.773775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:42624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.484 [2024-07-24 23:12:16.773784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.484 [2024-07-24 23:12:16.773795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:42752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.484 [2024-07-24 23:12:16.773804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.484 [2024-07-24 23:12:16.773814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:42880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.484 [2024-07-24 23:12:16.773823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.484 [2024-07-24 23:12:16.773834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:43008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.484 [2024-07-24 23:12:16.773843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.484 [2024-07-24 23:12:16.773854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:43136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.484 [2024-07-24 23:12:16.773863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.484 [2024-07-24 23:12:16.773874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:43264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.484 [2024-07-24 23:12:16.773882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.484 [2024-07-24 23:12:16.773894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:43392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.484 [2024-07-24 23:12:16.773902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.484 [2024-07-24 23:12:16.773913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:43520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.484 [2024-07-24 23:12:16.773923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.484 [2024-07-24 23:12:16.773934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:43648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.484 [2024-07-24 23:12:16.773943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.484 [2024-07-24 23:12:16.773954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:43776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.484 [2024-07-24 23:12:16.773963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.484 [2024-07-24 23:12:16.773974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:43904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.484 [2024-07-24 23:12:16.773983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.484 [2024-07-24 23:12:16.773993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:44032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.484 [2024-07-24 23:12:16.774002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.484 [2024-07-24 23:12:16.774014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:44160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.484 [2024-07-24 23:12:16.774024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.484 [2024-07-24 23:12:16.774035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:44288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.484 [2024-07-24 23:12:16.774044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.484 [2024-07-24 23:12:16.774054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:44416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.484 [2024-07-24 23:12:16.774063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.484 [2024-07-24 23:12:16.774074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:44544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.484 [2024-07-24 23:12:16.774083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.484 [2024-07-24 23:12:16.774094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:44672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.484 [2024-07-24 23:12:16.774103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.484 [2024-07-24 23:12:16.774114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:44800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.484 [2024-07-24 23:12:16.774123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.484 [2024-07-24 23:12:16.774134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:44928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.484 [2024-07-24 23:12:16.774143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.484 [2024-07-24 23:12:16.774153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:45056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.484 [2024-07-24 23:12:16.774162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.484 [2024-07-24 23:12:16.774173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:45184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.484 [2024-07-24 23:12:16.774182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.484 [2024-07-24 23:12:16.774193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:45312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.484 [2024-07-24 23:12:16.774202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.484 [2024-07-24 23:12:16.774212] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c6be80 is same with the state(5) to be set
00:26:44.484 [2024-07-24 23:12:16.775890] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:44.484 [2024-07-24 23:12:16.775907] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:44.484 [2024-07-24 23:12:16.775916] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:26:44.484 [2024-07-24 23:12:16.775928] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:26:44.484 task offset: 34816 on job bdev=Nvme1n1 fails
00:26:44.484
00:26:44.484 Latency(us)
00:26:44.484 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:44.484 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:44.484 Job: Nvme1n1 ended in about 0.63 seconds with error
00:26:44.484 Verification LBA range: start 0x0 length 0x400
00:26:44.484 Nvme1n1 : 0.63 401.40 25.09 102.35 0.00 126228.58 37119.59 121634.82
00:26:44.484 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:44.484 Job: Nvme2n1 ended in about 0.67 seconds with error
00:26:44.484 Verification LBA range: start 0x0 length 0x400
00:26:44.484 Nvme2n1 : 0.67 383.45 23.97 96.24 0.00 131366.85 25165.82 129184.56
00:26:44.484 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:44.484 Job: Nvme3n1 ended in about 0.66 seconds with error
00:26:44.484 Verification LBA range: start 0x0 length 0x400
00:26:44.484 Nvme3n1 : 0.66 442.72 27.67 97.37 0.00 115528.85 45508.20 96468.99
00:26:44.484 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:44.484 Job: Nvme4n1 ended in about 0.67 seconds with error
00:26:44.484 Verification LBA range: start 0x0 length 0x400
00:26:44.484 Nvme4n1 : 0.67 446.44 27.90 95.88 0.00 114013.23 34393.29 91016.40
00:26:44.484 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:44.484 Job: Nvme5n1 ended in about 0.67 seconds with error
00:26:44.484 Verification LBA range: start 0x0 length 0x400
00:26:44.484 Nvme5n1 : 0.67 373.49 23.34 94.85 0.00 130843.70 8650.75 130023.42
00:26:44.484 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:44.484 Job: Nvme6n1 ended in about 0.63 seconds with error
00:26:44.484 Verification LBA range: start 0x0 length 0x400
00:26:44.484 Nvme6n1 : 0.63 461.99 28.87 101.61 0.00 107279.59 12268.34 93113.55
00:26:44.484 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:44.484 Job: Nvme7n1 ended in about 0.68 seconds with error
00:26:44.484 Verification LBA range: start 0x0 length 0x400
00:26:44.484 Nvme7n1 : 0.68 370.78 23.17 94.54 0.00 129244.48 73819.75 103179.88
00:26:44.484 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:44.484 Job: Nvme8n1 ended in about 0.66 seconds with error
00:26:44.484 Verification LBA range: start 0x0 length 0x400
00:26:44.484 Nvme8n1 : 0.66 439.44 27.47 57.38 0.00 116858.74 3827.30 117440.51
00:26:44.484 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:44.484 Job: Nvme9n1 ended in about 0.68 seconds with error
00:26:44.484 Verification LBA range: start 0x0 length 0x400
00:26:44.484 Nvme9n1 : 0.68 428.46 26.78 94.23 0.00 112842.14 46976.20 119118.23
00:26:44.484 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:44.484 Job: Nvme10n1 ended in about 0.65 seconds with error
00:26:44.484 Verification LBA range: start 0x0 length 0x400
00:26:44.484 Nvme10n1 : 0.65 387.79 24.24 98.10 0.00 119741.76 27892.12 125829.12
00:26:44.484 ===================================================================================================================
00:26:44.484 Total : 4135.96 258.50 932.55 0.00 120000.02 3827.30 130023.42
00:26:44.484 [2024-07-24 23:12:16.797767] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:26:44.484 [2024-07-24 23:12:16.797806] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:26:44.484 [2024-07-24 23:12:16.797854] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:26:44.485 [2024-07-24 23:12:16.797864] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:26:44.485 [2024-07-24 23:12:16.797875] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:26:44.485 [2024-07-24 23:12:16.797989] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:44.485 [2024-07-24 23:12:16.798354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.485 [2024-07-24 23:12:16.798614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.485 [2024-07-24 23:12:16.798628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ab9980 with addr=10.0.0.2, port=4420
00:26:44.485 [2024-07-24 23:12:16.798641] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab9980 is same with the state(5) to be set
00:26:44.485 [2024-07-24 23:12:16.798885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.485 [2024-07-24 23:12:16.799126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.485 [2024-07-24 23:12:16.799138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b07ee0 with addr=10.0.0.2, port=4420
00:26:44.485 [2024-07-24 23:12:16.799147] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b07ee0 is same with the state(5) to be set
00:26:44.485 [2024-07-24 23:12:16.799335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.485 [2024-07-24 23:12:16.799522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.485 [2024-07-24 23:12:16.799535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b06340 with addr=10.0.0.2, port=4420
00:26:44.485 [2024-07-24 23:12:16.799544] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b06340 is same with the state(5) to be set
00:26:44.485 [2024-07-24 23:12:16.799587] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:26:44.485 [2024-07-24 23:12:16.799602] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:26:44.485 [2024-07-24 23:12:16.799614] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:26:44.485 [2024-07-24 23:12:16.799626] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:26:44.485 [2024-07-24 23:12:16.800270] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:26:44.485 [2024-07-24 23:12:16.800285] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:26:44.485 [2024-07-24 23:12:16.800296] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:26:44.485 [2024-07-24 23:12:16.800306] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:44.485 [2024-07-24 23:12:16.800364] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ab9980 (9): Bad file descriptor
00:26:44.485 [2024-07-24 23:12:16.800378] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b07ee0 (9): Bad file descriptor
00:26:44.485 [2024-07-24 23:12:16.800390] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b06340 (9): Bad file descriptor
00:26:44.485 [2024-07-24 23:12:16.800437] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:26:44.485 [2024-07-24 23:12:16.800449] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:26:44.485 [2024-07-24 23:12:16.800459] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:26:44.485 [2024-07-24 23:12:16.800651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.485 [2024-07-24 23:12:16.800843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.485 [2024-07-24 23:12:16.800856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a05a40 with addr=10.0.0.2, port=4420
00:26:44.485 [2024-07-24 23:12:16.800865] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a05a40 is same with the state(5) to be set
00:26:44.485 [2024-07-24 23:12:16.801183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.485 [2024-07-24 23:12:16.801282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.485 [2024-07-24 23:12:16.801297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ad5aa0 with addr=10.0.0.2, port=4420
00:26:44.485 [2024-07-24 23:12:16.801306] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad5aa0 is same with the state(5) to be set
00:26:44.485 [2024-07-24 23:12:16.801493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.485 [2024-07-24 23:12:16.801742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.485 [2024-07-24 23:12:16.801755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ab8560 with addr=10.0.0.2, port=4420
00:26:44.485 [2024-07-24 23:12:16.801764] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab8560 is same with the state(5) to be set
00:26:44.485 [2024-07-24 23:12:16.801984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.485 [2024-07-24 23:12:16.802176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.485 [2024-07-24 23:12:16.802189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aaaeb0 with addr=10.0.0.2, port=4420
00:26:44.485 [2024-07-24 23:12:16.802198] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aaaeb0 is same with the state(5) to be set
00:26:44.485 [2024-07-24 23:12:16.802208] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state
00:26:44.485 [2024-07-24 23:12:16.802216] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed
00:26:44.485 [2024-07-24 23:12:16.802226] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
00:26:44.485 [2024-07-24 23:12:16.802239] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state
00:26:44.485 [2024-07-24 23:12:16.802247] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed
00:26:44.485 [2024-07-24 23:12:16.802256] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state.
00:26:44.485 [2024-07-24 23:12:16.802267] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state
00:26:44.485 [2024-07-24 23:12:16.802275] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed
00:26:44.485 [2024-07-24 23:12:16.802284] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state.
00:26:44.485 [2024-07-24 23:12:16.802338] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:44.485 [2024-07-24 23:12:16.802348] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:44.485 [2024-07-24 23:12:16.802356] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:44.485 [2024-07-24 23:12:16.802656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.485 [2024-07-24 23:12:16.802879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.485 [2024-07-24 23:12:16.802892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c70120 with addr=10.0.0.2, port=4420
00:26:44.485 [2024-07-24 23:12:16.802901] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c70120 is same with the state(5) to be set
00:26:44.485 [2024-07-24 23:12:16.803084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.485 [2024-07-24 23:12:16.803369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.485 [2024-07-24 23:12:16.803381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aff860 with addr=10.0.0.2, port=4420
00:26:44.485 [2024-07-24 23:12:16.803390] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aff860 is same with the state(5) to be set
00:26:44.485 [2024-07-24 23:12:16.803677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.485 [2024-07-24 23:12:16.803914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.485 [2024-07-24 23:12:16.803929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aadc10 with addr=10.0.0.2, port=4420
00:26:44.485 [2024-07-24 23:12:16.803939] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aadc10 is same with the state(5) to be set
00:26:44.485 [2024-07-24 23:12:16.803950] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a05a40 (9): Bad file descriptor
00:26:44.485 [2024-07-24 23:12:16.803962] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ad5aa0 (9): Bad file descriptor
00:26:44.485 [2024-07-24 23:12:16.803974] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ab8560 (9): Bad file descriptor
00:26:44.485 [2024-07-24 23:12:16.803984] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aaaeb0 (9): Bad file descriptor
00:26:44.485 [2024-07-24 23:12:16.804023] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c70120 (9): Bad file descriptor
00:26:44.485 [2024-07-24 23:12:16.804036] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aff860 (9): Bad file descriptor
00:26:44.485 [2024-07-24 23:12:16.804048] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aadc10 (9): Bad file descriptor
00:26:44.485 [2024-07-24 23:12:16.804058] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state
00:26:44.485 [2024-07-24 23:12:16.804066] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed
00:26:44.485 [2024-07-24 23:12:16.804076] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state.
00:26:44.485 [2024-07-24 23:12:16.804086] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:26:44.485 [2024-07-24 23:12:16.804095] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:26:44.485 [2024-07-24 23:12:16.804104] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:26:44.485 [2024-07-24 23:12:16.804114] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:26:44.485 [2024-07-24 23:12:16.804122] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:26:44.485 [2024-07-24 23:12:16.804131] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:26:44.485 [2024-07-24 23:12:16.804141] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:44.485 [2024-07-24 23:12:16.804149] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:44.485 [2024-07-24 23:12:16.804158] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:44.485 [2024-07-24 23:12:16.804186] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:44.485 [2024-07-24 23:12:16.804195] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:44.485 [2024-07-24 23:12:16.804202] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:44.485 [2024-07-24 23:12:16.804210] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:44.485 [2024-07-24 23:12:16.804217] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:26:44.485 [2024-07-24 23:12:16.804226] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:26:44.486 [2024-07-24 23:12:16.804234] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:26:44.486 [2024-07-24 23:12:16.804244] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:26:44.486 [2024-07-24 23:12:16.804252] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:26:44.486 [2024-07-24 23:12:16.804263] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:26:44.486 [2024-07-24 23:12:16.804273] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:26:44.486 [2024-07-24 23:12:16.804282] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:26:44.486 [2024-07-24 23:12:16.804290] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:26:44.486 [2024-07-24 23:12:16.804315] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:44.486 [2024-07-24 23:12:16.804323] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:44.486 [2024-07-24 23:12:16.804331] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:44.745 23:12:17 -- target/shutdown.sh@135 -- # nvmfpid= 00:26:44.745 23:12:17 -- target/shutdown.sh@138 -- # sleep 1 00:26:46.125 23:12:18 -- target/shutdown.sh@141 -- # kill -9 3326731 00:26:46.125 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 141: kill: (3326731) - No such process 00:26:46.125 23:12:18 -- target/shutdown.sh@141 -- # true 00:26:46.125 23:12:18 -- target/shutdown.sh@143 -- # stoptarget 00:26:46.125 23:12:18 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:26:46.125 23:12:18 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:26:46.125 23:12:18 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:46.125 23:12:18 -- target/shutdown.sh@45 -- # nvmftestfini 00:26:46.125 23:12:18 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:46.125 23:12:18 -- nvmf/common.sh@116 -- # sync 00:26:46.125 23:12:18 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:46.125 23:12:18 -- nvmf/common.sh@119 -- # set +e 00:26:46.125 23:12:18 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:46.125 23:12:18 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:46.125 rmmod nvme_tcp 00:26:46.125 rmmod nvme_fabrics 00:26:46.125 rmmod nvme_keyring 00:26:46.125 23:12:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:46.125 23:12:18 -- nvmf/common.sh@123 -- # set -e 00:26:46.125 23:12:18 -- nvmf/common.sh@124 -- # return 0 00:26:46.125 23:12:18 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:26:46.125 23:12:18 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:26:46.125 23:12:18 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:46.125 23:12:18 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:46.125 23:12:18 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:46.125 23:12:18 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:46.125 23:12:18 -- 
nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:46.125 23:12:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:46.125 23:12:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:48.032 23:12:20 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:26:48.032 00:26:48.032 real 0m7.746s 00:26:48.032 user 0m18.584s 00:26:48.032 sys 0m1.572s 00:26:48.032 23:12:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:48.032 23:12:20 -- common/autotest_common.sh@10 -- # set +x 00:26:48.032 ************************************ 00:26:48.032 END TEST nvmf_shutdown_tc3 00:26:48.032 ************************************ 00:26:48.032 23:12:20 -- target/shutdown.sh@150 -- # trap - SIGINT SIGTERM EXIT 00:26:48.032 00:26:48.032 real 0m32.781s 00:26:48.032 user 1m18.011s 00:26:48.032 sys 0m10.493s 00:26:48.032 23:12:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:48.032 23:12:20 -- common/autotest_common.sh@10 -- # set +x 00:26:48.032 ************************************ 00:26:48.032 END TEST nvmf_shutdown 00:26:48.032 ************************************ 00:26:48.032 23:12:20 -- nvmf/nvmf.sh@86 -- # timing_exit target 00:26:48.032 23:12:20 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:48.032 23:12:20 -- common/autotest_common.sh@10 -- # set +x 00:26:48.032 23:12:20 -- nvmf/nvmf.sh@88 -- # timing_enter host 00:26:48.032 23:12:20 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:48.032 23:12:20 -- common/autotest_common.sh@10 -- # set +x 00:26:48.032 23:12:20 -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:26:48.032 23:12:20 -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:26:48.032 23:12:20 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:26:48.032 23:12:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:48.032 23:12:20 -- common/autotest_common.sh@10 -- # 
set +x 00:26:48.032 ************************************ 00:26:48.032 START TEST nvmf_multicontroller 00:26:48.032 ************************************ 00:26:48.032 23:12:20 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:26:48.292 * Looking for test storage... 00:26:48.292 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:48.292 23:12:20 -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:48.292 23:12:20 -- nvmf/common.sh@7 -- # uname -s 00:26:48.292 23:12:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:48.292 23:12:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:48.292 23:12:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:48.292 23:12:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:48.292 23:12:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:48.292 23:12:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:48.292 23:12:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:48.292 23:12:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:48.292 23:12:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:48.292 23:12:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:48.292 23:12:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:26:48.292 23:12:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:26:48.292 23:12:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:48.292 23:12:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:48.292 23:12:20 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:48.292 23:12:20 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:48.292 23:12:20 -- scripts/common.sh@433 
-- # [[ -e /bin/wpdk_common.sh ]] 00:26:48.292 23:12:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:48.292 23:12:20 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:48.292 23:12:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.292 23:12:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.292 23:12:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.292 23:12:20 -- paths/export.sh@5 -- # export PATH 
00:26:48.292 23:12:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.292 23:12:20 -- nvmf/common.sh@46 -- # : 0 00:26:48.292 23:12:20 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:48.292 23:12:20 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:48.292 23:12:20 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:48.292 23:12:20 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:48.292 23:12:20 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:48.292 23:12:20 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:48.292 23:12:20 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:48.292 23:12:20 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:48.292 23:12:20 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:48.292 23:12:20 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:48.292 23:12:20 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:26:48.292 23:12:20 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:26:48.292 23:12:20 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:48.292 23:12:20 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:26:48.292 23:12:20 -- host/multicontroller.sh@23 -- # nvmftestinit 00:26:48.292 23:12:20 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:48.292 23:12:20 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:48.292 23:12:20 -- nvmf/common.sh@436 -- # prepare_net_devs 
00:26:48.292 23:12:20 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:48.292 23:12:20 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:48.292 23:12:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:48.292 23:12:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:48.292 23:12:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:48.292 23:12:20 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:26:48.292 23:12:20 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:26:48.292 23:12:20 -- nvmf/common.sh@284 -- # xtrace_disable 00:26:48.292 23:12:20 -- common/autotest_common.sh@10 -- # set +x 00:26:54.866 23:12:26 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:54.866 23:12:26 -- nvmf/common.sh@290 -- # pci_devs=() 00:26:54.866 23:12:26 -- nvmf/common.sh@290 -- # local -a pci_devs 00:26:54.866 23:12:26 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:26:54.866 23:12:26 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:26:54.866 23:12:26 -- nvmf/common.sh@292 -- # pci_drivers=() 00:26:54.866 23:12:26 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:26:54.866 23:12:26 -- nvmf/common.sh@294 -- # net_devs=() 00:26:54.866 23:12:26 -- nvmf/common.sh@294 -- # local -ga net_devs 00:26:54.866 23:12:26 -- nvmf/common.sh@295 -- # e810=() 00:26:54.866 23:12:26 -- nvmf/common.sh@295 -- # local -ga e810 00:26:54.866 23:12:26 -- nvmf/common.sh@296 -- # x722=() 00:26:54.866 23:12:26 -- nvmf/common.sh@296 -- # local -ga x722 00:26:54.866 23:12:26 -- nvmf/common.sh@297 -- # mlx=() 00:26:54.866 23:12:26 -- nvmf/common.sh@297 -- # local -ga mlx 00:26:54.866 23:12:26 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:54.866 23:12:26 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:54.866 23:12:26 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:54.866 23:12:26 -- nvmf/common.sh@305 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:54.866 23:12:26 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:54.866 23:12:26 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:54.866 23:12:26 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:54.866 23:12:26 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:54.866 23:12:26 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:54.866 23:12:26 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:54.866 23:12:26 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:54.866 23:12:26 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:26:54.866 23:12:26 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:26:54.866 23:12:26 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:26:54.866 23:12:26 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:26:54.866 23:12:26 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:26:54.866 23:12:26 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:26:54.866 23:12:26 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:54.866 23:12:26 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:54.866 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:54.866 23:12:26 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:54.866 23:12:26 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:54.866 23:12:26 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:54.866 23:12:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:54.866 23:12:26 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:54.866 23:12:26 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:54.866 23:12:26 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:54.866 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:54.866 23:12:26 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:54.866 
23:12:26 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:54.866 23:12:26 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:54.866 23:12:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:54.866 23:12:26 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:54.866 23:12:26 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:26:54.866 23:12:26 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:26:54.866 23:12:26 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:26:54.866 23:12:26 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:54.866 23:12:26 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:54.866 23:12:26 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:54.866 23:12:26 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:54.866 23:12:26 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:54.866 Found net devices under 0000:af:00.0: cvl_0_0 00:26:54.866 23:12:26 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:54.866 23:12:26 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:54.866 23:12:26 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:54.866 23:12:26 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:54.866 23:12:26 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:54.866 23:12:26 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:54.866 Found net devices under 0000:af:00.1: cvl_0_1 00:26:54.866 23:12:26 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:54.866 23:12:26 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:26:54.866 23:12:26 -- nvmf/common.sh@402 -- # is_hw=yes 00:26:54.866 23:12:26 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:26:54.866 23:12:26 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:26:54.866 23:12:26 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:26:54.866 23:12:26 -- nvmf/common.sh@228 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:26:54.866 23:12:26 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:54.866 23:12:26 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:54.866 23:12:26 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:26:54.866 23:12:26 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:54.866 23:12:26 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:54.866 23:12:26 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:26:54.866 23:12:26 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:54.866 23:12:26 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:54.866 23:12:26 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:26:54.866 23:12:26 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:26:54.866 23:12:26 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:26:54.866 23:12:26 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:54.866 23:12:26 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:54.866 23:12:26 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:54.866 23:12:26 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:26:54.866 23:12:27 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:54.866 23:12:27 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:54.866 23:12:27 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:54.866 23:12:27 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:26:54.866 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:54.866 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.285 ms 00:26:54.866 00:26:54.866 --- 10.0.0.2 ping statistics --- 00:26:54.866 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:54.866 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:26:54.866 23:12:27 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:54.866 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:54.866 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.253 ms 00:26:54.866 00:26:54.866 --- 10.0.0.1 ping statistics --- 00:26:54.866 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:54.866 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:26:54.866 23:12:27 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:54.866 23:12:27 -- nvmf/common.sh@410 -- # return 0 00:26:54.866 23:12:27 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:26:54.866 23:12:27 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:54.866 23:12:27 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:54.866 23:12:27 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:54.866 23:12:27 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:54.866 23:12:27 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:54.866 23:12:27 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:54.866 23:12:27 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:26:54.866 23:12:27 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:54.866 23:12:27 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:54.866 23:12:27 -- common/autotest_common.sh@10 -- # set +x 00:26:54.866 23:12:27 -- nvmf/common.sh@469 -- # nvmfpid=3331030 00:26:54.866 23:12:27 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:54.866 23:12:27 -- nvmf/common.sh@470 -- # waitforlisten 3331030 00:26:54.866 23:12:27 -- 
common/autotest_common.sh@819 -- # '[' -z 3331030 ']' 00:26:54.866 23:12:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:54.866 23:12:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:54.866 23:12:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:54.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:54.866 23:12:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:54.866 23:12:27 -- common/autotest_common.sh@10 -- # set +x 00:26:54.866 [2024-07-24 23:12:27.205792] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:26:54.866 [2024-07-24 23:12:27.205842] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:54.866 EAL: No free 2048 kB hugepages reported on node 1 00:26:54.866 [2024-07-24 23:12:27.279286] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:55.127 [2024-07-24 23:12:27.317039] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:55.127 [2024-07-24 23:12:27.317150] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:55.127 [2024-07-24 23:12:27.317160] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:55.127 [2024-07-24 23:12:27.317169] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:55.127 [2024-07-24 23:12:27.317273] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:55.127 [2024-07-24 23:12:27.317357] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:55.127 [2024-07-24 23:12:27.317358] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:55.696 23:12:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:55.696 23:12:28 -- common/autotest_common.sh@852 -- # return 0 00:26:55.696 23:12:28 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:55.696 23:12:28 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:55.696 23:12:28 -- common/autotest_common.sh@10 -- # set +x 00:26:55.696 23:12:28 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:55.696 23:12:28 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:55.696 23:12:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:55.696 23:12:28 -- common/autotest_common.sh@10 -- # set +x 00:26:55.696 [2024-07-24 23:12:28.059546] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:55.696 23:12:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:55.696 23:12:28 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:55.696 23:12:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:55.696 23:12:28 -- common/autotest_common.sh@10 -- # set +x 00:26:55.696 Malloc0 00:26:55.696 23:12:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:55.696 23:12:28 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:55.696 23:12:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:55.696 23:12:28 -- common/autotest_common.sh@10 -- # set +x 00:26:55.696 23:12:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:55.696 23:12:28 -- host/multicontroller.sh@31 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:55.696 23:12:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:55.696 23:12:28 -- common/autotest_common.sh@10 -- # set +x 00:26:55.696 23:12:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:55.696 23:12:28 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:55.696 23:12:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:55.696 23:12:28 -- common/autotest_common.sh@10 -- # set +x 00:26:55.955 [2024-07-24 23:12:28.126971] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:55.955 23:12:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:55.955 23:12:28 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:55.955 23:12:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:55.955 23:12:28 -- common/autotest_common.sh@10 -- # set +x 00:26:55.955 [2024-07-24 23:12:28.134932] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:55.955 23:12:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:55.955 23:12:28 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:55.955 23:12:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:55.955 23:12:28 -- common/autotest_common.sh@10 -- # set +x 00:26:55.955 Malloc1 00:26:55.955 23:12:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:55.956 23:12:28 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:26:55.956 23:12:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:55.956 23:12:28 -- common/autotest_common.sh@10 -- # set +x 00:26:55.956 23:12:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:55.956 23:12:28 -- 
host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:26:55.956 23:12:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:55.956 23:12:28 -- common/autotest_common.sh@10 -- # set +x 00:26:55.956 23:12:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:55.956 23:12:28 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:26:55.956 23:12:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:55.956 23:12:28 -- common/autotest_common.sh@10 -- # set +x 00:26:55.956 23:12:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:55.956 23:12:28 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:26:55.956 23:12:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:55.956 23:12:28 -- common/autotest_common.sh@10 -- # set +x 00:26:55.956 23:12:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:55.956 23:12:28 -- host/multicontroller.sh@44 -- # bdevperf_pid=3331292 00:26:55.956 23:12:28 -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:26:55.956 23:12:28 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:55.956 23:12:28 -- host/multicontroller.sh@47 -- # waitforlisten 3331292 /var/tmp/bdevperf.sock 00:26:55.956 23:12:28 -- common/autotest_common.sh@819 -- # '[' -z 3331292 ']' 00:26:55.956 23:12:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:55.956 23:12:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:55.956 23:12:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:26:55.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:55.956 23:12:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:55.956 23:12:28 -- common/autotest_common.sh@10 -- # set +x 00:26:56.893 23:12:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:56.893 23:12:29 -- common/autotest_common.sh@852 -- # return 0 00:26:56.893 23:12:29 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:26:56.893 23:12:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:56.893 23:12:29 -- common/autotest_common.sh@10 -- # set +x 00:26:56.893 NVMe0n1 00:26:56.893 23:12:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:56.893 23:12:29 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:56.893 23:12:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:56.893 23:12:29 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:26:56.893 23:12:29 -- common/autotest_common.sh@10 -- # set +x 00:26:56.893 23:12:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:56.893 1 00:26:56.893 23:12:29 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:26:56.893 23:12:29 -- common/autotest_common.sh@640 -- # local es=0 00:26:56.893 23:12:29 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:26:56.893 23:12:29 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:26:56.893 23:12:29 -- common/autotest_common.sh@632 -- # 
case "$(type -t "$arg")" in 00:26:56.893 23:12:29 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:26:56.893 23:12:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:56.893 23:12:29 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:26:56.893 23:12:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:56.893 23:12:29 -- common/autotest_common.sh@10 -- # set +x 00:26:56.893 request: 00:26:56.893 { 00:26:56.893 "name": "NVMe0", 00:26:56.893 "trtype": "tcp", 00:26:56.893 "traddr": "10.0.0.2", 00:26:56.893 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:26:56.893 "hostaddr": "10.0.0.2", 00:26:56.893 "hostsvcid": "60000", 00:26:56.893 "adrfam": "ipv4", 00:26:56.893 "trsvcid": "4420", 00:26:56.893 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:56.893 "method": "bdev_nvme_attach_controller", 00:26:56.893 "req_id": 1 00:26:56.893 } 00:26:56.893 Got JSON-RPC error response 00:26:56.893 response: 00:26:56.893 { 00:26:56.893 "code": -114, 00:26:56.893 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:26:56.893 } 00:26:56.893 23:12:29 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:26:56.893 23:12:29 -- common/autotest_common.sh@643 -- # es=1 00:26:56.893 23:12:29 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:26:56.893 23:12:29 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:26:56.893 23:12:29 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:26:56.893 23:12:29 -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:26:56.893 23:12:29 -- common/autotest_common.sh@640 -- # local es=0 00:26:56.893 23:12:29 -- common/autotest_common.sh@642 -- # valid_exec_arg 
rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:26:56.893 23:12:29 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:26:56.893 23:12:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:56.893 23:12:29 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:26:56.893 23:12:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:56.893 23:12:29 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:26:56.893 23:12:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:56.893 23:12:29 -- common/autotest_common.sh@10 -- # set +x 00:26:56.893 request: 00:26:56.893 { 00:26:56.893 "name": "NVMe0", 00:26:56.893 "trtype": "tcp", 00:26:56.893 "traddr": "10.0.0.2", 00:26:56.893 "hostaddr": "10.0.0.2", 00:26:56.893 "hostsvcid": "60000", 00:26:56.893 "adrfam": "ipv4", 00:26:56.893 "trsvcid": "4420", 00:26:56.893 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:56.893 "method": "bdev_nvme_attach_controller", 00:26:56.893 "req_id": 1 00:26:56.893 } 00:26:56.893 Got JSON-RPC error response 00:26:56.893 response: 00:26:56.893 { 00:26:56.893 "code": -114, 00:26:56.893 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:26:56.893 } 00:26:56.893 23:12:29 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:26:56.893 23:12:29 -- common/autotest_common.sh@643 -- # es=1 00:26:56.893 23:12:29 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:26:56.893 23:12:29 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:26:56.893 23:12:29 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:26:56.893 23:12:29 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 
-f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:26:56.893 23:12:29 -- common/autotest_common.sh@640 -- # local es=0 00:26:56.893 23:12:29 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:26:56.893 23:12:29 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:26:56.893 23:12:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:56.893 23:12:29 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:26:56.893 23:12:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:56.893 23:12:29 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:26:56.893 23:12:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:56.893 23:12:29 -- common/autotest_common.sh@10 -- # set +x 00:26:56.893 request: 00:26:56.893 { 00:26:56.893 "name": "NVMe0", 00:26:56.893 "trtype": "tcp", 00:26:56.893 "traddr": "10.0.0.2", 00:26:56.893 "hostaddr": "10.0.0.2", 00:26:56.893 "hostsvcid": "60000", 00:26:56.893 "adrfam": "ipv4", 00:26:56.893 "trsvcid": "4420", 00:26:56.893 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:56.893 "multipath": "disable", 00:26:56.893 "method": "bdev_nvme_attach_controller", 00:26:56.893 "req_id": 1 00:26:56.893 } 00:26:56.893 Got JSON-RPC error response 00:26:56.893 response: 00:26:56.893 { 00:26:56.893 "code": -114, 00:26:56.893 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:26:56.893 } 00:26:56.893 23:12:29 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:26:56.893 23:12:29 -- common/autotest_common.sh@643 -- # es=1 00:26:56.893 23:12:29 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:26:56.893 23:12:29 -- 
common/autotest_common.sh@662 -- # [[ -n '' ]] 00:26:56.893 23:12:29 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:26:56.893 23:12:29 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:26:56.893 23:12:29 -- common/autotest_common.sh@640 -- # local es=0 00:26:56.893 23:12:29 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:26:56.893 23:12:29 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:26:56.893 23:12:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:56.893 23:12:29 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:26:56.893 23:12:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:56.893 23:12:29 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:26:56.893 23:12:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:56.893 23:12:29 -- common/autotest_common.sh@10 -- # set +x 00:26:56.893 request: 00:26:56.893 { 00:26:56.893 "name": "NVMe0", 00:26:56.893 "trtype": "tcp", 00:26:56.893 "traddr": "10.0.0.2", 00:26:56.893 "hostaddr": "10.0.0.2", 00:26:56.893 "hostsvcid": "60000", 00:26:56.893 "adrfam": "ipv4", 00:26:56.893 "trsvcid": "4420", 00:26:56.893 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:56.893 "multipath": "failover", 00:26:56.893 "method": "bdev_nvme_attach_controller", 00:26:56.893 "req_id": 1 00:26:56.893 } 00:26:56.893 Got JSON-RPC error response 00:26:56.893 response: 00:26:56.893 { 00:26:56.893 "code": -114, 00:26:56.893 "message": "A controller named NVMe0 already exists with the 
specified network path\n" 00:26:56.893 } 00:26:56.893 23:12:29 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:26:56.893 23:12:29 -- common/autotest_common.sh@643 -- # es=1 00:26:56.893 23:12:29 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:26:56.894 23:12:29 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:26:56.894 23:12:29 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:26:56.894 23:12:29 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:56.894 23:12:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:56.894 23:12:29 -- common/autotest_common.sh@10 -- # set +x 00:26:57.152 00:26:57.152 23:12:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:57.152 23:12:29 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:57.152 23:12:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:57.152 23:12:29 -- common/autotest_common.sh@10 -- # set +x 00:26:57.152 23:12:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:57.152 23:12:29 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:26:57.152 23:12:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:57.152 23:12:29 -- common/autotest_common.sh@10 -- # set +x 00:26:57.152 00:26:57.152 23:12:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:57.152 23:12:29 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:57.152 23:12:29 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:26:57.152 23:12:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:57.152 23:12:29 -- common/autotest_common.sh@10 -- # set +x 
00:26:57.152 23:12:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:57.152 23:12:29 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:26:57.152 23:12:29 -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:58.531 0 00:26:58.531 23:12:30 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:26:58.531 23:12:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:58.531 23:12:30 -- common/autotest_common.sh@10 -- # set +x 00:26:58.531 23:12:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:58.531 23:12:30 -- host/multicontroller.sh@100 -- # killprocess 3331292 00:26:58.531 23:12:30 -- common/autotest_common.sh@926 -- # '[' -z 3331292 ']' 00:26:58.531 23:12:30 -- common/autotest_common.sh@930 -- # kill -0 3331292 00:26:58.531 23:12:30 -- common/autotest_common.sh@931 -- # uname 00:26:58.531 23:12:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:58.531 23:12:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3331292 00:26:58.531 23:12:30 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:58.531 23:12:30 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:58.531 23:12:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3331292' 00:26:58.531 killing process with pid 3331292 00:26:58.531 23:12:30 -- common/autotest_common.sh@945 -- # kill 3331292 00:26:58.531 23:12:30 -- common/autotest_common.sh@950 -- # wait 3331292 00:26:58.531 23:12:30 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:58.531 23:12:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:58.531 23:12:30 -- common/autotest_common.sh@10 -- # set +x 00:26:58.531 23:12:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:58.531 23:12:30 -- host/multicontroller.sh@103 
-- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:58.531 23:12:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:58.531 23:12:30 -- common/autotest_common.sh@10 -- # set +x 00:26:58.531 23:12:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:58.531 23:12:30 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:26:58.531 23:12:30 -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:58.531 23:12:30 -- common/autotest_common.sh@1597 -- # read -r file 00:26:58.531 23:12:30 -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:26:58.531 23:12:30 -- common/autotest_common.sh@1596 -- # sort -u 00:26:58.531 23:12:30 -- common/autotest_common.sh@1598 -- # cat 00:26:58.531 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:26:58.531 [2024-07-24 23:12:28.238810] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:26:58.531 [2024-07-24 23:12:28.238869] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3331292 ] 00:26:58.531 EAL: No free 2048 kB hugepages reported on node 1 00:26:58.531 [2024-07-24 23:12:28.311872] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:58.531 [2024-07-24 23:12:28.349451] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:58.531 [2024-07-24 23:12:29.532579] bdev.c:4553:bdev_name_add: *ERROR*: Bdev name 1b435074-c3f5-40e5-a451-590c8119e41f already exists 00:26:58.531 [2024-07-24 23:12:29.532609] bdev.c:7603:bdev_register: *ERROR*: Unable to add uuid:1b435074-c3f5-40e5-a451-590c8119e41f alias for bdev NVMe1n1 00:26:58.531 [2024-07-24 23:12:29.532621] bdev_nvme.c:4236:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:26:58.531 Running I/O for 1 seconds... 00:26:58.531 00:26:58.531 Latency(us) 00:26:58.531 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:58.531 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:26:58.531 NVMe0n1 : 1.00 25937.79 101.32 0.00 0.00 4925.24 3617.59 16777.22 00:26:58.531 =================================================================================================================== 00:26:58.531 Total : 25937.79 101.32 0.00 0.00 4925.24 3617.59 16777.22 00:26:58.531 Received shutdown signal, test time was about 1.000000 seconds 00:26:58.531 00:26:58.531 Latency(us) 00:26:58.531 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:58.531 =================================================================================================================== 00:26:58.531 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:58.531 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:26:58.531 23:12:30 -- 
common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:58.531 23:12:30 -- common/autotest_common.sh@1597 -- # read -r file 00:26:58.531 23:12:30 -- host/multicontroller.sh@108 -- # nvmftestfini 00:26:58.531 23:12:30 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:58.531 23:12:30 -- nvmf/common.sh@116 -- # sync 00:26:58.531 23:12:30 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:58.790 23:12:30 -- nvmf/common.sh@119 -- # set +e 00:26:58.790 23:12:30 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:58.791 23:12:30 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:58.791 rmmod nvme_tcp 00:26:58.791 rmmod nvme_fabrics 00:26:58.791 rmmod nvme_keyring 00:26:58.791 23:12:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:58.791 23:12:31 -- nvmf/common.sh@123 -- # set -e 00:26:58.791 23:12:31 -- nvmf/common.sh@124 -- # return 0 00:26:58.791 23:12:31 -- nvmf/common.sh@477 -- # '[' -n 3331030 ']' 00:26:58.791 23:12:31 -- nvmf/common.sh@478 -- # killprocess 3331030 00:26:58.791 23:12:31 -- common/autotest_common.sh@926 -- # '[' -z 3331030 ']' 00:26:58.791 23:12:31 -- common/autotest_common.sh@930 -- # kill -0 3331030 00:26:58.791 23:12:31 -- common/autotest_common.sh@931 -- # uname 00:26:58.791 23:12:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:58.791 23:12:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3331030 00:26:58.791 23:12:31 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:26:58.791 23:12:31 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:26:58.791 23:12:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3331030' 00:26:58.791 killing process with pid 3331030 00:26:58.791 23:12:31 -- common/autotest_common.sh@945 -- # kill 3331030 00:26:58.791 23:12:31 -- common/autotest_common.sh@950 -- # wait 3331030 00:26:59.049 23:12:31 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:26:59.049 
23:12:31 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:59.049 23:12:31 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:59.049 23:12:31 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:59.049 23:12:31 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:59.049 23:12:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:59.049 23:12:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:59.049 23:12:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:00.952 23:12:33 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:27:00.952 00:27:00.952 real 0m12.911s 00:27:00.952 user 0m16.529s 00:27:00.952 sys 0m5.954s 00:27:00.952 23:12:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:00.952 23:12:33 -- common/autotest_common.sh@10 -- # set +x 00:27:00.952 ************************************ 00:27:00.952 END TEST nvmf_multicontroller 00:27:00.952 ************************************ 00:27:01.211 23:12:33 -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:27:01.211 23:12:33 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:01.211 23:12:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:01.211 23:12:33 -- common/autotest_common.sh@10 -- # set +x 00:27:01.211 ************************************ 00:27:01.211 START TEST nvmf_aer 00:27:01.211 ************************************ 00:27:01.211 23:12:33 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:27:01.211 * Looking for test storage... 
00:27:01.211 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:01.211 23:12:33 -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:01.211 23:12:33 -- nvmf/common.sh@7 -- # uname -s 00:27:01.211 23:12:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:01.211 23:12:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:01.211 23:12:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:01.211 23:12:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:01.211 23:12:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:01.211 23:12:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:01.211 23:12:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:01.211 23:12:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:01.211 23:12:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:01.211 23:12:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:01.211 23:12:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:27:01.211 23:12:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:27:01.211 23:12:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:01.211 23:12:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:01.211 23:12:33 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:01.211 23:12:33 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:01.211 23:12:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:01.211 23:12:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:01.211 23:12:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:01.211 23:12:33 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:01.211 23:12:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:01.211 23:12:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:01.211 23:12:33 -- paths/export.sh@5 -- # export PATH 00:27:01.211 23:12:33 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:01.211 23:12:33 -- nvmf/common.sh@46 -- # : 0 00:27:01.211 23:12:33 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:01.211 23:12:33 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:01.211 23:12:33 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:01.211 23:12:33 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:01.211 23:12:33 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:01.211 23:12:33 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:01.211 23:12:33 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:01.211 23:12:33 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:01.211 23:12:33 -- host/aer.sh@11 -- # nvmftestinit 00:27:01.212 23:12:33 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:01.212 23:12:33 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:01.212 23:12:33 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:01.212 23:12:33 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:01.212 23:12:33 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:01.212 23:12:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:01.212 23:12:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:01.212 23:12:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:01.212 23:12:33 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:01.212 23:12:33 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:01.212 23:12:33 -- 
nvmf/common.sh@284 -- # xtrace_disable 00:27:01.212 23:12:33 -- common/autotest_common.sh@10 -- # set +x 00:27:07.842 23:12:39 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:07.842 23:12:39 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:07.842 23:12:39 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:07.842 23:12:39 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:07.842 23:12:39 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:07.842 23:12:39 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:07.842 23:12:39 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:27:07.842 23:12:39 -- nvmf/common.sh@294 -- # net_devs=() 00:27:07.842 23:12:39 -- nvmf/common.sh@294 -- # local -ga net_devs 00:27:07.842 23:12:39 -- nvmf/common.sh@295 -- # e810=() 00:27:07.842 23:12:39 -- nvmf/common.sh@295 -- # local -ga e810 00:27:07.842 23:12:39 -- nvmf/common.sh@296 -- # x722=() 00:27:07.842 23:12:39 -- nvmf/common.sh@296 -- # local -ga x722 00:27:07.842 23:12:39 -- nvmf/common.sh@297 -- # mlx=() 00:27:07.842 23:12:39 -- nvmf/common.sh@297 -- # local -ga mlx 00:27:07.842 23:12:39 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:07.842 23:12:39 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:07.842 23:12:39 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:07.842 23:12:39 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:07.842 23:12:39 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:07.842 23:12:39 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:07.842 23:12:39 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:07.842 23:12:39 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:07.842 23:12:39 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:07.842 23:12:39 -- nvmf/common.sh@316 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:07.842 23:12:39 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:07.842 23:12:39 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:27:07.842 23:12:39 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:27:07.842 23:12:39 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:27:07.842 23:12:39 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:27:07.842 23:12:39 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:27:07.842 23:12:39 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:07.842 23:12:39 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:07.842 23:12:39 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:07.842 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:07.842 23:12:39 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:07.842 23:12:39 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:07.842 23:12:39 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:07.842 23:12:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:07.842 23:12:39 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:07.842 23:12:39 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:07.842 23:12:39 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:07.842 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:07.842 23:12:39 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:07.842 23:12:39 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:07.842 23:12:39 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:07.842 23:12:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:07.842 23:12:39 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:07.842 23:12:39 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:07.842 23:12:39 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:27:07.842 23:12:39 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:27:07.842 23:12:39 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 
00:27:07.842 23:12:39 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:07.842 23:12:39 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:07.842 23:12:39 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:07.842 23:12:39 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:07.842 Found net devices under 0000:af:00.0: cvl_0_0 00:27:07.842 23:12:39 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:07.842 23:12:39 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:07.843 23:12:39 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:07.843 23:12:39 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:07.843 23:12:39 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:07.843 23:12:39 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:07.843 Found net devices under 0000:af:00.1: cvl_0_1 00:27:07.843 23:12:39 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:07.843 23:12:39 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:07.843 23:12:39 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:07.843 23:12:39 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:07.843 23:12:39 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:27:07.843 23:12:39 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:27:07.843 23:12:39 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:07.843 23:12:39 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:07.843 23:12:39 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:07.843 23:12:39 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:27:07.843 23:12:39 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:07.843 23:12:39 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:07.843 23:12:39 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:27:07.843 23:12:39 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
00:27:07.843 23:12:39 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:07.843 23:12:39 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:27:07.843 23:12:39 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:27:07.843 23:12:39 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:27:07.843 23:12:39 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:07.843 23:12:39 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:07.843 23:12:39 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:07.843 23:12:39 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:27:07.843 23:12:39 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:07.843 23:12:40 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:07.843 23:12:40 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:07.843 23:12:40 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:27:07.843 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:07.843 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:27:07.843 00:27:07.843 --- 10.0.0.2 ping statistics --- 00:27:07.843 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:07.843 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:27:07.843 23:12:40 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:07.843 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:07.843 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:27:07.843 00:27:07.843 --- 10.0.0.1 ping statistics --- 00:27:07.843 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:07.843 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:27:07.843 23:12:40 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:07.843 23:12:40 -- nvmf/common.sh@410 -- # return 0 00:27:07.843 23:12:40 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:07.843 23:12:40 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:07.843 23:12:40 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:27:07.843 23:12:40 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:07.843 23:12:40 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:07.843 23:12:40 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:07.843 23:12:40 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:07.843 23:12:40 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:27:07.843 23:12:40 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:27:07.843 23:12:40 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:07.843 23:12:40 -- common/autotest_common.sh@10 -- # set +x 00:27:07.843 23:12:40 -- nvmf/common.sh@469 -- # nvmfpid=3335301 00:27:07.843 23:12:40 -- nvmf/common.sh@470 -- # waitforlisten 3335301 00:27:07.843 23:12:40 -- common/autotest_common.sh@819 -- # '[' -z 3335301 ']' 00:27:07.843 23:12:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:07.843 23:12:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:07.843 23:12:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:07.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:27:07.843 23:12:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:07.843 23:12:40 -- common/autotest_common.sh@10 -- # set +x 00:27:07.843 23:12:40 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:07.843 [2024-07-24 23:12:40.189458] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:27:07.843 [2024-07-24 23:12:40.189507] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:07.843 EAL: No free 2048 kB hugepages reported on node 1 00:27:07.843 [2024-07-24 23:12:40.266283] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:08.102 [2024-07-24 23:12:40.305268] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:08.102 [2024-07-24 23:12:40.305381] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:08.102 [2024-07-24 23:12:40.305391] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:08.102 [2024-07-24 23:12:40.305400] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:08.102 [2024-07-24 23:12:40.305445] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:08.102 [2024-07-24 23:12:40.305462] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:08.102 [2024-07-24 23:12:40.305557] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:08.102 [2024-07-24 23:12:40.305559] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:08.671 23:12:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:08.671 23:12:40 -- common/autotest_common.sh@852 -- # return 0 00:27:08.671 23:12:40 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:27:08.671 23:12:40 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:08.671 23:12:40 -- common/autotest_common.sh@10 -- # set +x 00:27:08.671 23:12:41 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:08.671 23:12:41 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:08.671 23:12:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:08.671 23:12:41 -- common/autotest_common.sh@10 -- # set +x 00:27:08.671 [2024-07-24 23:12:41.041099] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:08.671 23:12:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:08.671 23:12:41 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:27:08.671 23:12:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:08.671 23:12:41 -- common/autotest_common.sh@10 -- # set +x 00:27:08.671 Malloc0 00:27:08.671 23:12:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:08.671 23:12:41 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:27:08.671 23:12:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:08.671 23:12:41 -- common/autotest_common.sh@10 -- # set +x 00:27:08.671 23:12:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 
]] 00:27:08.671 23:12:41 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:08.671 23:12:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:08.671 23:12:41 -- common/autotest_common.sh@10 -- # set +x 00:27:08.671 23:12:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:08.671 23:12:41 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:08.671 23:12:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:08.671 23:12:41 -- common/autotest_common.sh@10 -- # set +x 00:27:08.671 [2024-07-24 23:12:41.095785] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:08.671 23:12:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:08.671 23:12:41 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:27:08.671 23:12:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:08.930 23:12:41 -- common/autotest_common.sh@10 -- # set +x 00:27:08.930 [2024-07-24 23:12:41.103582] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:27:08.930 [ 00:27:08.930 { 00:27:08.930 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:08.930 "subtype": "Discovery", 00:27:08.930 "listen_addresses": [], 00:27:08.930 "allow_any_host": true, 00:27:08.930 "hosts": [] 00:27:08.930 }, 00:27:08.930 { 00:27:08.930 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:08.930 "subtype": "NVMe", 00:27:08.930 "listen_addresses": [ 00:27:08.930 { 00:27:08.930 "transport": "TCP", 00:27:08.930 "trtype": "TCP", 00:27:08.930 "adrfam": "IPv4", 00:27:08.930 "traddr": "10.0.0.2", 00:27:08.930 "trsvcid": "4420" 00:27:08.930 } 00:27:08.930 ], 00:27:08.930 "allow_any_host": true, 00:27:08.930 "hosts": [], 00:27:08.930 "serial_number": "SPDK00000000000001", 00:27:08.930 "model_number": "SPDK bdev Controller", 
00:27:08.930 "max_namespaces": 2, 00:27:08.930 "min_cntlid": 1, 00:27:08.931 "max_cntlid": 65519, 00:27:08.931 "namespaces": [ 00:27:08.931 { 00:27:08.931 "nsid": 1, 00:27:08.931 "bdev_name": "Malloc0", 00:27:08.931 "name": "Malloc0", 00:27:08.931 "nguid": "64E107816346491AB629F52E30C930E5", 00:27:08.931 "uuid": "64e10781-6346-491a-b629-f52e30c930e5" 00:27:08.931 } 00:27:08.931 ] 00:27:08.931 } 00:27:08.931 ] 00:27:08.931 23:12:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:08.931 23:12:41 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:27:08.931 23:12:41 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:27:08.931 23:12:41 -- host/aer.sh@33 -- # aerpid=3335584 00:27:08.931 23:12:41 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:27:08.931 23:12:41 -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:27:08.931 23:12:41 -- common/autotest_common.sh@1244 -- # local i=0 00:27:08.931 23:12:41 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:08.931 23:12:41 -- common/autotest_common.sh@1246 -- # '[' 0 -lt 200 ']' 00:27:08.931 23:12:41 -- common/autotest_common.sh@1247 -- # i=1 00:27:08.931 23:12:41 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:27:08.931 EAL: No free 2048 kB hugepages reported on node 1 00:27:08.931 23:12:41 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:08.931 23:12:41 -- common/autotest_common.sh@1246 -- # '[' 1 -lt 200 ']' 00:27:08.931 23:12:41 -- common/autotest_common.sh@1247 -- # i=2 00:27:08.931 23:12:41 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:27:08.931 23:12:41 -- common/autotest_common.sh@1245 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:27:08.931 23:12:41 -- common/autotest_common.sh@1246 -- # '[' 2 -lt 200 ']' 00:27:08.931 23:12:41 -- common/autotest_common.sh@1247 -- # i=3 00:27:08.931 23:12:41 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:27:09.190 23:12:41 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:09.190 23:12:41 -- common/autotest_common.sh@1251 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:09.190 23:12:41 -- common/autotest_common.sh@1255 -- # return 0 00:27:09.190 23:12:41 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:27:09.190 23:12:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:09.190 23:12:41 -- common/autotest_common.sh@10 -- # set +x 00:27:09.190 Malloc1 00:27:09.190 23:12:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:09.190 23:12:41 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:27:09.190 23:12:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:09.190 23:12:41 -- common/autotest_common.sh@10 -- # set +x 00:27:09.190 23:12:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:09.190 23:12:41 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:27:09.190 23:12:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:09.190 23:12:41 -- common/autotest_common.sh@10 -- # set +x 00:27:09.190 Asynchronous Event Request test 00:27:09.190 Attaching to 10.0.0.2 00:27:09.190 Attached to 10.0.0.2 00:27:09.190 Registering asynchronous event callbacks... 00:27:09.190 Starting namespace attribute notice tests for all controllers... 00:27:09.190 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:27:09.190 aer_cb - Changed Namespace 00:27:09.190 Cleaning up... 
00:27:09.190 [ 00:27:09.190 { 00:27:09.190 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:09.190 "subtype": "Discovery", 00:27:09.190 "listen_addresses": [], 00:27:09.190 "allow_any_host": true, 00:27:09.190 "hosts": [] 00:27:09.190 }, 00:27:09.190 { 00:27:09.190 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:09.190 "subtype": "NVMe", 00:27:09.190 "listen_addresses": [ 00:27:09.190 { 00:27:09.190 "transport": "TCP", 00:27:09.190 "trtype": "TCP", 00:27:09.190 "adrfam": "IPv4", 00:27:09.190 "traddr": "10.0.0.2", 00:27:09.190 "trsvcid": "4420" 00:27:09.190 } 00:27:09.190 ], 00:27:09.190 "allow_any_host": true, 00:27:09.190 "hosts": [], 00:27:09.190 "serial_number": "SPDK00000000000001", 00:27:09.190 "model_number": "SPDK bdev Controller", 00:27:09.190 "max_namespaces": 2, 00:27:09.190 "min_cntlid": 1, 00:27:09.190 "max_cntlid": 65519, 00:27:09.190 "namespaces": [ 00:27:09.190 { 00:27:09.190 "nsid": 1, 00:27:09.190 "bdev_name": "Malloc0", 00:27:09.190 "name": "Malloc0", 00:27:09.190 "nguid": "64E107816346491AB629F52E30C930E5", 00:27:09.190 "uuid": "64e10781-6346-491a-b629-f52e30c930e5" 00:27:09.190 }, 00:27:09.190 { 00:27:09.190 "nsid": 2, 00:27:09.190 "bdev_name": "Malloc1", 00:27:09.190 "name": "Malloc1", 00:27:09.190 "nguid": "F9480258FDBE45D3A4A1A4D00E5BA246", 00:27:09.190 "uuid": "f9480258-fdbe-45d3-a4a1-a4d00e5ba246" 00:27:09.190 } 00:27:09.190 ] 00:27:09.190 } 00:27:09.190 ] 00:27:09.190 23:12:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:09.190 23:12:41 -- host/aer.sh@43 -- # wait 3335584 00:27:09.190 23:12:41 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:27:09.190 23:12:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:09.190 23:12:41 -- common/autotest_common.sh@10 -- # set +x 00:27:09.190 23:12:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:09.190 23:12:41 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:27:09.190 23:12:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:09.190 
23:12:41 -- common/autotest_common.sh@10 -- # set +x 00:27:09.190 23:12:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:09.190 23:12:41 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:09.190 23:12:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:09.190 23:12:41 -- common/autotest_common.sh@10 -- # set +x 00:27:09.190 23:12:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:09.190 23:12:41 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:27:09.190 23:12:41 -- host/aer.sh@51 -- # nvmftestfini 00:27:09.190 23:12:41 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:09.190 23:12:41 -- nvmf/common.sh@116 -- # sync 00:27:09.190 23:12:41 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:09.190 23:12:41 -- nvmf/common.sh@119 -- # set +e 00:27:09.190 23:12:41 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:09.190 23:12:41 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:09.190 rmmod nvme_tcp 00:27:09.190 rmmod nvme_fabrics 00:27:09.190 rmmod nvme_keyring 00:27:09.450 23:12:41 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:09.450 23:12:41 -- nvmf/common.sh@123 -- # set -e 00:27:09.450 23:12:41 -- nvmf/common.sh@124 -- # return 0 00:27:09.450 23:12:41 -- nvmf/common.sh@477 -- # '[' -n 3335301 ']' 00:27:09.450 23:12:41 -- nvmf/common.sh@478 -- # killprocess 3335301 00:27:09.450 23:12:41 -- common/autotest_common.sh@926 -- # '[' -z 3335301 ']' 00:27:09.450 23:12:41 -- common/autotest_common.sh@930 -- # kill -0 3335301 00:27:09.450 23:12:41 -- common/autotest_common.sh@931 -- # uname 00:27:09.450 23:12:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:09.450 23:12:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3335301 00:27:09.450 23:12:41 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:09.450 23:12:41 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:09.450 23:12:41 -- common/autotest_common.sh@944 -- # echo 
'killing process with pid 3335301' 00:27:09.450 killing process with pid 3335301 00:27:09.450 23:12:41 -- common/autotest_common.sh@945 -- # kill 3335301 00:27:09.450 [2024-07-24 23:12:41.695919] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:27:09.450 23:12:41 -- common/autotest_common.sh@950 -- # wait 3335301 00:27:09.450 23:12:41 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:09.450 23:12:41 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:09.450 23:12:41 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:09.450 23:12:41 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:09.450 23:12:41 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:27:09.450 23:12:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:09.450 23:12:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:09.450 23:12:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:11.985 23:12:43 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:27:11.985 00:27:11.985 real 0m10.550s 00:27:11.985 user 0m8.076s 00:27:11.985 sys 0m5.534s 00:27:11.985 23:12:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:11.985 23:12:43 -- common/autotest_common.sh@10 -- # set +x 00:27:11.985 ************************************ 00:27:11.985 END TEST nvmf_aer 00:27:11.985 ************************************ 00:27:11.985 23:12:43 -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:27:11.985 23:12:43 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:11.985 23:12:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:11.985 23:12:43 -- common/autotest_common.sh@10 -- # set +x 00:27:11.985 ************************************ 00:27:11.985 START TEST nvmf_async_init 00:27:11.985 
************************************ 00:27:11.985 23:12:43 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:27:11.985 * Looking for test storage... 00:27:11.985 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:11.985 23:12:44 -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:11.985 23:12:44 -- nvmf/common.sh@7 -- # uname -s 00:27:11.985 23:12:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:11.985 23:12:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:11.985 23:12:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:11.985 23:12:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:11.985 23:12:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:11.985 23:12:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:11.985 23:12:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:11.985 23:12:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:11.985 23:12:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:11.985 23:12:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:11.985 23:12:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:27:11.985 23:12:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:27:11.985 23:12:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:11.985 23:12:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:11.985 23:12:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:11.985 23:12:44 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:11.985 23:12:44 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:11.985 23:12:44 -- scripts/common.sh@441 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:11.985 23:12:44 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:11.985 23:12:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.985 23:12:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.985 23:12:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.985 23:12:44 -- paths/export.sh@5 -- # export PATH 00:27:11.985 23:12:44 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.985 23:12:44 -- nvmf/common.sh@46 -- # : 0 00:27:11.985 23:12:44 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:11.985 23:12:44 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:11.985 23:12:44 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:11.985 23:12:44 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:11.985 23:12:44 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:11.985 23:12:44 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:11.985 23:12:44 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:11.985 23:12:44 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:11.985 23:12:44 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:27:11.985 23:12:44 -- host/async_init.sh@14 -- # null_block_size=512 00:27:11.985 23:12:44 -- host/async_init.sh@15 -- # null_bdev=null0 00:27:11.985 23:12:44 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:27:11.985 23:12:44 -- host/async_init.sh@20 -- # uuidgen 00:27:11.985 23:12:44 -- host/async_init.sh@20 -- # tr -d - 00:27:11.985 23:12:44 -- host/async_init.sh@20 -- # nguid=bfe4c8d8b0c34bb6898bd7cbd7927c96 00:27:11.985 23:12:44 -- host/async_init.sh@22 -- # nvmftestinit 00:27:11.985 23:12:44 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:11.985 23:12:44 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:11.985 23:12:44 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:11.985 23:12:44 -- nvmf/common.sh@398 -- # local -g is_hw=no 
00:27:11.985 23:12:44 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:11.985 23:12:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:11.985 23:12:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:11.985 23:12:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:11.985 23:12:44 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:11.985 23:12:44 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:11.985 23:12:44 -- nvmf/common.sh@284 -- # xtrace_disable 00:27:11.985 23:12:44 -- common/autotest_common.sh@10 -- # set +x 00:27:18.557 23:12:50 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:18.557 23:12:50 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:18.557 23:12:50 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:18.557 23:12:50 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:18.557 23:12:50 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:18.557 23:12:50 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:18.557 23:12:50 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:27:18.557 23:12:50 -- nvmf/common.sh@294 -- # net_devs=() 00:27:18.557 23:12:50 -- nvmf/common.sh@294 -- # local -ga net_devs 00:27:18.557 23:12:50 -- nvmf/common.sh@295 -- # e810=() 00:27:18.557 23:12:50 -- nvmf/common.sh@295 -- # local -ga e810 00:27:18.557 23:12:50 -- nvmf/common.sh@296 -- # x722=() 00:27:18.557 23:12:50 -- nvmf/common.sh@296 -- # local -ga x722 00:27:18.557 23:12:50 -- nvmf/common.sh@297 -- # mlx=() 00:27:18.557 23:12:50 -- nvmf/common.sh@297 -- # local -ga mlx 00:27:18.557 23:12:50 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:18.557 23:12:50 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:18.557 23:12:50 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:18.557 23:12:50 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:18.557 23:12:50 -- nvmf/common.sh@307 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:18.557 23:12:50 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:18.557 23:12:50 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:18.557 23:12:50 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:18.557 23:12:50 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:18.557 23:12:50 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:18.557 23:12:50 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:18.557 23:12:50 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:27:18.557 23:12:50 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:27:18.557 23:12:50 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:27:18.557 23:12:50 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:27:18.557 23:12:50 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:27:18.557 23:12:50 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:18.557 23:12:50 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:18.557 23:12:50 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:18.557 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:18.557 23:12:50 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:18.557 23:12:50 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:18.557 23:12:50 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:18.557 23:12:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:18.557 23:12:50 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:18.557 23:12:50 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:18.557 23:12:50 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:18.557 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:18.557 23:12:50 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:18.557 23:12:50 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:18.557 23:12:50 -- 
nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:18.557 23:12:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:18.557 23:12:50 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:18.557 23:12:50 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:18.557 23:12:50 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:27:18.557 23:12:50 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:27:18.557 23:12:50 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:18.557 23:12:50 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:18.557 23:12:50 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:18.557 23:12:50 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:18.557 23:12:50 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:18.557 Found net devices under 0000:af:00.0: cvl_0_0 00:27:18.557 23:12:50 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:18.557 23:12:50 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:18.557 23:12:50 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:18.557 23:12:50 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:18.557 23:12:50 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:18.557 23:12:50 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:18.557 Found net devices under 0000:af:00.1: cvl_0_1 00:27:18.557 23:12:50 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:18.557 23:12:50 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:18.557 23:12:50 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:18.557 23:12:50 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:18.557 23:12:50 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:27:18.557 23:12:50 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:27:18.557 23:12:50 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:18.557 23:12:50 -- nvmf/common.sh@229 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:18.557 23:12:50 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:18.557 23:12:50 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:27:18.557 23:12:50 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:18.557 23:12:50 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:18.557 23:12:50 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:27:18.557 23:12:50 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:18.557 23:12:50 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:18.557 23:12:50 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:27:18.557 23:12:50 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:27:18.557 23:12:50 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:27:18.557 23:12:50 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:18.557 23:12:50 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:18.557 23:12:50 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:18.557 23:12:50 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:27:18.557 23:12:50 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:18.557 23:12:50 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:18.557 23:12:50 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:18.557 23:12:50 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:27:18.557 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:18.557 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.163 ms 00:27:18.557 00:27:18.557 --- 10.0.0.2 ping statistics --- 00:27:18.557 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:18.557 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:27:18.557 23:12:50 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:18.557 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:18.557 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:27:18.557 00:27:18.557 --- 10.0.0.1 ping statistics --- 00:27:18.557 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:18.557 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:27:18.557 23:12:50 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:18.557 23:12:50 -- nvmf/common.sh@410 -- # return 0 00:27:18.557 23:12:50 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:18.557 23:12:50 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:18.557 23:12:50 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:27:18.557 23:12:50 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:18.557 23:12:50 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:18.557 23:12:50 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:18.557 23:12:50 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:18.557 23:12:50 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:27:18.557 23:12:50 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:27:18.557 23:12:50 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:18.557 23:12:50 -- common/autotest_common.sh@10 -- # set +x 00:27:18.557 23:12:50 -- nvmf/common.sh@469 -- # nvmfpid=3339268 00:27:18.557 23:12:50 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:27:18.558 23:12:50 -- nvmf/common.sh@470 -- # waitforlisten 3339268 00:27:18.558 23:12:50 -- common/autotest_common.sh@819 
-- # '[' -z 3339268 ']' 00:27:18.558 23:12:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:18.558 23:12:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:18.558 23:12:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:18.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:18.558 23:12:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:18.558 23:12:50 -- common/autotest_common.sh@10 -- # set +x 00:27:18.558 [2024-07-24 23:12:50.672833] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:27:18.558 [2024-07-24 23:12:50.672884] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:18.558 EAL: No free 2048 kB hugepages reported on node 1 00:27:18.558 [2024-07-24 23:12:50.750718] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:18.558 [2024-07-24 23:12:50.786506] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:18.558 [2024-07-24 23:12:50.786642] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:18.558 [2024-07-24 23:12:50.786651] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:18.558 [2024-07-24 23:12:50.786661] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:18.558 [2024-07-24 23:12:50.786689] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:19.126 23:12:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:19.126 23:12:51 -- common/autotest_common.sh@852 -- # return 0 00:27:19.126 23:12:51 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:27:19.126 23:12:51 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:19.126 23:12:51 -- common/autotest_common.sh@10 -- # set +x 00:27:19.126 23:12:51 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:19.126 23:12:51 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:27:19.126 23:12:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:19.126 23:12:51 -- common/autotest_common.sh@10 -- # set +x 00:27:19.126 [2024-07-24 23:12:51.511249] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:19.126 23:12:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:19.126 23:12:51 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:27:19.126 23:12:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:19.126 23:12:51 -- common/autotest_common.sh@10 -- # set +x 00:27:19.126 null0 00:27:19.126 23:12:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:19.126 23:12:51 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:27:19.126 23:12:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:19.126 23:12:51 -- common/autotest_common.sh@10 -- # set +x 00:27:19.126 23:12:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:19.126 23:12:51 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:27:19.126 23:12:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:19.126 23:12:51 -- common/autotest_common.sh@10 -- # set +x 00:27:19.126 23:12:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:19.126 23:12:51 -- 
host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g bfe4c8d8b0c34bb6898bd7cbd7927c96 00:27:19.127 23:12:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:19.127 23:12:51 -- common/autotest_common.sh@10 -- # set +x 00:27:19.127 23:12:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:19.127 23:12:51 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:19.127 23:12:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:19.127 23:12:51 -- common/autotest_common.sh@10 -- # set +x 00:27:19.127 [2024-07-24 23:12:51.555506] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:19.386 23:12:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:19.386 23:12:51 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:27:19.386 23:12:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:19.386 23:12:51 -- common/autotest_common.sh@10 -- # set +x 00:27:19.386 nvme0n1 00:27:19.386 23:12:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:19.386 23:12:51 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:19.386 23:12:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:19.386 23:12:51 -- common/autotest_common.sh@10 -- # set +x 00:27:19.386 [ 00:27:19.386 { 00:27:19.386 "name": "nvme0n1", 00:27:19.386 "aliases": [ 00:27:19.386 "bfe4c8d8-b0c3-4bb6-898b-d7cbd7927c96" 00:27:19.386 ], 00:27:19.386 "product_name": "NVMe disk", 00:27:19.386 "block_size": 512, 00:27:19.386 "num_blocks": 2097152, 00:27:19.386 "uuid": "bfe4c8d8-b0c3-4bb6-898b-d7cbd7927c96", 00:27:19.386 "assigned_rate_limits": { 00:27:19.386 "rw_ios_per_sec": 0, 00:27:19.386 "rw_mbytes_per_sec": 0, 00:27:19.386 "r_mbytes_per_sec": 0, 00:27:19.386 "w_mbytes_per_sec": 0 00:27:19.386 }, 00:27:19.386 
"claimed": false, 00:27:19.386 "zoned": false, 00:27:19.386 "supported_io_types": { 00:27:19.386 "read": true, 00:27:19.386 "write": true, 00:27:19.386 "unmap": false, 00:27:19.386 "write_zeroes": true, 00:27:19.386 "flush": true, 00:27:19.386 "reset": true, 00:27:19.386 "compare": true, 00:27:19.386 "compare_and_write": true, 00:27:19.386 "abort": true, 00:27:19.646 "nvme_admin": true, 00:27:19.646 "nvme_io": true 00:27:19.646 }, 00:27:19.646 "driver_specific": { 00:27:19.646 "nvme": [ 00:27:19.646 { 00:27:19.646 "trid": { 00:27:19.646 "trtype": "TCP", 00:27:19.646 "adrfam": "IPv4", 00:27:19.646 "traddr": "10.0.0.2", 00:27:19.646 "trsvcid": "4420", 00:27:19.646 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:19.646 }, 00:27:19.646 "ctrlr_data": { 00:27:19.646 "cntlid": 1, 00:27:19.646 "vendor_id": "0x8086", 00:27:19.646 "model_number": "SPDK bdev Controller", 00:27:19.646 "serial_number": "00000000000000000000", 00:27:19.646 "firmware_revision": "24.01.1", 00:27:19.646 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:19.646 "oacs": { 00:27:19.646 "security": 0, 00:27:19.646 "format": 0, 00:27:19.646 "firmware": 0, 00:27:19.646 "ns_manage": 0 00:27:19.646 }, 00:27:19.646 "multi_ctrlr": true, 00:27:19.646 "ana_reporting": false 00:27:19.646 }, 00:27:19.646 "vs": { 00:27:19.646 "nvme_version": "1.3" 00:27:19.646 }, 00:27:19.646 "ns_data": { 00:27:19.646 "id": 1, 00:27:19.646 "can_share": true 00:27:19.646 } 00:27:19.646 } 00:27:19.646 ], 00:27:19.646 "mp_policy": "active_passive" 00:27:19.646 } 00:27:19.646 } 00:27:19.646 ] 00:27:19.646 23:12:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:19.646 23:12:51 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:27:19.646 23:12:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:19.646 23:12:51 -- common/autotest_common.sh@10 -- # set +x 00:27:19.646 [2024-07-24 23:12:51.832089] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 
00:27:19.646 [2024-07-24 23:12:51.832147] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397e40 (9): Bad file descriptor 00:27:19.646 [2024-07-24 23:12:51.973794] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:27:19.646 23:12:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:19.646 23:12:51 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:19.646 23:12:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:19.646 23:12:51 -- common/autotest_common.sh@10 -- # set +x 00:27:19.646 [ 00:27:19.646 { 00:27:19.646 "name": "nvme0n1", 00:27:19.646 "aliases": [ 00:27:19.646 "bfe4c8d8-b0c3-4bb6-898b-d7cbd7927c96" 00:27:19.646 ], 00:27:19.646 "product_name": "NVMe disk", 00:27:19.646 "block_size": 512, 00:27:19.646 "num_blocks": 2097152, 00:27:19.646 "uuid": "bfe4c8d8-b0c3-4bb6-898b-d7cbd7927c96", 00:27:19.646 "assigned_rate_limits": { 00:27:19.646 "rw_ios_per_sec": 0, 00:27:19.646 "rw_mbytes_per_sec": 0, 00:27:19.646 "r_mbytes_per_sec": 0, 00:27:19.646 "w_mbytes_per_sec": 0 00:27:19.646 }, 00:27:19.646 "claimed": false, 00:27:19.646 "zoned": false, 00:27:19.646 "supported_io_types": { 00:27:19.646 "read": true, 00:27:19.646 "write": true, 00:27:19.646 "unmap": false, 00:27:19.646 "write_zeroes": true, 00:27:19.646 "flush": true, 00:27:19.646 "reset": true, 00:27:19.646 "compare": true, 00:27:19.646 "compare_and_write": true, 00:27:19.646 "abort": true, 00:27:19.646 "nvme_admin": true, 00:27:19.646 "nvme_io": true 00:27:19.646 }, 00:27:19.646 "driver_specific": { 00:27:19.646 "nvme": [ 00:27:19.646 { 00:27:19.646 "trid": { 00:27:19.646 "trtype": "TCP", 00:27:19.646 "adrfam": "IPv4", 00:27:19.646 "traddr": "10.0.0.2", 00:27:19.646 "trsvcid": "4420", 00:27:19.646 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:19.646 }, 00:27:19.646 "ctrlr_data": { 00:27:19.646 "cntlid": 2, 00:27:19.646 "vendor_id": "0x8086", 00:27:19.646 "model_number": "SPDK bdev 
Controller", 00:27:19.646 "serial_number": "00000000000000000000", 00:27:19.646 "firmware_revision": "24.01.1", 00:27:19.646 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:19.646 "oacs": { 00:27:19.646 "security": 0, 00:27:19.646 "format": 0, 00:27:19.646 "firmware": 0, 00:27:19.646 "ns_manage": 0 00:27:19.646 }, 00:27:19.646 "multi_ctrlr": true, 00:27:19.646 "ana_reporting": false 00:27:19.646 }, 00:27:19.646 "vs": { 00:27:19.646 "nvme_version": "1.3" 00:27:19.646 }, 00:27:19.646 "ns_data": { 00:27:19.646 "id": 1, 00:27:19.646 "can_share": true 00:27:19.646 } 00:27:19.646 } 00:27:19.646 ], 00:27:19.646 "mp_policy": "active_passive" 00:27:19.646 } 00:27:19.646 } 00:27:19.646 ] 00:27:19.646 23:12:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:19.646 23:12:52 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:19.646 23:12:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:19.646 23:12:52 -- common/autotest_common.sh@10 -- # set +x 00:27:19.646 23:12:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:19.646 23:12:52 -- host/async_init.sh@53 -- # mktemp 00:27:19.646 23:12:52 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.oo5Ih0ZltB 00:27:19.646 23:12:52 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:27:19.646 23:12:52 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.oo5Ih0ZltB 00:27:19.646 23:12:52 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:27:19.646 23:12:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:19.646 23:12:52 -- common/autotest_common.sh@10 -- # set +x 00:27:19.646 23:12:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:19.646 23:12:52 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:27:19.646 23:12:52 -- common/autotest_common.sh@551 -- # xtrace_disable 
00:27:19.646 23:12:52 -- common/autotest_common.sh@10 -- # set +x 00:27:19.646 [2024-07-24 23:12:52.044729] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:27:19.646 [2024-07-24 23:12:52.044852] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:19.646 23:12:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:19.646 23:12:52 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.oo5Ih0ZltB 00:27:19.646 23:12:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:19.646 23:12:52 -- common/autotest_common.sh@10 -- # set +x 00:27:19.646 23:12:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:19.646 23:12:52 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.oo5Ih0ZltB 00:27:19.646 23:12:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:19.646 23:12:52 -- common/autotest_common.sh@10 -- # set +x 00:27:19.646 [2024-07-24 23:12:52.064780] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:19.906 nvme0n1 00:27:19.906 23:12:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:19.906 23:12:52 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:19.906 23:12:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:19.906 23:12:52 -- common/autotest_common.sh@10 -- # set +x 00:27:19.906 [ 00:27:19.906 { 00:27:19.906 "name": "nvme0n1", 00:27:19.906 "aliases": [ 00:27:19.906 "bfe4c8d8-b0c3-4bb6-898b-d7cbd7927c96" 00:27:19.906 ], 00:27:19.906 "product_name": "NVMe disk", 00:27:19.906 "block_size": 512, 00:27:19.906 "num_blocks": 2097152, 00:27:19.906 "uuid": "bfe4c8d8-b0c3-4bb6-898b-d7cbd7927c96", 00:27:19.906 "assigned_rate_limits": { 00:27:19.906 "rw_ios_per_sec": 0, 
00:27:19.906 "rw_mbytes_per_sec": 0, 00:27:19.906 "r_mbytes_per_sec": 0, 00:27:19.906 "w_mbytes_per_sec": 0 00:27:19.906 }, 00:27:19.906 "claimed": false, 00:27:19.906 "zoned": false, 00:27:19.906 "supported_io_types": { 00:27:19.906 "read": true, 00:27:19.906 "write": true, 00:27:19.906 "unmap": false, 00:27:19.906 "write_zeroes": true, 00:27:19.906 "flush": true, 00:27:19.906 "reset": true, 00:27:19.906 "compare": true, 00:27:19.906 "compare_and_write": true, 00:27:19.906 "abort": true, 00:27:19.906 "nvme_admin": true, 00:27:19.906 "nvme_io": true 00:27:19.906 }, 00:27:19.906 "driver_specific": { 00:27:19.906 "nvme": [ 00:27:19.906 { 00:27:19.906 "trid": { 00:27:19.906 "trtype": "TCP", 00:27:19.906 "adrfam": "IPv4", 00:27:19.906 "traddr": "10.0.0.2", 00:27:19.906 "trsvcid": "4421", 00:27:19.906 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:19.906 }, 00:27:19.906 "ctrlr_data": { 00:27:19.906 "cntlid": 3, 00:27:19.906 "vendor_id": "0x8086", 00:27:19.906 "model_number": "SPDK bdev Controller", 00:27:19.906 "serial_number": "00000000000000000000", 00:27:19.906 "firmware_revision": "24.01.1", 00:27:19.906 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:19.906 "oacs": { 00:27:19.906 "security": 0, 00:27:19.906 "format": 0, 00:27:19.906 "firmware": 0, 00:27:19.906 "ns_manage": 0 00:27:19.906 }, 00:27:19.906 "multi_ctrlr": true, 00:27:19.906 "ana_reporting": false 00:27:19.906 }, 00:27:19.906 "vs": { 00:27:19.906 "nvme_version": "1.3" 00:27:19.906 }, 00:27:19.906 "ns_data": { 00:27:19.906 "id": 1, 00:27:19.906 "can_share": true 00:27:19.906 } 00:27:19.906 } 00:27:19.906 ], 00:27:19.906 "mp_policy": "active_passive" 00:27:19.906 } 00:27:19.906 } 00:27:19.906 ] 00:27:19.906 23:12:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:19.906 23:12:52 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:19.906 23:12:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:19.906 23:12:52 -- common/autotest_common.sh@10 -- # set +x 00:27:19.906 
23:12:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:19.906 23:12:52 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.oo5Ih0ZltB 00:27:19.906 23:12:52 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:27:19.906 23:12:52 -- host/async_init.sh@78 -- # nvmftestfini 00:27:19.906 23:12:52 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:19.906 23:12:52 -- nvmf/common.sh@116 -- # sync 00:27:19.906 23:12:52 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:19.906 23:12:52 -- nvmf/common.sh@119 -- # set +e 00:27:19.906 23:12:52 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:19.906 23:12:52 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:19.906 rmmod nvme_tcp 00:27:19.906 rmmod nvme_fabrics 00:27:19.906 rmmod nvme_keyring 00:27:19.906 23:12:52 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:19.906 23:12:52 -- nvmf/common.sh@123 -- # set -e 00:27:19.906 23:12:52 -- nvmf/common.sh@124 -- # return 0 00:27:19.906 23:12:52 -- nvmf/common.sh@477 -- # '[' -n 3339268 ']' 00:27:19.906 23:12:52 -- nvmf/common.sh@478 -- # killprocess 3339268 00:27:19.906 23:12:52 -- common/autotest_common.sh@926 -- # '[' -z 3339268 ']' 00:27:19.906 23:12:52 -- common/autotest_common.sh@930 -- # kill -0 3339268 00:27:19.906 23:12:52 -- common/autotest_common.sh@931 -- # uname 00:27:19.906 23:12:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:19.906 23:12:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3339268 00:27:19.906 23:12:52 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:19.906 23:12:52 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:19.906 23:12:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3339268' 00:27:19.906 killing process with pid 3339268 00:27:19.906 23:12:52 -- common/autotest_common.sh@945 -- # kill 3339268 00:27:19.906 23:12:52 -- common/autotest_common.sh@950 -- # wait 3339268 00:27:20.166 23:12:52 -- nvmf/common.sh@480 -- # '[' '' == iso 
']' 00:27:20.166 23:12:52 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:20.166 23:12:52 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:20.166 23:12:52 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:20.166 23:12:52 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:27:20.166 23:12:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:20.166 23:12:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:20.166 23:12:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:22.712 23:12:54 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:27:22.712 00:27:22.712 real 0m10.547s 00:27:22.712 user 0m3.715s 00:27:22.712 sys 0m5.427s 00:27:22.712 23:12:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:22.712 23:12:54 -- common/autotest_common.sh@10 -- # set +x 00:27:22.712 ************************************ 00:27:22.712 END TEST nvmf_async_init 00:27:22.712 ************************************ 00:27:22.712 23:12:54 -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:27:22.712 23:12:54 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:22.712 23:12:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:22.712 23:12:54 -- common/autotest_common.sh@10 -- # set +x 00:27:22.712 ************************************ 00:27:22.712 START TEST dma 00:27:22.712 ************************************ 00:27:22.712 23:12:54 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:27:22.712 * Looking for test storage... 
00:27:22.712 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:22.712 23:12:54 -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:22.712 23:12:54 -- nvmf/common.sh@7 -- # uname -s 00:27:22.712 23:12:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:22.712 23:12:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:22.712 23:12:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:22.712 23:12:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:22.712 23:12:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:22.712 23:12:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:22.713 23:12:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:22.713 23:12:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:22.713 23:12:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:22.713 23:12:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:22.713 23:12:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:27:22.713 23:12:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:27:22.713 23:12:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:22.713 23:12:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:22.713 23:12:54 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:22.713 23:12:54 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:22.713 23:12:54 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:22.713 23:12:54 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:22.713 23:12:54 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:22.713 23:12:54 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:22.713 23:12:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:22.713 23:12:54 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:22.713 23:12:54 -- paths/export.sh@5 -- # export PATH 00:27:22.713 23:12:54 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:22.713 23:12:54 -- nvmf/common.sh@46 -- # : 0 00:27:22.713 23:12:54 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:22.713 23:12:54 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:22.713 23:12:54 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:22.713 23:12:54 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:22.713 23:12:54 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:22.713 23:12:54 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:22.713 23:12:54 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:22.713 23:12:54 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:22.713 23:12:54 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:27:22.713 23:12:54 -- host/dma.sh@13 -- # exit 0 00:27:22.713 00:27:22.713 real 0m0.123s 00:27:22.713 user 0m0.044s 00:27:22.713 sys 0m0.088s 00:27:22.713 23:12:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:22.713 23:12:54 -- common/autotest_common.sh@10 -- # set +x 00:27:22.713 ************************************ 00:27:22.713 END TEST dma 00:27:22.713 ************************************ 00:27:22.713 23:12:54 -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:27:22.713 23:12:54 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:22.713 23:12:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:22.713 23:12:54 -- common/autotest_common.sh@10 
-- # set +x 00:27:22.713 ************************************ 00:27:22.713 START TEST nvmf_identify 00:27:22.713 ************************************ 00:27:22.713 23:12:54 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:27:22.713 * Looking for test storage... 00:27:22.713 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:22.713 23:12:54 -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:22.713 23:12:54 -- nvmf/common.sh@7 -- # uname -s 00:27:22.713 23:12:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:22.713 23:12:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:22.713 23:12:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:22.713 23:12:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:22.713 23:12:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:22.713 23:12:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:22.713 23:12:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:22.713 23:12:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:22.713 23:12:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:22.713 23:12:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:22.713 23:12:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:27:22.713 23:12:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:27:22.713 23:12:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:22.713 23:12:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:22.713 23:12:54 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:22.713 23:12:54 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:22.713 23:12:54 -- scripts/common.sh@433 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:27:22.713 23:12:54 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:22.713 23:12:54 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:22.713 23:12:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:22.713 23:12:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:22.713 23:12:54 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:22.713 23:12:54 -- paths/export.sh@5 -- # export PATH 00:27:22.713 
23:12:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:22.713 23:12:54 -- nvmf/common.sh@46 -- # : 0 00:27:22.713 23:12:54 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:22.713 23:12:54 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:22.713 23:12:54 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:22.713 23:12:54 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:22.713 23:12:54 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:22.713 23:12:54 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:22.713 23:12:54 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:22.713 23:12:54 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:22.713 23:12:54 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:22.713 23:12:54 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:22.713 23:12:54 -- host/identify.sh@14 -- # nvmftestinit 00:27:22.713 23:12:54 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:22.713 23:12:54 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:22.713 23:12:54 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:22.713 23:12:54 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:22.713 23:12:54 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:22.713 23:12:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:22.713 23:12:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:22.713 23:12:54 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:27:22.713 23:12:54 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:22.713 23:12:54 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:22.713 23:12:54 -- nvmf/common.sh@284 -- # xtrace_disable 00:27:22.713 23:12:54 -- common/autotest_common.sh@10 -- # set +x 00:27:29.286 23:13:01 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:29.286 23:13:01 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:29.286 23:13:01 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:29.286 23:13:01 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:29.286 23:13:01 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:29.286 23:13:01 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:29.286 23:13:01 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:27:29.286 23:13:01 -- nvmf/common.sh@294 -- # net_devs=() 00:27:29.286 23:13:01 -- nvmf/common.sh@294 -- # local -ga net_devs 00:27:29.286 23:13:01 -- nvmf/common.sh@295 -- # e810=() 00:27:29.286 23:13:01 -- nvmf/common.sh@295 -- # local -ga e810 00:27:29.286 23:13:01 -- nvmf/common.sh@296 -- # x722=() 00:27:29.286 23:13:01 -- nvmf/common.sh@296 -- # local -ga x722 00:27:29.286 23:13:01 -- nvmf/common.sh@297 -- # mlx=() 00:27:29.286 23:13:01 -- nvmf/common.sh@297 -- # local -ga mlx 00:27:29.286 23:13:01 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:29.286 23:13:01 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:29.286 23:13:01 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:29.286 23:13:01 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:29.286 23:13:01 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:29.287 23:13:01 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:29.287 23:13:01 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:29.287 23:13:01 -- nvmf/common.sh@313 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:29.287 23:13:01 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:29.287 23:13:01 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:29.287 23:13:01 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:29.287 23:13:01 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:27:29.287 23:13:01 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:27:29.287 23:13:01 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:27:29.287 23:13:01 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:27:29.287 23:13:01 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:27:29.287 23:13:01 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:29.287 23:13:01 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:29.287 23:13:01 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:29.287 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:29.287 23:13:01 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:29.287 23:13:01 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:29.287 23:13:01 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:29.287 23:13:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:29.287 23:13:01 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:29.287 23:13:01 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:29.287 23:13:01 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:29.287 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:29.287 23:13:01 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:29.287 23:13:01 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:29.287 23:13:01 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:29.287 23:13:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:29.287 23:13:01 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:29.287 23:13:01 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:29.287 23:13:01 -- 
nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:27:29.287 23:13:01 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:27:29.287 23:13:01 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:29.287 23:13:01 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:29.287 23:13:01 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:29.287 23:13:01 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:29.287 23:13:01 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:29.287 Found net devices under 0000:af:00.0: cvl_0_0 00:27:29.287 23:13:01 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:29.287 23:13:01 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:29.287 23:13:01 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:29.287 23:13:01 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:29.287 23:13:01 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:29.287 23:13:01 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:29.287 Found net devices under 0000:af:00.1: cvl_0_1 00:27:29.287 23:13:01 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:29.287 23:13:01 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:29.287 23:13:01 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:29.287 23:13:01 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:29.287 23:13:01 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:27:29.287 23:13:01 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:27:29.287 23:13:01 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:29.287 23:13:01 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:29.287 23:13:01 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:29.287 23:13:01 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:27:29.287 23:13:01 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:29.287 23:13:01 -- nvmf/common.sh@236 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:29.287 23:13:01 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:27:29.287 23:13:01 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:29.287 23:13:01 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:29.287 23:13:01 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:27:29.287 23:13:01 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:27:29.287 23:13:01 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:27:29.287 23:13:01 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:29.287 23:13:01 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:29.287 23:13:01 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:29.287 23:13:01 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:27:29.287 23:13:01 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:29.287 23:13:01 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:29.547 23:13:01 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:29.547 23:13:01 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:27:29.547 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:29.547 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.158 ms 00:27:29.547 00:27:29.547 --- 10.0.0.2 ping statistics --- 00:27:29.547 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:29.547 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:27:29.547 23:13:01 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:29.547 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:29.547 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.235 ms 00:27:29.547 00:27:29.547 --- 10.0.0.1 ping statistics --- 00:27:29.547 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:29.547 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:27:29.547 23:13:01 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:29.547 23:13:01 -- nvmf/common.sh@410 -- # return 0 00:27:29.547 23:13:01 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:29.547 23:13:01 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:29.547 23:13:01 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:27:29.547 23:13:01 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:29.547 23:13:01 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:29.547 23:13:01 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:29.547 23:13:01 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:29.547 23:13:01 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:27:29.547 23:13:01 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:29.547 23:13:01 -- common/autotest_common.sh@10 -- # set +x 00:27:29.547 23:13:01 -- host/identify.sh@19 -- # nvmfpid=3343304 00:27:29.547 23:13:01 -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:29.547 23:13:01 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:29.547 23:13:01 -- host/identify.sh@23 -- # waitforlisten 3343304 00:27:29.547 23:13:01 -- common/autotest_common.sh@819 -- # '[' -z 3343304 ']' 00:27:29.547 23:13:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:29.547 23:13:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:29.547 23:13:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:27:29.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:29.547 23:13:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:29.547 23:13:01 -- common/autotest_common.sh@10 -- # set +x 00:27:29.547 [2024-07-24 23:13:01.855053] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:27:29.547 [2024-07-24 23:13:01.855104] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:29.547 EAL: No free 2048 kB hugepages reported on node 1 00:27:29.547 [2024-07-24 23:13:01.933014] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:29.547 [2024-07-24 23:13:01.971117] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:29.547 [2024-07-24 23:13:01.971249] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:29.547 [2024-07-24 23:13:01.971262] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:29.547 [2024-07-24 23:13:01.971271] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:29.547 [2024-07-24 23:13:01.971319] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:29.547 [2024-07-24 23:13:01.971414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:29.547 [2024-07-24 23:13:01.971508] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:29.547 [2024-07-24 23:13:01.971509] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:30.488 23:13:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:30.488 23:13:02 -- common/autotest_common.sh@852 -- # return 0 00:27:30.488 23:13:02 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:30.488 23:13:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:30.488 23:13:02 -- common/autotest_common.sh@10 -- # set +x 00:27:30.488 [2024-07-24 23:13:02.646889] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:30.488 23:13:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:30.488 23:13:02 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:27:30.488 23:13:02 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:30.488 23:13:02 -- common/autotest_common.sh@10 -- # set +x 00:27:30.488 23:13:02 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:30.488 23:13:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:30.488 23:13:02 -- common/autotest_common.sh@10 -- # set +x 00:27:30.488 Malloc0 00:27:30.488 23:13:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:30.488 23:13:02 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:30.488 23:13:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:30.488 23:13:02 -- common/autotest_common.sh@10 -- # set +x 00:27:30.488 23:13:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:30.488 23:13:02 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 
--nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:27:30.488 23:13:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:30.488 23:13:02 -- common/autotest_common.sh@10 -- # set +x 00:27:30.488 23:13:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:30.488 23:13:02 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:30.488 23:13:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:30.488 23:13:02 -- common/autotest_common.sh@10 -- # set +x 00:27:30.488 [2024-07-24 23:13:02.745813] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:30.488 23:13:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:30.488 23:13:02 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:30.488 23:13:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:30.488 23:13:02 -- common/autotest_common.sh@10 -- # set +x 00:27:30.488 23:13:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:30.488 23:13:02 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:27:30.488 23:13:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:30.488 23:13:02 -- common/autotest_common.sh@10 -- # set +x 00:27:30.488 [2024-07-24 23:13:02.761608] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:27:30.488 [ 00:27:30.488 { 00:27:30.488 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:30.488 "subtype": "Discovery", 00:27:30.488 "listen_addresses": [ 00:27:30.488 { 00:27:30.488 "transport": "TCP", 00:27:30.488 "trtype": "TCP", 00:27:30.488 "adrfam": "IPv4", 00:27:30.488 "traddr": "10.0.0.2", 00:27:30.488 "trsvcid": "4420" 00:27:30.488 } 00:27:30.488 ], 00:27:30.488 "allow_any_host": true, 00:27:30.488 "hosts": [] 00:27:30.488 }, 00:27:30.488 
{ 00:27:30.488 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:30.488 "subtype": "NVMe", 00:27:30.488 "listen_addresses": [ 00:27:30.488 { 00:27:30.488 "transport": "TCP", 00:27:30.488 "trtype": "TCP", 00:27:30.488 "adrfam": "IPv4", 00:27:30.488 "traddr": "10.0.0.2", 00:27:30.488 "trsvcid": "4420" 00:27:30.488 } 00:27:30.488 ], 00:27:30.488 "allow_any_host": true, 00:27:30.488 "hosts": [], 00:27:30.488 "serial_number": "SPDK00000000000001", 00:27:30.488 "model_number": "SPDK bdev Controller", 00:27:30.488 "max_namespaces": 32, 00:27:30.488 "min_cntlid": 1, 00:27:30.488 "max_cntlid": 65519, 00:27:30.488 "namespaces": [ 00:27:30.488 { 00:27:30.488 "nsid": 1, 00:27:30.488 "bdev_name": "Malloc0", 00:27:30.488 "name": "Malloc0", 00:27:30.488 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:27:30.488 "eui64": "ABCDEF0123456789", 00:27:30.488 "uuid": "22bb1d9e-7e32-41da-8d1f-530b134ce8bc" 00:27:30.488 } 00:27:30.488 ] 00:27:30.488 } 00:27:30.488 ] 00:27:30.488 23:13:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:30.488 23:13:02 -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:27:30.488 [2024-07-24 23:13:02.804137] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:27:30.488 [2024-07-24 23:13:02.804175] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3343346 ] 00:27:30.488 EAL: No free 2048 kB hugepages reported on node 1 00:27:30.488 [2024-07-24 23:13:02.836085] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:27:30.488 [2024-07-24 23:13:02.836133] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:27:30.488 [2024-07-24 23:13:02.836139] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:27:30.488 [2024-07-24 23:13:02.836153] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:27:30.488 [2024-07-24 23:13:02.836163] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:27:30.488 [2024-07-24 23:13:02.836600] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:27:30.488 [2024-07-24 23:13:02.836636] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x13df460 0 00:27:30.488 [2024-07-24 23:13:02.850725] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:27:30.488 [2024-07-24 23:13:02.850738] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:27:30.488 [2024-07-24 23:13:02.850744] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:27:30.488 [2024-07-24 23:13:02.850749] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:27:30.488 [2024-07-24 23:13:02.850791] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:30.488 [2024-07-24 23:13:02.850798] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:27:30.488 [2024-07-24 23:13:02.850803] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13df460) 00:27:30.488 [2024-07-24 23:13:02.850817] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:27:30.488 [2024-07-24 23:13:02.850835] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x144a590, cid 0, qid 0 00:27:30.488 [2024-07-24 23:13:02.858723] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:30.488 [2024-07-24 23:13:02.858732] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:30.488 [2024-07-24 23:13:02.858737] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:30.488 [2024-07-24 23:13:02.858742] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x144a590) on tqpair=0x13df460 00:27:30.488 [2024-07-24 23:13:02.858757] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:27:30.488 [2024-07-24 23:13:02.858764] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:27:30.488 [2024-07-24 23:13:02.858771] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:27:30.488 [2024-07-24 23:13:02.858784] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:30.488 [2024-07-24 23:13:02.858789] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:30.488 [2024-07-24 23:13:02.858793] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13df460) 00:27:30.488 [2024-07-24 23:13:02.858802] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.488 [2024-07-24 23:13:02.858816] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x144a590, cid 0, qid 0 00:27:30.488 [2024-07-24 23:13:02.859088] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:30.488 [2024-07-24 23:13:02.859098] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:30.488 [2024-07-24 23:13:02.859102] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:30.488 [2024-07-24 23:13:02.859107] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x144a590) on tqpair=0x13df460 00:27:30.488 [2024-07-24 23:13:02.859114] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:27:30.488 [2024-07-24 23:13:02.859124] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:27:30.488 [2024-07-24 23:13:02.859131] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:30.488 [2024-07-24 23:13:02.859136] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:30.488 [2024-07-24 23:13:02.859141] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13df460) 00:27:30.488 [2024-07-24 23:13:02.859148] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.488 [2024-07-24 23:13:02.859159] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x144a590, cid 0, qid 0 00:27:30.488 [2024-07-24 23:13:02.859321] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:30.488 [2024-07-24 23:13:02.859328] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:30.489 [2024-07-24 23:13:02.859332] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:30.489 [2024-07-24 23:13:02.859337] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x144a590) on tqpair=0x13df460 00:27:30.489 [2024-07-24 
23:13:02.859344] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:27:30.489 [2024-07-24 23:13:02.859354] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:27:30.489 [2024-07-24 23:13:02.859362] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:30.489 [2024-07-24 23:13:02.859366] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:30.489 [2024-07-24 23:13:02.859371] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13df460) 00:27:30.489 [2024-07-24 23:13:02.859378] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.489 [2024-07-24 23:13:02.859389] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x144a590, cid 0, qid 0 00:27:30.489 [2024-07-24 23:13:02.859487] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:30.489 [2024-07-24 23:13:02.859494] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:30.489 [2024-07-24 23:13:02.859499] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:30.489 [2024-07-24 23:13:02.859503] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x144a590) on tqpair=0x13df460 00:27:30.489 [2024-07-24 23:13:02.859510] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:27:30.489 [2024-07-24 23:13:02.859522] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:30.489 [2024-07-24 23:13:02.859526] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:30.489 [2024-07-24 23:13:02.859531] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on 
tqpair(0x13df460) 00:27:30.489 [2024-07-24 23:13:02.859538] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.489 [2024-07-24 23:13:02.859549] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x144a590, cid 0, qid 0 00:27:30.489 [2024-07-24 23:13:02.859639] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:30.489 [2024-07-24 23:13:02.859646] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:30.489 [2024-07-24 23:13:02.859650] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:30.489 [2024-07-24 23:13:02.859657] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x144a590) on tqpair=0x13df460 00:27:30.489 [2024-07-24 23:13:02.859664] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:27:30.489 [2024-07-24 23:13:02.859670] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:27:30.489 [2024-07-24 23:13:02.859680] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:27:30.489 [2024-07-24 23:13:02.859786] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:27:30.489 [2024-07-24 23:13:02.859793] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:27:30.489 [2024-07-24 23:13:02.859802] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:30.489 [2024-07-24 23:13:02.859806] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:30.489 [2024-07-24 23:13:02.859811] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13df460) 00:27:30.489 [2024-07-24 23:13:02.859818] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.489 [2024-07-24 23:13:02.859830] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x144a590, cid 0, qid 0 00:27:30.489 [2024-07-24 23:13:02.859913] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:30.489 [2024-07-24 23:13:02.859920] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:30.489 [2024-07-24 23:13:02.859924] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:30.489 [2024-07-24 23:13:02.859929] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x144a590) on tqpair=0x13df460 00:27:30.489 [2024-07-24 23:13:02.859936] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:27:30.489 [2024-07-24 23:13:02.859946] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:30.489 [2024-07-24 23:13:02.859951] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:30.489 [2024-07-24 23:13:02.859955] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13df460) 00:27:30.489 [2024-07-24 23:13:02.859962] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.489 [2024-07-24 23:13:02.859973] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x144a590, cid 0, qid 0 00:27:30.489 [2024-07-24 23:13:02.860058] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:30.489 [2024-07-24 23:13:02.860065] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:30.489 [2024-07-24 23:13:02.860069] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:30.489 [2024-07-24 23:13:02.860074] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x144a590) on tqpair=0x13df460 00:27:30.489 [2024-07-24 23:13:02.860080] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:27:30.489 [2024-07-24 23:13:02.860086] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:27:30.489 [2024-07-24 23:13:02.860095] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:27:30.489 [2024-07-24 23:13:02.860110] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:27:30.489 [2024-07-24 23:13:02.860120] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:30.489 [2024-07-24 23:13:02.860125] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:30.489 [2024-07-24 23:13:02.860131] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13df460) 00:27:30.489 [2024-07-24 23:13:02.860139] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.489 [2024-07-24 23:13:02.860151] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x144a590, cid 0, qid 0 00:27:30.489 [2024-07-24 23:13:02.860263] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:30.489 [2024-07-24 23:13:02.860270] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:30.489 [2024-07-24 23:13:02.860275] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:30.489 
[2024-07-24 23:13:02.860280] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13df460): datao=0, datal=4096, cccid=0 00:27:30.489 [2024-07-24 23:13:02.860285] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x144a590) on tqpair(0x13df460): expected_datao=0, payload_size=4096 00:27:30.489 [2024-07-24 23:13:02.860408] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:30.489 [2024-07-24 23:13:02.860413] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:30.489 [2024-07-24 23:13:02.860472] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:30.489 [2024-07-24 23:13:02.860479] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:30.489 [2024-07-24 23:13:02.860483] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:30.489 [2024-07-24 23:13:02.860488] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x144a590) on tqpair=0x13df460 00:27:30.489 [2024-07-24 23:13:02.860497] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:27:30.489 [2024-07-24 23:13:02.860504] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:27:30.489 [2024-07-24 23:13:02.860511] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:27:30.489 [2024-07-24 23:13:02.860518] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:27:30.489 [2024-07-24 23:13:02.860523] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:27:30.489 [2024-07-24 23:13:02.860529] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:27:30.489 [2024-07-24 
23:13:02.860543] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:27:30.489 [2024-07-24 23:13:02.860551] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:30.489 [2024-07-24 23:13:02.860556] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:30.489 [2024-07-24 23:13:02.860560] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13df460) 00:27:30.489 [2024-07-24 23:13:02.860568] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:30.489 [2024-07-24 23:13:02.860580] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x144a590, cid 0, qid 0 00:27:30.489 [2024-07-24 23:13:02.860672] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:30.489 [2024-07-24 23:13:02.860679] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:30.489 [2024-07-24 23:13:02.860683] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:30.489 [2024-07-24 23:13:02.860688] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x144a590) on tqpair=0x13df460 00:27:30.489 [2024-07-24 23:13:02.860697] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:30.489 [2024-07-24 23:13:02.860702] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:30.489 [2024-07-24 23:13:02.860706] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13df460) 00:27:30.489 [2024-07-24 23:13:02.860713] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:30.489 [2024-07-24 23:13:02.860728] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:30.489 [2024-07-24 23:13:02.860733] nvme_tcp.c: 
893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:30.489 [2024-07-24 23:13:02.860738] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x13df460) 00:27:30.489 [2024-07-24 23:13:02.860744] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:30.489 [2024-07-24 23:13:02.860751] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:30.489 [2024-07-24 23:13:02.860755] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:30.489 [2024-07-24 23:13:02.860760] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x13df460) 00:27:30.489 [2024-07-24 23:13:02.860766] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:30.489 [2024-07-24 23:13:02.860773] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:30.489 [2024-07-24 23:13:02.860777] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:30.490 [2024-07-24 23:13:02.860782] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13df460) 00:27:30.490 [2024-07-24 23:13:02.860788] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:30.490 [2024-07-24 23:13:02.860794] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:27:30.490 [2024-07-24 23:13:02.860807] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:27:30.490 [2024-07-24 23:13:02.860814] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:30.490 [2024-07-24 23:13:02.860819] nvme_tcp.c: 
893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:30.490 [2024-07-24 23:13:02.860823] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13df460) 00:27:30.490 [2024-07-24 23:13:02.860830] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.490 [2024-07-24 23:13:02.860844] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x144a590, cid 0, qid 0 00:27:30.490 [2024-07-24 23:13:02.860849] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x144a6f0, cid 1, qid 0 00:27:30.490 [2024-07-24 23:13:02.860855] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x144a850, cid 2, qid 0 00:27:30.490 [2024-07-24 23:13:02.860860] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x144a9b0, cid 3, qid 0 00:27:30.490 [2024-07-24 23:13:02.860865] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x144ab10, cid 4, qid 0 00:27:30.490 [2024-07-24 23:13:02.861059] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:30.490 [2024-07-24 23:13:02.861066] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:30.490 [2024-07-24 23:13:02.861071] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:30.490 [2024-07-24 23:13:02.861075] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x144ab10) on tqpair=0x13df460 00:27:30.490 [2024-07-24 23:13:02.861083] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:27:30.490 [2024-07-24 23:13:02.861089] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:27:30.490 [2024-07-24 23:13:02.861100] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:30.490 [2024-07-24 
23:13:02.861105] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:30.490 [2024-07-24 23:13:02.861110] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13df460) 00:27:30.490 [2024-07-24 23:13:02.861116] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.490 [2024-07-24 23:13:02.861130] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x144ab10, cid 4, qid 0 00:27:30.490 [2024-07-24 23:13:02.861309] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:30.490 [2024-07-24 23:13:02.861315] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:30.490 [2024-07-24 23:13:02.861320] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:30.490 [2024-07-24 23:13:02.861324] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13df460): datao=0, datal=4096, cccid=4 00:27:30.490 [2024-07-24 23:13:02.861330] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x144ab10) on tqpair(0x13df460): expected_datao=0, payload_size=4096 00:27:30.490 [2024-07-24 23:13:02.861338] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:30.490 [2024-07-24 23:13:02.861343] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:30.490 [2024-07-24 23:13:02.861438] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:30.490 [2024-07-24 23:13:02.861445] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:30.490 [2024-07-24 23:13:02.861449] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:30.490 [2024-07-24 23:13:02.861454] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x144ab10) on tqpair=0x13df460 00:27:30.490 [2024-07-24 23:13:02.861469] nvme_ctrlr.c:4024:nvme_ctrlr_process_init: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:27:30.490 [2024-07-24 23:13:02.861492] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:30.490 [2024-07-24 23:13:02.861497] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:30.490 [2024-07-24 23:13:02.861502] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13df460) 00:27:30.490 [2024-07-24 23:13:02.861509] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.490 [2024-07-24 23:13:02.861517] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:30.490 [2024-07-24 23:13:02.861521] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:30.490 [2024-07-24 23:13:02.861526] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x13df460) 00:27:30.490 [2024-07-24 23:13:02.861532] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:27:30.490 [2024-07-24 23:13:02.861547] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x144ab10, cid 4, qid 0 00:27:30.490 [2024-07-24 23:13:02.861552] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x144ac70, cid 5, qid 0 00:27:30.490 [2024-07-24 23:13:02.861678] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:30.490 [2024-07-24 23:13:02.861684] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:30.490 [2024-07-24 23:13:02.861689] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:30.490 [2024-07-24 23:13:02.861693] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13df460): datao=0, datal=1024, cccid=4 00:27:30.490 [2024-07-24 23:13:02.861699] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0x144ab10) on tqpair(0x13df460): expected_datao=0, payload_size=1024 00:27:30.490 [2024-07-24 23:13:02.861707] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:30.490 [2024-07-24 23:13:02.861712] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:30.490 [2024-07-24 23:13:02.861726] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:30.490 [2024-07-24 23:13:02.861733] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:30.490 [2024-07-24 23:13:02.861737] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:30.490 [2024-07-24 23:13:02.861742] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x144ac70) on tqpair=0x13df460 00:27:30.490 [2024-07-24 23:13:02.905724] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:30.490 [2024-07-24 23:13:02.905736] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:30.490 [2024-07-24 23:13:02.905745] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:30.490 [2024-07-24 23:13:02.905750] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x144ab10) on tqpair=0x13df460 00:27:30.490 [2024-07-24 23:13:02.905763] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:30.490 [2024-07-24 23:13:02.905769] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:30.490 [2024-07-24 23:13:02.905773] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13df460) 00:27:30.490 [2024-07-24 23:13:02.905782] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.490 [2024-07-24 23:13:02.905801] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x144ab10, cid 4, qid 0 00:27:30.490 [2024-07-24 23:13:02.905977] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: 
pdu type = 7 00:27:30.490 [2024-07-24 23:13:02.905985] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:30.490 [2024-07-24 23:13:02.905989] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:30.490 [2024-07-24 23:13:02.905994] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13df460): datao=0, datal=3072, cccid=4 00:27:30.490 [2024-07-24 23:13:02.906000] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x144ab10) on tqpair(0x13df460): expected_datao=0, payload_size=3072 00:27:30.490 [2024-07-24 23:13:02.906008] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:30.490 [2024-07-24 23:13:02.906013] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:30.490 [2024-07-24 23:13:02.906125] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:30.490 [2024-07-24 23:13:02.906132] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:30.490 [2024-07-24 23:13:02.906136] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:30.490 [2024-07-24 23:13:02.906141] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x144ab10) on tqpair=0x13df460 00:27:30.490 [2024-07-24 23:13:02.906152] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:30.490 [2024-07-24 23:13:02.906157] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:30.490 [2024-07-24 23:13:02.906161] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13df460) 00:27:30.490 [2024-07-24 23:13:02.906169] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.490 [2024-07-24 23:13:02.906185] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x144ab10, cid 4, qid 0 00:27:30.490 [2024-07-24 23:13:02.906280] 
nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:30.490 [2024-07-24 23:13:02.906287] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:30.490 [2024-07-24 23:13:02.906292] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:30.490 [2024-07-24 23:13:02.906297] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13df460): datao=0, datal=8, cccid=4 00:27:30.490 [2024-07-24 23:13:02.906302] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x144ab10) on tqpair(0x13df460): expected_datao=0, payload_size=8 00:27:30.490 [2024-07-24 23:13:02.906310] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:30.490 [2024-07-24 23:13:02.906315] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:30.753 [2024-07-24 23:13:02.946868] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:30.753 [2024-07-24 23:13:02.946882] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:30.753 [2024-07-24 23:13:02.946888] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:30.753 [2024-07-24 23:13:02.946893] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x144ab10) on tqpair=0x13df460 00:27:30.753 ===================================================== 00:27:30.753 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:27:30.753 ===================================================== 00:27:30.753 Controller Capabilities/Features 00:27:30.753 ================================ 00:27:30.753 Vendor ID: 0000 00:27:30.753 Subsystem Vendor ID: 0000 00:27:30.753 Serial Number: .................... 00:27:30.753 Model Number: ........................................ 
00:27:30.753 Firmware Version: 24.01.1 00:27:30.753 Recommended Arb Burst: 0 00:27:30.753 IEEE OUI Identifier: 00 00 00 00:27:30.753 Multi-path I/O 00:27:30.753 May have multiple subsystem ports: No 00:27:30.753 May have multiple controllers: No 00:27:30.753 Associated with SR-IOV VF: No 00:27:30.753 Max Data Transfer Size: 131072 00:27:30.753 Max Number of Namespaces: 0 00:27:30.753 Max Number of I/O Queues: 1024 00:27:30.753 NVMe Specification Version (VS): 1.3 00:27:30.753 NVMe Specification Version (Identify): 1.3 00:27:30.753 Maximum Queue Entries: 128 00:27:30.753 Contiguous Queues Required: Yes 00:27:30.753 Arbitration Mechanisms Supported 00:27:30.753 Weighted Round Robin: Not Supported 00:27:30.753 Vendor Specific: Not Supported 00:27:30.753 Reset Timeout: 15000 ms 00:27:30.753 Doorbell Stride: 4 bytes 00:27:30.753 NVM Subsystem Reset: Not Supported 00:27:30.753 Command Sets Supported 00:27:30.753 NVM Command Set: Supported 00:27:30.753 Boot Partition: Not Supported 00:27:30.753 Memory Page Size Minimum: 4096 bytes 00:27:30.753 Memory Page Size Maximum: 4096 bytes 00:27:30.753 Persistent Memory Region: Not Supported 00:27:30.753 Optional Asynchronous Events Supported 00:27:30.753 Namespace Attribute Notices: Not Supported 00:27:30.753 Firmware Activation Notices: Not Supported 00:27:30.753 ANA Change Notices: Not Supported 00:27:30.753 PLE Aggregate Log Change Notices: Not Supported 00:27:30.753 LBA Status Info Alert Notices: Not Supported 00:27:30.753 EGE Aggregate Log Change Notices: Not Supported 00:27:30.753 Normal NVM Subsystem Shutdown event: Not Supported 00:27:30.753 Zone Descriptor Change Notices: Not Supported 00:27:30.753 Discovery Log Change Notices: Supported 00:27:30.753 Controller Attributes 00:27:30.753 128-bit Host Identifier: Not Supported 00:27:30.753 Non-Operational Permissive Mode: Not Supported 00:27:30.753 NVM Sets: Not Supported 00:27:30.753 Read Recovery Levels: Not Supported 00:27:30.753 Endurance Groups: Not Supported 
00:27:30.753 Predictable Latency Mode: Not Supported 00:27:30.753 Traffic Based Keep ALive: Not Supported 00:27:30.753 Namespace Granularity: Not Supported 00:27:30.753 SQ Associations: Not Supported 00:27:30.753 UUID List: Not Supported 00:27:30.753 Multi-Domain Subsystem: Not Supported 00:27:30.753 Fixed Capacity Management: Not Supported 00:27:30.753 Variable Capacity Management: Not Supported 00:27:30.753 Delete Endurance Group: Not Supported 00:27:30.753 Delete NVM Set: Not Supported 00:27:30.753 Extended LBA Formats Supported: Not Supported 00:27:30.753 Flexible Data Placement Supported: Not Supported 00:27:30.753 00:27:30.753 Controller Memory Buffer Support 00:27:30.753 ================================ 00:27:30.753 Supported: No 00:27:30.753 00:27:30.753 Persistent Memory Region Support 00:27:30.753 ================================ 00:27:30.753 Supported: No 00:27:30.753 00:27:30.753 Admin Command Set Attributes 00:27:30.753 ============================ 00:27:30.753 Security Send/Receive: Not Supported 00:27:30.753 Format NVM: Not Supported 00:27:30.753 Firmware Activate/Download: Not Supported 00:27:30.753 Namespace Management: Not Supported 00:27:30.753 Device Self-Test: Not Supported 00:27:30.753 Directives: Not Supported 00:27:30.753 NVMe-MI: Not Supported 00:27:30.753 Virtualization Management: Not Supported 00:27:30.753 Doorbell Buffer Config: Not Supported 00:27:30.753 Get LBA Status Capability: Not Supported 00:27:30.753 Command & Feature Lockdown Capability: Not Supported 00:27:30.753 Abort Command Limit: 1 00:27:30.753 Async Event Request Limit: 4 00:27:30.753 Number of Firmware Slots: N/A 00:27:30.753 Firmware Slot 1 Read-Only: N/A 00:27:30.753 Firmware Activation Without Reset: N/A 00:27:30.753 Multiple Update Detection Support: N/A 00:27:30.753 Firmware Update Granularity: No Information Provided 00:27:30.753 Per-Namespace SMART Log: No 00:27:30.753 Asymmetric Namespace Access Log Page: Not Supported 00:27:30.753 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:27:30.753 Command Effects Log Page: Not Supported 00:27:30.753 Get Log Page Extended Data: Supported 00:27:30.753 Telemetry Log Pages: Not Supported 00:27:30.753 Persistent Event Log Pages: Not Supported 00:27:30.753 Supported Log Pages Log Page: May Support 00:27:30.753 Commands Supported & Effects Log Page: Not Supported 00:27:30.753 Feature Identifiers & Effects Log Page:May Support 00:27:30.753 NVMe-MI Commands & Effects Log Page: May Support 00:27:30.753 Data Area 4 for Telemetry Log: Not Supported 00:27:30.753 Error Log Page Entries Supported: 128 00:27:30.753 Keep Alive: Not Supported 00:27:30.753 00:27:30.753 NVM Command Set Attributes 00:27:30.753 ========================== 00:27:30.753 Submission Queue Entry Size 00:27:30.753 Max: 1 00:27:30.753 Min: 1 00:27:30.753 Completion Queue Entry Size 00:27:30.753 Max: 1 00:27:30.753 Min: 1 00:27:30.753 Number of Namespaces: 0 00:27:30.753 Compare Command: Not Supported 00:27:30.753 Write Uncorrectable Command: Not Supported 00:27:30.753 Dataset Management Command: Not Supported 00:27:30.753 Write Zeroes Command: Not Supported 00:27:30.753 Set Features Save Field: Not Supported 00:27:30.753 Reservations: Not Supported 00:27:30.753 Timestamp: Not Supported 00:27:30.753 Copy: Not Supported 00:27:30.753 Volatile Write Cache: Not Present 00:27:30.753 Atomic Write Unit (Normal): 1 00:27:30.753 Atomic Write Unit (PFail): 1 00:27:30.753 Atomic Compare & Write Unit: 1 00:27:30.753 Fused Compare & Write: Supported 00:27:30.753 Scatter-Gather List 00:27:30.753 SGL Command Set: Supported 00:27:30.753 SGL Keyed: Supported 00:27:30.753 SGL Bit Bucket Descriptor: Not Supported 00:27:30.753 SGL Metadata Pointer: Not Supported 00:27:30.753 Oversized SGL: Not Supported 00:27:30.753 SGL Metadata Address: Not Supported 00:27:30.753 SGL Offset: Supported 00:27:30.753 Transport SGL Data Block: Not Supported 00:27:30.753 Replay Protected Memory Block: Not Supported 00:27:30.753 00:27:30.753 
Firmware Slot Information 00:27:30.753 ========================= 00:27:30.753 Active slot: 0 00:27:30.753 00:27:30.754 00:27:30.754 Error Log 00:27:30.754 ========= 00:27:30.754 00:27:30.754 Active Namespaces 00:27:30.754 ================= 00:27:30.754 Discovery Log Page 00:27:30.754 ================== 00:27:30.754 Generation Counter: 2 00:27:30.754 Number of Records: 2 00:27:30.754 Record Format: 0 00:27:30.754 00:27:30.754 Discovery Log Entry 0 00:27:30.754 ---------------------- 00:27:30.754 Transport Type: 3 (TCP) 00:27:30.754 Address Family: 1 (IPv4) 00:27:30.754 Subsystem Type: 3 (Current Discovery Subsystem) 00:27:30.754 Entry Flags: 00:27:30.754 Duplicate Returned Information: 1 00:27:30.754 Explicit Persistent Connection Support for Discovery: 1 00:27:30.754 Transport Requirements: 00:27:30.754 Secure Channel: Not Required 00:27:30.754 Port ID: 0 (0x0000) 00:27:30.754 Controller ID: 65535 (0xffff) 00:27:30.754 Admin Max SQ Size: 128 00:27:30.754 Transport Service Identifier: 4420 00:27:30.754 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:27:30.754 Transport Address: 10.0.0.2 00:27:30.754 Discovery Log Entry 1 00:27:30.754 ---------------------- 00:27:30.754 Transport Type: 3 (TCP) 00:27:30.754 Address Family: 1 (IPv4) 00:27:30.754 Subsystem Type: 2 (NVM Subsystem) 00:27:30.754 Entry Flags: 00:27:30.754 Duplicate Returned Information: 0 00:27:30.754 Explicit Persistent Connection Support for Discovery: 0 00:27:30.754 Transport Requirements: 00:27:30.754 Secure Channel: Not Required 00:27:30.754 Port ID: 0 (0x0000) 00:27:30.754 Controller ID: 65535 (0xffff) 00:27:30.754 Admin Max SQ Size: 128 00:27:30.754 Transport Service Identifier: 4420 00:27:30.754 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:27:30.754 Transport Address: 10.0.0.2 [2024-07-24 23:13:02.946983] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:27:30.754 [2024-07-24 23:13:02.946998] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.754 [2024-07-24 23:13:02.947008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.754 [2024-07-24 23:13:02.947015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.754 [2024-07-24 23:13:02.947022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.754 [2024-07-24 23:13:02.947031] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:30.754 [2024-07-24 23:13:02.947037] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:30.754 [2024-07-24 23:13:02.947041] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13df460) 00:27:30.754 [2024-07-24 23:13:02.947050] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.754 [2024-07-24 23:13:02.947066] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x144a9b0, cid 3, qid 0 00:27:30.754 [2024-07-24 23:13:02.947148] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:30.754 [2024-07-24 23:13:02.947155] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:30.754 [2024-07-24 23:13:02.947160] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:30.754 [2024-07-24 23:13:02.947165] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x144a9b0) on tqpair=0x13df460 00:27:30.754 [2024-07-24 23:13:02.947174] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:30.754 [2024-07-24 23:13:02.947179] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:30.754 [2024-07-24 
23:13:02.947184] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13df460) 00:27:30.754 [2024-07-24 23:13:02.947191] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.754 [2024-07-24 23:13:02.947206] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x144a9b0, cid 3, qid 0 00:27:30.754 [2024-07-24 23:13:02.947298] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:30.754 [2024-07-24 23:13:02.947305] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:30.754 [2024-07-24 23:13:02.947310] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:30.754 [2024-07-24 23:13:02.947315] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x144a9b0) on tqpair=0x13df460 00:27:30.754 [2024-07-24 23:13:02.947322] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:27:30.754 [2024-07-24 23:13:02.947328] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:27:30.754 [2024-07-24 23:13:02.947339] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:30.754 [2024-07-24 23:13:02.947344] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:30.754 [2024-07-24 23:13:02.947349] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13df460) 00:27:30.754 [2024-07-24 23:13:02.947356] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.754 [2024-07-24 23:13:02.947368] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x144a9b0, cid 3, qid 0 00:27:30.754 [2024-07-24 23:13:02.947533] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:30.754 [2024-07-24 
23:13:02.947540] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:30.754 [2024-07-24 23:13:02.947544] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:30.754 [2024-07-24 23:13:02.947549] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x144a9b0) on tqpair=0x13df460 00:27:30.754 [2024-07-24 23:13:02.947562] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:30.754 [2024-07-24 23:13:02.947567] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:30.754 [2024-07-24 23:13:02.947571] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13df460) 00:27:30.754 [2024-07-24 23:13:02.947580] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.754 [2024-07-24 23:13:02.947592] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x144a9b0, cid 3, qid 0 00:27:30.754 [2024-07-24 23:13:02.947679] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:30.754 [2024-07-24 23:13:02.947687] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:30.754 [2024-07-24 23:13:02.947692] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:30.754 [2024-07-24 23:13:02.947696] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x144a9b0) on tqpair=0x13df460 00:27:30.754 [2024-07-24 23:13:02.947708] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:30.754 [2024-07-24 23:13:02.947713] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:30.754 [2024-07-24 23:13:02.947723] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13df460) 00:27:30.754 [2024-07-24 23:13:02.947731] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.754 
[2024-07-24 23:13:02.947742] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x144a9b0, cid 3, qid 0 00:27:30.754 [2024-07-24 23:13:02.947827] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:30.754 [2024-07-24 23:13:02.947834] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:30.754 [2024-07-24 23:13:02.947838] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:30.754 [2024-07-24 23:13:02.947843] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x144a9b0) on tqpair=0x13df460 00:27:30.754 [2024-07-24 23:13:02.947855] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:30.754 [2024-07-24 23:13:02.947861] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:30.754 [2024-07-24 23:13:02.947866] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13df460) 00:27:30.754 [2024-07-24 23:13:02.947873] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.754 [2024-07-24 23:13:02.947884] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x144a9b0, cid 3, qid 0 00:27:30.754 [2024-07-24 23:13:02.947971] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:30.754 [2024-07-24 23:13:02.947978] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:30.754 [2024-07-24 23:13:02.947983] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:30.754 [2024-07-24 23:13:02.947988] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x144a9b0) on tqpair=0x13df460 00:27:30.754 [2024-07-24 23:13:02.947999] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:30.754 [2024-07-24 23:13:02.948005] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:30.754 [2024-07-24 23:13:02.948009] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13df460) 00:27:30.754 [2024-07-24 23:13:02.948016] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.754 [2024-07-24 23:13:02.948028] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x144a9b0, cid 3, qid 0 00:27:30.754 [2024-07-24 23:13:02.948111] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:30.754 [2024-07-24 23:13:02.948118] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:30.754 [2024-07-24 23:13:02.948123] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:30.754 [2024-07-24 23:13:02.948128] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x144a9b0) on tqpair=0x13df460 00:27:30.754 [2024-07-24 23:13:02.948139] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:30.754 [2024-07-24 23:13:02.948144] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:30.754 [2024-07-24 23:13:02.948149] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13df460) 00:27:30.754 [2024-07-24 23:13:02.948158] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.754 [2024-07-24 23:13:02.948169] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x144a9b0, cid 3, qid 0 00:27:30.754 [2024-07-24 23:13:02.948250] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:30.754 [2024-07-24 23:13:02.948257] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:30.754 [2024-07-24 23:13:02.948262] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:30.755 [2024-07-24 23:13:02.948267] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x144a9b0) on tqpair=0x13df460 00:27:30.755 
[2024-07-24 23:13:02.948278] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:30.755 [2024-07-24 23:13:02.948284] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:30.755 [2024-07-24 23:13:02.948288] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13df460) 00:27:30.755 [2024-07-24 23:13:02.948295] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.755 [2024-07-24 23:13:02.948307] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x144a9b0, cid 3, qid 0 00:27:30.755 [2024-07-24 23:13:02.948385] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:30.755 [2024-07-24 23:13:02.948392] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:30.755 [2024-07-24 23:13:02.948397] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:30.755 [2024-07-24 23:13:02.948402] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x144a9b0) on tqpair=0x13df460 00:27:30.755 [2024-07-24 23:13:02.948413] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:30.755 [2024-07-24 23:13:02.948418] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:30.755 [2024-07-24 23:13:02.948423] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13df460) 00:27:30.755 [2024-07-24 23:13:02.948430] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.755 [2024-07-24 23:13:02.948441] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x144a9b0, cid 3, qid 0 00:27:30.755 [2024-07-24 23:13:02.948524] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:30.755 [2024-07-24 23:13:02.948531] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:30.755 
[2024-07-24 23:13:02.948536] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:30.755 [2024-07-24 23:13:02.948541] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x144a9b0) on tqpair=0x13df460 00:27:30.755 [2024-07-24 23:13:02.948553] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:30.755 [2024-07-24 23:13:02.948558] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:30.755 [2024-07-24 23:13:02.948563] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13df460) 00:27:30.755 [2024-07-24 23:13:02.948570] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.755 [2024-07-24 23:13:02.948581] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x144a9b0, cid 3, qid 0 00:27:30.755 [2024-07-24 23:13:02.948661] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:30.755 [2024-07-24 23:13:02.948669] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:30.755 [2024-07-24 23:13:02.948673] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:30.755 [2024-07-24 23:13:02.948678] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x144a9b0) on tqpair=0x13df460 00:27:30.755 [2024-07-24 23:13:02.948690] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:30.755 [2024-07-24 23:13:02.948695] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:30.755 [2024-07-24 23:13:02.948700] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13df460) 00:27:30.755 [2024-07-24 23:13:02.948707] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.755 [2024-07-24 23:13:02.952729] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x144a9b0, 
cid 3, qid 0 00:27:30.755 [2024-07-24 23:13:02.952814] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:30.755 [2024-07-24 23:13:02.952821] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:30.755 [2024-07-24 23:13:02.952827] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:30.755 [2024-07-24 23:13:02.952831] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x144a9b0) on tqpair=0x13df460 00:27:30.755 [2024-07-24 23:13:02.952842] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:27:30.755 00:27:30.755 23:13:02 -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:27:30.755 [2024-07-24 23:13:02.988703] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:27:30.755 [2024-07-24 23:13:02.988747] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3343392 ] 00:27:30.755 EAL: No free 2048 kB hugepages reported on node 1 00:27:30.755 [2024-07-24 23:13:03.020760] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:27:30.755 [2024-07-24 23:13:03.020800] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:27:30.755 [2024-07-24 23:13:03.020806] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:27:30.755 [2024-07-24 23:13:03.020819] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:27:30.755 [2024-07-24 23:13:03.020827] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:27:30.755 [2024-07-24 23:13:03.021134] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:27:30.755 [2024-07-24 23:13:03.021160] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x8bd460 0 00:27:30.755 [2024-07-24 23:13:03.035724] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:27:30.755 [2024-07-24 23:13:03.035737] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:27:30.755 [2024-07-24 23:13:03.035742] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:27:30.755 [2024-07-24 23:13:03.035746] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:27:30.755 [2024-07-24 23:13:03.035778] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:30.755 [2024-07-24 23:13:03.035784] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:30.755 [2024-07-24 
23:13:03.035789] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8bd460) 00:27:30.755 [2024-07-24 23:13:03.035800] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:27:30.755 [2024-07-24 23:13:03.035817] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x928590, cid 0, qid 0 00:27:30.755 [2024-07-24 23:13:03.043723] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:30.755 [2024-07-24 23:13:03.043733] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:30.755 [2024-07-24 23:13:03.043737] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:30.755 [2024-07-24 23:13:03.043742] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x928590) on tqpair=0x8bd460 00:27:30.755 [2024-07-24 23:13:03.043755] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:27:30.755 [2024-07-24 23:13:03.043761] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:27:30.755 [2024-07-24 23:13:03.043771] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:27:30.755 [2024-07-24 23:13:03.043781] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:30.755 [2024-07-24 23:13:03.043786] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:30.755 [2024-07-24 23:13:03.043791] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8bd460) 00:27:30.755 [2024-07-24 23:13:03.043798] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.755 [2024-07-24 23:13:03.043812] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x928590, cid 0, qid 0 00:27:30.755 [2024-07-24 
23:13:03.043976] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:30.755 [2024-07-24 23:13:03.043983] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:30.755 [2024-07-24 23:13:03.043987] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:30.755 [2024-07-24 23:13:03.043992] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x928590) on tqpair=0x8bd460 00:27:30.755 [2024-07-24 23:13:03.043998] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:27:30.755 [2024-07-24 23:13:03.044008] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:27:30.755 [2024-07-24 23:13:03.044016] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:30.755 [2024-07-24 23:13:03.044020] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:30.755 [2024-07-24 23:13:03.044025] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8bd460) 00:27:30.755 [2024-07-24 23:13:03.044032] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.755 [2024-07-24 23:13:03.044044] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x928590, cid 0, qid 0 00:27:30.755 [2024-07-24 23:13:03.044138] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:30.755 [2024-07-24 23:13:03.044144] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:30.755 [2024-07-24 23:13:03.044149] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:30.755 [2024-07-24 23:13:03.044154] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x928590) on tqpair=0x8bd460 00:27:30.755 [2024-07-24 23:13:03.044159] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:27:30.755 [2024-07-24 23:13:03.044169] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:27:30.755 [2024-07-24 23:13:03.044177] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:30.755 [2024-07-24 23:13:03.044181] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:30.755 [2024-07-24 23:13:03.044186] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8bd460) 00:27:30.755 [2024-07-24 23:13:03.044193] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.755 [2024-07-24 23:13:03.044204] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x928590, cid 0, qid 0 00:27:30.755 [2024-07-24 23:13:03.044292] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:30.755 [2024-07-24 23:13:03.044298] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:30.755 [2024-07-24 23:13:03.044303] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:30.755 [2024-07-24 23:13:03.044308] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x928590) on tqpair=0x8bd460 00:27:30.755 [2024-07-24 23:13:03.044313] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:27:30.755 [2024-07-24 23:13:03.044324] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:30.756 [2024-07-24 23:13:03.044331] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:30.756 [2024-07-24 23:13:03.044335] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8bd460) 00:27:30.756 [2024-07-24 23:13:03.044342] nvme_qpair.c: 218:nvme_admin_qpair_print_command: 
*NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.756 [2024-07-24 23:13:03.044354] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x928590, cid 0, qid 0 00:27:30.756 [2024-07-24 23:13:03.044443] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:30.756 [2024-07-24 23:13:03.044450] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:30.756 [2024-07-24 23:13:03.044455] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:30.756 [2024-07-24 23:13:03.044459] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x928590) on tqpair=0x8bd460 00:27:30.756 [2024-07-24 23:13:03.044465] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:27:30.756 [2024-07-24 23:13:03.044471] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:27:30.756 [2024-07-24 23:13:03.044480] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:27:30.756 [2024-07-24 23:13:03.044586] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:27:30.756 [2024-07-24 23:13:03.044591] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:27:30.756 [2024-07-24 23:13:03.044599] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:30.756 [2024-07-24 23:13:03.044604] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:30.756 [2024-07-24 23:13:03.044609] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8bd460) 00:27:30.756 [2024-07-24 23:13:03.044615] nvme_qpair.c: 218:nvme_admin_qpair_print_command: 
*NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.756 [2024-07-24 23:13:03.044627] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x928590, cid 0, qid 0 00:27:30.756 [2024-07-24 23:13:03.044720] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:30.756 [2024-07-24 23:13:03.044727] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:30.756 [2024-07-24 23:13:03.044731] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:30.756 [2024-07-24 23:13:03.044736] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x928590) on tqpair=0x8bd460 00:27:30.756 [2024-07-24 23:13:03.044741] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:27:30.756 [2024-07-24 23:13:03.044752] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:30.756 [2024-07-24 23:13:03.044757] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:30.756 [2024-07-24 23:13:03.044761] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8bd460) 00:27:30.756 [2024-07-24 23:13:03.044768] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.756 [2024-07-24 23:13:03.044780] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x928590, cid 0, qid 0 00:27:30.756 [2024-07-24 23:13:03.044874] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:30.756 [2024-07-24 23:13:03.044880] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:30.756 [2024-07-24 23:13:03.044885] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:30.756 [2024-07-24 23:13:03.044889] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x928590) on tqpair=0x8bd460 00:27:30.756 [2024-07-24 
23:13:03.044895] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:27:30.756 [2024-07-24 23:13:03.044903] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:27:30.756 [2024-07-24 23:13:03.044912] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:27:30.756 [2024-07-24 23:13:03.044921] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:27:30.756 [2024-07-24 23:13:03.044930] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:30.756 [2024-07-24 23:13:03.044935] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:30.756 [2024-07-24 23:13:03.044940] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8bd460) 00:27:30.756 [2024-07-24 23:13:03.044947] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.756 [2024-07-24 23:13:03.044959] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x928590, cid 0, qid 0 00:27:30.756 [2024-07-24 23:13:03.045092] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:30.756 [2024-07-24 23:13:03.045099] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:30.756 [2024-07-24 23:13:03.045104] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:30.756 [2024-07-24 23:13:03.045108] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8bd460): datao=0, datal=4096, cccid=0 00:27:30.756 [2024-07-24 23:13:03.045114] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x928590) on 
tqpair(0x8bd460): expected_datao=0, payload_size=4096 00:27:30.756 [2024-07-24 23:13:03.045225] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:30.756 [2024-07-24 23:13:03.045230] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:30.756 [2024-07-24 23:13:03.086871] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:30.756 [2024-07-24 23:13:03.086882] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:30.756 [2024-07-24 23:13:03.086887] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:30.756 [2024-07-24 23:13:03.086892] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x928590) on tqpair=0x8bd460 00:27:30.756 [2024-07-24 23:13:03.086901] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:27:30.756 [2024-07-24 23:13:03.086907] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:27:30.756 [2024-07-24 23:13:03.086913] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:27:30.756 [2024-07-24 23:13:03.086918] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:27:30.756 [2024-07-24 23:13:03.086924] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:27:30.756 [2024-07-24 23:13:03.086930] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:27:30.756 [2024-07-24 23:13:03.086943] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:27:30.756 [2024-07-24 23:13:03.086951] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:30.756 [2024-07-24 23:13:03.086956] nvme_tcp.c: 
893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:30.756 [2024-07-24 23:13:03.086960] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8bd460) 00:27:30.756 [2024-07-24 23:13:03.086969] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:30.756 [2024-07-24 23:13:03.086983] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x928590, cid 0, qid 0 00:27:30.756 [2024-07-24 23:13:03.087068] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:30.756 [2024-07-24 23:13:03.087075] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:30.756 [2024-07-24 23:13:03.087081] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:30.756 [2024-07-24 23:13:03.087086] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x928590) on tqpair=0x8bd460 00:27:30.756 [2024-07-24 23:13:03.087093] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:30.756 [2024-07-24 23:13:03.087098] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:30.756 [2024-07-24 23:13:03.087103] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8bd460) 00:27:30.756 [2024-07-24 23:13:03.087109] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:30.756 [2024-07-24 23:13:03.087116] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:30.756 [2024-07-24 23:13:03.087121] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:30.756 [2024-07-24 23:13:03.087126] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x8bd460) 00:27:30.756 [2024-07-24 23:13:03.087132] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 
cdw10:00000000 cdw11:00000000 00:27:30.756 [2024-07-24 23:13:03.087138] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:30.756 [2024-07-24 23:13:03.087143] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:30.756 [2024-07-24 23:13:03.087148] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x8bd460) 00:27:30.756 [2024-07-24 23:13:03.087154] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:30.756 [2024-07-24 23:13:03.087160] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:30.756 [2024-07-24 23:13:03.087165] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:30.756 [2024-07-24 23:13:03.087169] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8bd460) 00:27:30.756 [2024-07-24 23:13:03.087176] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:30.756 [2024-07-24 23:13:03.087182] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:27:30.756 [2024-07-24 23:13:03.087194] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:27:30.756 [2024-07-24 23:13:03.087201] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:30.756 [2024-07-24 23:13:03.087205] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:30.756 [2024-07-24 23:13:03.087210] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8bd460) 00:27:30.756 [2024-07-24 23:13:03.087217] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.756 
[2024-07-24 23:13:03.087230] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x928590, cid 0, qid 0 00:27:30.756 [2024-07-24 23:13:03.087236] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9286f0, cid 1, qid 0 00:27:30.756 [2024-07-24 23:13:03.087241] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x928850, cid 2, qid 0 00:27:30.756 [2024-07-24 23:13:03.087246] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9289b0, cid 3, qid 0 00:27:30.756 [2024-07-24 23:13:03.087252] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x928b10, cid 4, qid 0 00:27:30.756 [2024-07-24 23:13:03.087369] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:30.756 [2024-07-24 23:13:03.087376] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:30.756 [2024-07-24 23:13:03.087381] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:30.757 [2024-07-24 23:13:03.087385] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x928b10) on tqpair=0x8bd460 00:27:30.757 [2024-07-24 23:13:03.087391] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:27:30.757 [2024-07-24 23:13:03.087399] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:27:30.757 [2024-07-24 23:13:03.087409] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:27:30.757 [2024-07-24 23:13:03.087418] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:27:30.757 [2024-07-24 23:13:03.087425] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:30.757 [2024-07-24 23:13:03.087430] nvme_tcp.c: 
893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:30.757 [2024-07-24 23:13:03.087434] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8bd460) 00:27:30.757 [2024-07-24 23:13:03.087441] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:30.757 [2024-07-24 23:13:03.087453] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x928b10, cid 4, qid 0 00:27:30.757 [2024-07-24 23:13:03.087540] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:30.757 [2024-07-24 23:13:03.087547] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:30.757 [2024-07-24 23:13:03.087551] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:30.757 [2024-07-24 23:13:03.087556] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x928b10) on tqpair=0x8bd460 00:27:30.757 [2024-07-24 23:13:03.087607] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:27:30.757 [2024-07-24 23:13:03.087618] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:27:30.757 [2024-07-24 23:13:03.087626] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:30.757 [2024-07-24 23:13:03.087631] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:30.757 [2024-07-24 23:13:03.087635] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8bd460) 00:27:30.757 [2024-07-24 23:13:03.087642] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.757 [2024-07-24 23:13:03.087654] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x928b10, cid 4, qid 0 00:27:30.757 [2024-07-24 23:13:03.091725] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:30.757 [2024-07-24 23:13:03.091733] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:30.757 [2024-07-24 23:13:03.091738] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:30.757 [2024-07-24 23:13:03.091743] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8bd460): datao=0, datal=4096, cccid=4 00:27:30.757 [2024-07-24 23:13:03.091748] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x928b10) on tqpair(0x8bd460): expected_datao=0, payload_size=4096 00:27:30.757 [2024-07-24 23:13:03.091757] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:30.757 [2024-07-24 23:13:03.091761] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:30.757 [2024-07-24 23:13:03.130724] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:30.757 [2024-07-24 23:13:03.130733] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:30.757 [2024-07-24 23:13:03.130738] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:30.757 [2024-07-24 23:13:03.130743] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x928b10) on tqpair=0x8bd460 00:27:30.757 [2024-07-24 23:13:03.130757] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:27:30.757 [2024-07-24 23:13:03.130772] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:27:30.757 [2024-07-24 23:13:03.130783] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:27:30.757 [2024-07-24 23:13:03.130793] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:30.757 [2024-07-24 23:13:03.130798] nvme_tcp.c: 
893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:30.757 [2024-07-24 23:13:03.130802] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8bd460) 00:27:30.757 [2024-07-24 23:13:03.130810] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.757 [2024-07-24 23:13:03.130826] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x928b10, cid 4, qid 0 00:27:30.757 [2024-07-24 23:13:03.130994] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:30.757 [2024-07-24 23:13:03.131001] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:30.757 [2024-07-24 23:13:03.131005] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:30.757 [2024-07-24 23:13:03.131010] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8bd460): datao=0, datal=4096, cccid=4 00:27:30.757 [2024-07-24 23:13:03.131016] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x928b10) on tqpair(0x8bd460): expected_datao=0, payload_size=4096 00:27:30.757 [2024-07-24 23:13:03.131121] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:30.757 [2024-07-24 23:13:03.131126] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:30.757 [2024-07-24 23:13:03.171943] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:30.757 [2024-07-24 23:13:03.171953] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:30.757 [2024-07-24 23:13:03.171958] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:30.757 [2024-07-24 23:13:03.171963] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x928b10) on tqpair=0x8bd460 00:27:30.757 [2024-07-24 23:13:03.171978] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify 
namespace id descriptors (timeout 30000 ms) 00:27:30.757 [2024-07-24 23:13:03.171989] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:27:30.757 [2024-07-24 23:13:03.171997] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:30.757 [2024-07-24 23:13:03.172001] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:30.757 [2024-07-24 23:13:03.172006] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8bd460) 00:27:30.757 [2024-07-24 23:13:03.172013] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.757 [2024-07-24 23:13:03.172027] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x928b10, cid 4, qid 0 00:27:30.757 [2024-07-24 23:13:03.172155] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:30.757 [2024-07-24 23:13:03.172162] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:30.757 [2024-07-24 23:13:03.172167] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:30.757 [2024-07-24 23:13:03.172171] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8bd460): datao=0, datal=4096, cccid=4 00:27:30.757 [2024-07-24 23:13:03.172177] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x928b10) on tqpair(0x8bd460): expected_datao=0, payload_size=4096 00:27:30.757 [2024-07-24 23:13:03.172185] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:30.757 [2024-07-24 23:13:03.172190] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:31.023 [2024-07-24 23:13:03.212900] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:31.023 [2024-07-24 23:13:03.212911] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type 
=5 00:27:31.023 [2024-07-24 23:13:03.212915] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:31.023 [2024-07-24 23:13:03.212920] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x928b10) on tqpair=0x8bd460 00:27:31.023 [2024-07-24 23:13:03.212930] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:27:31.023 [2024-07-24 23:13:03.212944] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:27:31.023 [2024-07-24 23:13:03.212953] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:27:31.023 [2024-07-24 23:13:03.212960] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:27:31.023 [2024-07-24 23:13:03.212967] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:27:31.023 [2024-07-24 23:13:03.212973] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:27:31.023 [2024-07-24 23:13:03.212979] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:27:31.023 [2024-07-24 23:13:03.212985] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:27:31.023 [2024-07-24 23:13:03.213001] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:31.023 [2024-07-24 23:13:03.213006] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:31.023 [2024-07-24 23:13:03.213011] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0x8bd460) 00:27:31.023 [2024-07-24 23:13:03.213018] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.023 [2024-07-24 23:13:03.213026] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:31.023 [2024-07-24 23:13:03.213030] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:31.023 [2024-07-24 23:13:03.213035] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x8bd460) 00:27:31.023 [2024-07-24 23:13:03.213041] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:27:31.023 [2024-07-24 23:13:03.213057] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x928b10, cid 4, qid 0 00:27:31.023 [2024-07-24 23:13:03.213063] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x928c70, cid 5, qid 0 00:27:31.023 [2024-07-24 23:13:03.213167] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:31.023 [2024-07-24 23:13:03.213174] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:31.023 [2024-07-24 23:13:03.213178] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:31.023 [2024-07-24 23:13:03.213183] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x928b10) on tqpair=0x8bd460 00:27:31.023 [2024-07-24 23:13:03.213190] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:31.023 [2024-07-24 23:13:03.213196] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:31.023 [2024-07-24 23:13:03.213200] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:31.023 [2024-07-24 23:13:03.213205] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x928c70) on tqpair=0x8bd460 00:27:31.023 [2024-07-24 23:13:03.213216] nvme_tcp.c: 739:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:27:31.023 [2024-07-24 23:13:03.213221] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:31.023 [2024-07-24 23:13:03.213225] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x8bd460) 00:27:31.023 [2024-07-24 23:13:03.213232] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.023 [2024-07-24 23:13:03.213243] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x928c70, cid 5, qid 0 00:27:31.023 [2024-07-24 23:13:03.213416] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:31.023 [2024-07-24 23:13:03.213422] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:31.023 [2024-07-24 23:13:03.213426] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:31.023 [2024-07-24 23:13:03.213433] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x928c70) on tqpair=0x8bd460 00:27:31.023 [2024-07-24 23:13:03.213444] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:31.023 [2024-07-24 23:13:03.213449] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:31.023 [2024-07-24 23:13:03.213453] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x8bd460) 00:27:31.023 [2024-07-24 23:13:03.213460] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.023 [2024-07-24 23:13:03.213471] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x928c70, cid 5, qid 0 00:27:31.023 [2024-07-24 23:13:03.213557] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:31.023 [2024-07-24 23:13:03.213564] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:31.023 [2024-07-24 23:13:03.213568] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:31.023 [2024-07-24 23:13:03.213573] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x928c70) on tqpair=0x8bd460 00:27:31.023 [2024-07-24 23:13:03.213583] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:31.023 [2024-07-24 23:13:03.213588] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:31.023 [2024-07-24 23:13:03.213593] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x8bd460) 00:27:31.023 [2024-07-24 23:13:03.213599] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.023 [2024-07-24 23:13:03.213611] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x928c70, cid 5, qid 0 00:27:31.023 [2024-07-24 23:13:03.213775] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:31.023 [2024-07-24 23:13:03.213782] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:31.023 [2024-07-24 23:13:03.213786] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:31.023 [2024-07-24 23:13:03.213792] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x928c70) on tqpair=0x8bd460 00:27:31.023 [2024-07-24 23:13:03.213804] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:31.023 [2024-07-24 23:13:03.213809] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:31.023 [2024-07-24 23:13:03.213814] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x8bd460) 00:27:31.023 [2024-07-24 23:13:03.213821] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.023 [2024-07-24 23:13:03.213828] nvme_tcp.c: 739:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:27:31.023 [2024-07-24 23:13:03.213833] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:31.024 [2024-07-24 23:13:03.213838] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8bd460) 00:27:31.024 [2024-07-24 23:13:03.213844] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.024 [2024-07-24 23:13:03.213852] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:31.024 [2024-07-24 23:13:03.213856] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:31.024 [2024-07-24 23:13:03.213861] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x8bd460) 00:27:31.024 [2024-07-24 23:13:03.213867] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.024 [2024-07-24 23:13:03.213875] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:31.024 [2024-07-24 23:13:03.213880] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:31.024 [2024-07-24 23:13:03.213884] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x8bd460) 00:27:31.024 [2024-07-24 23:13:03.213891] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.024 [2024-07-24 23:13:03.213906] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x928c70, cid 5, qid 0 00:27:31.024 [2024-07-24 23:13:03.213912] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x928b10, cid 4, qid 0 00:27:31.024 [2024-07-24 23:13:03.213917] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x928dd0, cid 6, qid 0 
00:27:31.024 [2024-07-24 23:13:03.213922] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x928f30, cid 7, qid 0 00:27:31.024 [2024-07-24 23:13:03.214157] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:31.024 [2024-07-24 23:13:03.214164] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:31.024 [2024-07-24 23:13:03.214168] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:31.024 [2024-07-24 23:13:03.214173] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8bd460): datao=0, datal=8192, cccid=5 00:27:31.024 [2024-07-24 23:13:03.214178] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x928c70) on tqpair(0x8bd460): expected_datao=0, payload_size=8192 00:27:31.024 [2024-07-24 23:13:03.214367] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:31.024 [2024-07-24 23:13:03.214372] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:31.024 [2024-07-24 23:13:03.214378] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:31.024 [2024-07-24 23:13:03.214384] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:31.024 [2024-07-24 23:13:03.214388] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:31.024 [2024-07-24 23:13:03.214393] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8bd460): datao=0, datal=512, cccid=4 00:27:31.024 [2024-07-24 23:13:03.214398] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x928b10) on tqpair(0x8bd460): expected_datao=0, payload_size=512 00:27:31.024 [2024-07-24 23:13:03.214406] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:31.024 [2024-07-24 23:13:03.214411] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:31.024 [2024-07-24 23:13:03.214417] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:31.024 [2024-07-24 23:13:03.214423] 
nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:31.024 [2024-07-24 23:13:03.214427] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:31.024 [2024-07-24 23:13:03.214431] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8bd460): datao=0, datal=512, cccid=6 00:27:31.024 [2024-07-24 23:13:03.214437] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x928dd0) on tqpair(0x8bd460): expected_datao=0, payload_size=512 00:27:31.024 [2024-07-24 23:13:03.214445] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:31.024 [2024-07-24 23:13:03.214449] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:31.024 [2024-07-24 23:13:03.214455] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:31.024 [2024-07-24 23:13:03.214461] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:31.024 [2024-07-24 23:13:03.214465] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:31.024 [2024-07-24 23:13:03.214470] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8bd460): datao=0, datal=4096, cccid=7 00:27:31.024 [2024-07-24 23:13:03.214475] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x928f30) on tqpair(0x8bd460): expected_datao=0, payload_size=4096 00:27:31.024 [2024-07-24 23:13:03.214483] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:31.024 [2024-07-24 23:13:03.214488] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:31.024 [2024-07-24 23:13:03.214503] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:31.024 [2024-07-24 23:13:03.214509] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:31.024 [2024-07-24 23:13:03.214513] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:31.024 [2024-07-24 23:13:03.214518] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete 
tcp_req(0x928c70) on tqpair=0x8bd460 00:27:31.024 [2024-07-24 23:13:03.214532] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:31.024 [2024-07-24 23:13:03.214538] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:31.024 [2024-07-24 23:13:03.214543] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:31.024 [2024-07-24 23:13:03.214547] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x928b10) on tqpair=0x8bd460 00:27:31.024 [2024-07-24 23:13:03.214556] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:31.024 [2024-07-24 23:13:03.214563] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:31.024 [2024-07-24 23:13:03.214567] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:31.024 [2024-07-24 23:13:03.214571] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x928dd0) on tqpair=0x8bd460 00:27:31.024 [2024-07-24 23:13:03.214579] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:31.024 [2024-07-24 23:13:03.214585] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:31.024 [2024-07-24 23:13:03.214589] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:31.024 [2024-07-24 23:13:03.214594] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x928f30) on tqpair=0x8bd460 00:27:31.024 ===================================================== 00:27:31.024 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:31.024 ===================================================== 00:27:31.024 Controller Capabilities/Features 00:27:31.024 ================================ 00:27:31.024 Vendor ID: 8086 00:27:31.024 Subsystem Vendor ID: 8086 00:27:31.024 Serial Number: SPDK00000000000001 00:27:31.024 Model Number: SPDK bdev Controller 00:27:31.024 Firmware Version: 24.01.1 00:27:31.024 Recommended Arb Burst: 6 00:27:31.024 IEEE 
OUI Identifier: e4 d2 5c 00:27:31.024 Multi-path I/O 00:27:31.024 May have multiple subsystem ports: Yes 00:27:31.024 May have multiple controllers: Yes 00:27:31.024 Associated with SR-IOV VF: No 00:27:31.024 Max Data Transfer Size: 131072 00:27:31.024 Max Number of Namespaces: 32 00:27:31.024 Max Number of I/O Queues: 127 00:27:31.025 NVMe Specification Version (VS): 1.3 00:27:31.025 NVMe Specification Version (Identify): 1.3 00:27:31.025 Maximum Queue Entries: 128 00:27:31.025 Contiguous Queues Required: Yes 00:27:31.025 Arbitration Mechanisms Supported 00:27:31.025 Weighted Round Robin: Not Supported 00:27:31.025 Vendor Specific: Not Supported 00:27:31.025 Reset Timeout: 15000 ms 00:27:31.025 Doorbell Stride: 4 bytes 00:27:31.025 NVM Subsystem Reset: Not Supported 00:27:31.025 Command Sets Supported 00:27:31.025 NVM Command Set: Supported 00:27:31.025 Boot Partition: Not Supported 00:27:31.025 Memory Page Size Minimum: 4096 bytes 00:27:31.025 Memory Page Size Maximum: 4096 bytes 00:27:31.025 Persistent Memory Region: Not Supported 00:27:31.025 Optional Asynchronous Events Supported 00:27:31.025 Namespace Attribute Notices: Supported 00:27:31.025 Firmware Activation Notices: Not Supported 00:27:31.025 ANA Change Notices: Not Supported 00:27:31.025 PLE Aggregate Log Change Notices: Not Supported 00:27:31.025 LBA Status Info Alert Notices: Not Supported 00:27:31.025 EGE Aggregate Log Change Notices: Not Supported 00:27:31.025 Normal NVM Subsystem Shutdown event: Not Supported 00:27:31.025 Zone Descriptor Change Notices: Not Supported 00:27:31.025 Discovery Log Change Notices: Not Supported 00:27:31.025 Controller Attributes 00:27:31.025 128-bit Host Identifier: Supported 00:27:31.025 Non-Operational Permissive Mode: Not Supported 00:27:31.025 NVM Sets: Not Supported 00:27:31.025 Read Recovery Levels: Not Supported 00:27:31.025 Endurance Groups: Not Supported 00:27:31.025 Predictable Latency Mode: Not Supported 00:27:31.025 Traffic Based Keep ALive: Not Supported 
00:27:31.025 Namespace Granularity: Not Supported 00:27:31.025 SQ Associations: Not Supported 00:27:31.025 UUID List: Not Supported 00:27:31.025 Multi-Domain Subsystem: Not Supported 00:27:31.025 Fixed Capacity Management: Not Supported 00:27:31.025 Variable Capacity Management: Not Supported 00:27:31.025 Delete Endurance Group: Not Supported 00:27:31.025 Delete NVM Set: Not Supported 00:27:31.025 Extended LBA Formats Supported: Not Supported 00:27:31.025 Flexible Data Placement Supported: Not Supported 00:27:31.025 00:27:31.025 Controller Memory Buffer Support 00:27:31.025 ================================ 00:27:31.025 Supported: No 00:27:31.025 00:27:31.025 Persistent Memory Region Support 00:27:31.025 ================================ 00:27:31.025 Supported: No 00:27:31.025 00:27:31.025 Admin Command Set Attributes 00:27:31.025 ============================ 00:27:31.025 Security Send/Receive: Not Supported 00:27:31.025 Format NVM: Not Supported 00:27:31.025 Firmware Activate/Download: Not Supported 00:27:31.025 Namespace Management: Not Supported 00:27:31.025 Device Self-Test: Not Supported 00:27:31.025 Directives: Not Supported 00:27:31.025 NVMe-MI: Not Supported 00:27:31.025 Virtualization Management: Not Supported 00:27:31.025 Doorbell Buffer Config: Not Supported 00:27:31.025 Get LBA Status Capability: Not Supported 00:27:31.025 Command & Feature Lockdown Capability: Not Supported 00:27:31.025 Abort Command Limit: 4 00:27:31.025 Async Event Request Limit: 4 00:27:31.025 Number of Firmware Slots: N/A 00:27:31.025 Firmware Slot 1 Read-Only: N/A 00:27:31.025 Firmware Activation Without Reset: N/A 00:27:31.025 Multiple Update Detection Support: N/A 00:27:31.025 Firmware Update Granularity: No Information Provided 00:27:31.025 Per-Namespace SMART Log: No 00:27:31.025 Asymmetric Namespace Access Log Page: Not Supported 00:27:31.025 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:27:31.025 Command Effects Log Page: Supported 00:27:31.025 Get Log Page Extended Data: 
Supported 00:27:31.025 Telemetry Log Pages: Not Supported 00:27:31.025 Persistent Event Log Pages: Not Supported 00:27:31.025 Supported Log Pages Log Page: May Support 00:27:31.025 Commands Supported & Effects Log Page: Not Supported 00:27:31.025 Feature Identifiers & Effects Log Page:May Support 00:27:31.025 NVMe-MI Commands & Effects Log Page: May Support 00:27:31.025 Data Area 4 for Telemetry Log: Not Supported 00:27:31.025 Error Log Page Entries Supported: 128 00:27:31.025 Keep Alive: Supported 00:27:31.025 Keep Alive Granularity: 10000 ms 00:27:31.025 00:27:31.025 NVM Command Set Attributes 00:27:31.025 ========================== 00:27:31.025 Submission Queue Entry Size 00:27:31.025 Max: 64 00:27:31.025 Min: 64 00:27:31.025 Completion Queue Entry Size 00:27:31.025 Max: 16 00:27:31.025 Min: 16 00:27:31.025 Number of Namespaces: 32 00:27:31.025 Compare Command: Supported 00:27:31.025 Write Uncorrectable Command: Not Supported 00:27:31.025 Dataset Management Command: Supported 00:27:31.025 Write Zeroes Command: Supported 00:27:31.025 Set Features Save Field: Not Supported 00:27:31.025 Reservations: Supported 00:27:31.025 Timestamp: Not Supported 00:27:31.025 Copy: Supported 00:27:31.025 Volatile Write Cache: Present 00:27:31.025 Atomic Write Unit (Normal): 1 00:27:31.025 Atomic Write Unit (PFail): 1 00:27:31.025 Atomic Compare & Write Unit: 1 00:27:31.025 Fused Compare & Write: Supported 00:27:31.025 Scatter-Gather List 00:27:31.025 SGL Command Set: Supported 00:27:31.025 SGL Keyed: Supported 00:27:31.025 SGL Bit Bucket Descriptor: Not Supported 00:27:31.025 SGL Metadata Pointer: Not Supported 00:27:31.025 Oversized SGL: Not Supported 00:27:31.026 SGL Metadata Address: Not Supported 00:27:31.026 SGL Offset: Supported 00:27:31.026 Transport SGL Data Block: Not Supported 00:27:31.026 Replay Protected Memory Block: Not Supported 00:27:31.026 00:27:31.026 Firmware Slot Information 00:27:31.026 ========================= 00:27:31.026 Active slot: 1 00:27:31.026 Slot 1 
Firmware Revision: 24.01.1 00:27:31.026 00:27:31.026 00:27:31.026 Commands Supported and Effects 00:27:31.026 ============================== 00:27:31.026 Admin Commands 00:27:31.026 -------------- 00:27:31.026 Get Log Page (02h): Supported 00:27:31.026 Identify (06h): Supported 00:27:31.026 Abort (08h): Supported 00:27:31.026 Set Features (09h): Supported 00:27:31.026 Get Features (0Ah): Supported 00:27:31.026 Asynchronous Event Request (0Ch): Supported 00:27:31.026 Keep Alive (18h): Supported 00:27:31.026 I/O Commands 00:27:31.026 ------------ 00:27:31.026 Flush (00h): Supported LBA-Change 00:27:31.026 Write (01h): Supported LBA-Change 00:27:31.026 Read (02h): Supported 00:27:31.026 Compare (05h): Supported 00:27:31.026 Write Zeroes (08h): Supported LBA-Change 00:27:31.026 Dataset Management (09h): Supported LBA-Change 00:27:31.026 Copy (19h): Supported LBA-Change 00:27:31.026 Unknown (79h): Supported LBA-Change 00:27:31.026 Unknown (7Ah): Supported 00:27:31.026 00:27:31.026 Error Log 00:27:31.026 ========= 00:27:31.026 00:27:31.026 Arbitration 00:27:31.026 =========== 00:27:31.026 Arbitration Burst: 1 00:27:31.026 00:27:31.026 Power Management 00:27:31.026 ================ 00:27:31.026 Number of Power States: 1 00:27:31.026 Current Power State: Power State #0 00:27:31.026 Power State #0: 00:27:31.026 Max Power: 0.00 W 00:27:31.026 Non-Operational State: Operational 00:27:31.026 Entry Latency: Not Reported 00:27:31.026 Exit Latency: Not Reported 00:27:31.026 Relative Read Throughput: 0 00:27:31.026 Relative Read Latency: 0 00:27:31.026 Relative Write Throughput: 0 00:27:31.026 Relative Write Latency: 0 00:27:31.026 Idle Power: Not Reported 00:27:31.026 Active Power: Not Reported 00:27:31.026 Non-Operational Permissive Mode: Not Supported 00:27:31.026 00:27:31.026 Health Information 00:27:31.026 ================== 00:27:31.026 Critical Warnings: 00:27:31.026 Available Spare Space: OK 00:27:31.026 Temperature: OK 00:27:31.026 Device Reliability: OK 00:27:31.026 Read 
Only: No 00:27:31.026 Volatile Memory Backup: OK 00:27:31.026 Current Temperature: 0 Kelvin (-273 Celsius) 00:27:31.026 Temperature Threshold: [2024-07-24 23:13:03.214685] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:31.026 [2024-07-24 23:13:03.214691] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:31.026 [2024-07-24 23:13:03.214695] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x8bd460) 00:27:31.026 [2024-07-24 23:13:03.214703] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.026 [2024-07-24 23:13:03.218720] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x928f30, cid 7, qid 0 00:27:31.026 [2024-07-24 23:13:03.218731] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:31.026 [2024-07-24 23:13:03.218738] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:31.026 [2024-07-24 23:13:03.218742] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:31.026 [2024-07-24 23:13:03.218747] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x928f30) on tqpair=0x8bd460 00:27:31.026 [2024-07-24 23:13:03.218778] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:27:31.026 [2024-07-24 23:13:03.218789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.026 [2024-07-24 23:13:03.218797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.026 [2024-07-24 23:13:03.218804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.026 [2024-07-24 23:13:03.218810] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.026 [2024-07-24 23:13:03.218819] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:31.026 [2024-07-24 23:13:03.218823] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:31.026 [2024-07-24 23:13:03.218828] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8bd460) 00:27:31.026 [2024-07-24 23:13:03.218835] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.026 [2024-07-24 23:13:03.218848] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9289b0, cid 3, qid 0 00:27:31.026 [2024-07-24 23:13:03.219080] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:31.026 [2024-07-24 23:13:03.219087] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:31.026 [2024-07-24 23:13:03.219091] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:31.026 [2024-07-24 23:13:03.219096] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9289b0) on tqpair=0x8bd460 00:27:31.026 [2024-07-24 23:13:03.219103] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:31.026 [2024-07-24 23:13:03.219110] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:31.026 [2024-07-24 23:13:03.219115] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8bd460) 00:27:31.026 [2024-07-24 23:13:03.219122] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.026 [2024-07-24 23:13:03.219137] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9289b0, cid 3, qid 0 00:27:31.026 [2024-07-24 23:13:03.219245] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
00:27:31.026 [2024-07-24 23:13:03.219252] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:31.026 [2024-07-24 23:13:03.219256] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:31.026 [2024-07-24 23:13:03.219261] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9289b0) on tqpair=0x8bd460 00:27:31.027 [2024-07-24 23:13:03.219266] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:27:31.027 [2024-07-24 23:13:03.219272] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:27:31.027 [2024-07-24 23:13:03.219282] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:31.027 [2024-07-24 23:13:03.219287] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:31.027 [2024-07-24 23:13:03.219292] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8bd460) 00:27:31.027 [2024-07-24 23:13:03.219299] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.027 [2024-07-24 23:13:03.219310] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9289b0, cid 3, qid 0 00:27:31.027 [2024-07-24 23:13:03.219400] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:31.027 [2024-07-24 23:13:03.219406] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:31.027 [2024-07-24 23:13:03.219411] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:31.027 [2024-07-24 23:13:03.219415] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9289b0) on tqpair=0x8bd460 00:27:31.027 [2024-07-24 23:13:03.219425] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:31.027 [2024-07-24 23:13:03.219430] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:27:31.027 [2024-07-24 23:13:03.219434] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8bd460) 00:27:31.027 [2024-07-24 23:13:03.219441] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.027 [2024-07-24 23:13:03.219452] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9289b0, cid 3, qid 0 00:27:31.027 [2024-07-24 23:13:03.219550] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:31.027 [2024-07-24 23:13:03.219557] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:31.027 [2024-07-24 23:13:03.219561] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:31.027 [2024-07-24 23:13:03.219566] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9289b0) on tqpair=0x8bd460 00:27:31.027 [2024-07-24 23:13:03.219575] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:31.027 [2024-07-24 23:13:03.219579] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:31.027 [2024-07-24 23:13:03.219584] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8bd460) 00:27:31.027 [2024-07-24 23:13:03.219591] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.027 [2024-07-24 23:13:03.219602] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9289b0, cid 3, qid 0 00:27:31.027 [2024-07-24 23:13:03.219688] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:31.027 [2024-07-24 23:13:03.219695] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:31.027 [2024-07-24 23:13:03.219699] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:31.027 [2024-07-24 23:13:03.219704] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete 
tcp_req(0x9289b0) on tqpair=0x8bd460 00:27:31.027 [2024-07-24 23:13:03.219721] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:31.027 [2024-07-24 23:13:03.219726] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:31.027 [2024-07-24 23:13:03.219731] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8bd460) 00:27:31.027 [2024-07-24 23:13:03.219738] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.027 [2024-07-24 23:13:03.219750] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9289b0, cid 3, qid 0 00:27:31.027 [2024-07-24 23:13:03.219837] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:31.027 [2024-07-24 23:13:03.219844] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:31.027 [2024-07-24 23:13:03.219848] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:31.027 [2024-07-24 23:13:03.219853] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9289b0) on tqpair=0x8bd460 00:27:31.027 [2024-07-24 23:13:03.219862] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:31.027 [2024-07-24 23:13:03.219867] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:31.027 [2024-07-24 23:13:03.219871] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8bd460) 00:27:31.027 [2024-07-24 23:13:03.219878] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.027 [2024-07-24 23:13:03.219889] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9289b0, cid 3, qid 0 00:27:31.027 [2024-07-24 23:13:03.219981] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:31.027 [2024-07-24 23:13:03.219988] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:27:31.027 [2024-07-24 23:13:03.219992] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:31.027 [2024-07-24 23:13:03.219997] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9289b0) on tqpair=0x8bd460 00:27:31.027 [2024-07-24 23:13:03.220007] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:31.027 [2024-07-24 23:13:03.220012] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:31.027 [2024-07-24 23:13:03.220016] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8bd460) 00:27:31.027 [2024-07-24 23:13:03.220023] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.027 [2024-07-24 23:13:03.220034] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9289b0, cid 3, qid 0 00:27:31.027 [2024-07-24 23:13:03.220123] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:31.027 [2024-07-24 23:13:03.220130] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:31.027 [2024-07-24 23:13:03.220134] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:31.027 [2024-07-24 23:13:03.220139] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9289b0) on tqpair=0x8bd460 00:27:31.027 [2024-07-24 23:13:03.220148] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:31.027 [2024-07-24 23:13:03.220152] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:31.027 [2024-07-24 23:13:03.220157] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8bd460) 00:27:31.027 [2024-07-24 23:13:03.220164] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.027 [2024-07-24 23:13:03.220175] nvme_tcp.c: 
872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9289b0, cid 3, qid 0 00:27:31.027 [2024-07-24 23:13:03.220261] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:31.027 [2024-07-24 23:13:03.220268] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:31.027 [2024-07-24 23:13:03.220272] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:31.027 [2024-07-24 23:13:03.220277] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9289b0) on tqpair=0x8bd460 00:27:31.027 [2024-07-24 23:13:03.220289] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:31.027 [2024-07-24 23:13:03.220294] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:31.027 [2024-07-24 23:13:03.220298] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8bd460) 00:27:31.027 [2024-07-24 23:13:03.220305] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.027 [2024-07-24 23:13:03.220316] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9289b0, cid 3, qid 0 00:27:31.027 [2024-07-24 23:13:03.220408] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:31.027 [2024-07-24 23:13:03.220415] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:31.028 [2024-07-24 23:13:03.220419] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:31.028 [2024-07-24 23:13:03.220424] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9289b0) on tqpair=0x8bd460 00:27:31.028 [2024-07-24 23:13:03.220433] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:31.028 [2024-07-24 23:13:03.220438] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:31.028 [2024-07-24 23:13:03.220442] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on 
tqpair(0x8bd460) 00:27:31.028 [2024-07-24 23:13:03.220449] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.028 [2024-07-24 23:13:03.220461] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9289b0, cid 3, qid 0 00:27:31.028 [2024-07-24 23:13:03.220548] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:31.028 [2024-07-24 23:13:03.220555] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:31.028 [2024-07-24 23:13:03.220559] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:31.028 [2024-07-24 23:13:03.220564] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9289b0) on tqpair=0x8bd460 00:27:31.028 [2024-07-24 23:13:03.220574] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:31.028 [2024-07-24 23:13:03.220579] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:31.028 [2024-07-24 23:13:03.220583] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8bd460) 00:27:31.028 [2024-07-24 23:13:03.220590] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.028 [2024-07-24 23:13:03.220601] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9289b0, cid 3, qid 0 00:27:31.028 [2024-07-24 23:13:03.220693] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:31.028 [2024-07-24 23:13:03.220700] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:31.028 [2024-07-24 23:13:03.220704] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:31.028 [2024-07-24 23:13:03.220709] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9289b0) on tqpair=0x8bd460 00:27:31.028 [2024-07-24 23:13:03.220722] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: 
enter 00:27:31.028 [2024-07-24 23:13:03.220727] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:31.028 [2024-07-24 23:13:03.220732] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8bd460) 00:27:31.028 [2024-07-24 23:13:03.220739] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.028 [2024-07-24 23:13:03.220750] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9289b0, cid 3, qid 0 00:27:31.028 [2024-07-24 23:13:03.220843] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:31.028 [2024-07-24 23:13:03.220850] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:31.028 [2024-07-24 23:13:03.220854] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:31.028 [2024-07-24 23:13:03.220859] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9289b0) on tqpair=0x8bd460 00:27:31.028 [2024-07-24 23:13:03.220868] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:31.028 [2024-07-24 23:13:03.220874] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:31.028 [2024-07-24 23:13:03.220879] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8bd460) 00:27:31.028 [2024-07-24 23:13:03.220886] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.028 [2024-07-24 23:13:03.220897] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9289b0, cid 3, qid 0 00:27:31.028 [2024-07-24 23:13:03.221063] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:31.028 [2024-07-24 23:13:03.221070] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:31.028 [2024-07-24 23:13:03.221074] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:27:31.028 [2024-07-24 23:13:03.221079] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9289b0) on tqpair=0x8bd460 00:27:31.028 [2024-07-24 23:13:03.221089] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:31.028 [2024-07-24 23:13:03.221094] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:31.028 [2024-07-24 23:13:03.221098] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8bd460) 00:27:31.028 [2024-07-24 23:13:03.221105] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.028 [2024-07-24 23:13:03.221117] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9289b0, cid 3, qid 0 00:27:31.028 [2024-07-24 23:13:03.221204] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:31.028 [2024-07-24 23:13:03.221211] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:31.028 [2024-07-24 23:13:03.221215] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:31.028 [2024-07-24 23:13:03.221220] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9289b0) on tqpair=0x8bd460 00:27:31.028 [2024-07-24 23:13:03.221229] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:31.028 [2024-07-24 23:13:03.221234] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:31.028 [2024-07-24 23:13:03.221238] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8bd460) 00:27:31.028 [2024-07-24 23:13:03.221245] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.028 [2024-07-24 23:13:03.221256] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9289b0, cid 3, qid 0 00:27:31.028 [2024-07-24 23:13:03.221346] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 5 00:27:31.028 [2024-07-24 23:13:03.221352] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:31.028 [2024-07-24 23:13:03.221357] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:31.028 [2024-07-24 23:13:03.221361] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9289b0) on tqpair=0x8bd460 00:27:31.028 [2024-07-24 23:13:03.221371] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:31.028 [2024-07-24 23:13:03.221376] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:31.028 [2024-07-24 23:13:03.221380] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8bd460) 00:27:31.028 [2024-07-24 23:13:03.221387] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.028 [2024-07-24 23:13:03.221398] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9289b0, cid 3, qid 0 00:27:31.028 [2024-07-24 23:13:03.221557] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:31.028 [2024-07-24 23:13:03.221564] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:31.028 [2024-07-24 23:13:03.221568] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:31.028 [2024-07-24 23:13:03.221573] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9289b0) on tqpair=0x8bd460 00:27:31.028 [2024-07-24 23:13:03.221583] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:31.028 [2024-07-24 23:13:03.221588] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:31.028 [2024-07-24 23:13:03.221592] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8bd460) 00:27:31.028 [2024-07-24 23:13:03.221600] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:31.028 [2024-07-24 23:13:03.221612] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9289b0, cid 3, qid 0 00:27:31.029 [2024-07-24 23:13:03.221780] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:31.029 [2024-07-24 23:13:03.221786] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:31.029 [2024-07-24 23:13:03.221791] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:31.029 [2024-07-24 23:13:03.221795] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9289b0) on tqpair=0x8bd460 00:27:31.029 [2024-07-24 23:13:03.221806] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:31.029 [2024-07-24 23:13:03.221811] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:31.029 [2024-07-24 23:13:03.221815] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8bd460) 00:27:31.029 [2024-07-24 23:13:03.221822] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.029 [2024-07-24 23:13:03.221834] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9289b0, cid 3, qid 0 00:27:31.029 [2024-07-24 23:13:03.221992] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:31.029 [2024-07-24 23:13:03.221999] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:31.029 [2024-07-24 23:13:03.222003] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:31.029 [2024-07-24 23:13:03.222007] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9289b0) on tqpair=0x8bd460 00:27:31.029 [2024-07-24 23:13:03.222018] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:31.029 [2024-07-24 23:13:03.222023] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:31.029 [2024-07-24 23:13:03.222027] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8bd460) 00:27:31.029 [2024-07-24 23:13:03.222034] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.029 [2024-07-24 23:13:03.222045] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9289b0, cid 3, qid 0 00:27:31.029 [2024-07-24 23:13:03.222135] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:31.029 [2024-07-24 23:13:03.222142] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:31.029 [2024-07-24 23:13:03.222146] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:31.029 [2024-07-24 23:13:03.222151] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9289b0) on tqpair=0x8bd460 00:27:31.029 [2024-07-24 23:13:03.222160] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:31.029 [2024-07-24 23:13:03.222165] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:31.029 [2024-07-24 23:13:03.222169] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8bd460) 00:27:31.029 [2024-07-24 23:13:03.222176] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.029 [2024-07-24 23:13:03.222187] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9289b0, cid 3, qid 0 00:27:31.029 [2024-07-24 23:13:03.222271] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:31.029 [2024-07-24 23:13:03.222278] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:31.029 [2024-07-24 23:13:03.222282] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:31.029 [2024-07-24 23:13:03.222287] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9289b0) on tqpair=0x8bd460 00:27:31.029 [2024-07-24 
23:13:03.222296] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:31.029 [2024-07-24 23:13:03.222301] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:31.029 [2024-07-24 23:13:03.222305] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8bd460) 00:27:31.029 [2024-07-24 23:13:03.222314] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.029 [2024-07-24 23:13:03.222325] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9289b0, cid 3, qid 0 00:27:31.029 [2024-07-24 23:13:03.222418] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:31.029 [2024-07-24 23:13:03.222425] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:31.029 [2024-07-24 23:13:03.222429] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:31.029 [2024-07-24 23:13:03.222434] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9289b0) on tqpair=0x8bd460 00:27:31.029 [2024-07-24 23:13:03.222443] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:31.029 [2024-07-24 23:13:03.222448] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:31.029 [2024-07-24 23:13:03.222452] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8bd460) 00:27:31.029 [2024-07-24 23:13:03.222459] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.029 [2024-07-24 23:13:03.222471] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9289b0, cid 3, qid 0 00:27:31.029 [2024-07-24 23:13:03.222631] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:31.029 [2024-07-24 23:13:03.222637] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:31.029 [2024-07-24 
23:13:03.222642] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:31.029 [2024-07-24 23:13:03.222646] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9289b0) on tqpair=0x8bd460 00:27:31.029 [2024-07-24 23:13:03.222656] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:31.029 [2024-07-24 23:13:03.222661] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:31.029 [2024-07-24 23:13:03.222666] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8bd460) 00:27:31.029 [2024-07-24 23:13:03.222673] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.029 [2024-07-24 23:13:03.222684] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9289b0, cid 3, qid 0 00:27:31.029 [2024-07-24 23:13:03.226724] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:31.029 [2024-07-24 23:13:03.226734] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:31.029 [2024-07-24 23:13:03.226738] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:31.029 [2024-07-24 23:13:03.226743] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9289b0) on tqpair=0x8bd460 00:27:31.029 [2024-07-24 23:13:03.226754] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:31.029 [2024-07-24 23:13:03.226759] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:31.029 [2024-07-24 23:13:03.226763] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8bd460) 00:27:31.029 [2024-07-24 23:13:03.226771] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.029 [2024-07-24 23:13:03.226784] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9289b0, cid 3, qid 0 
00:27:31.029 [2024-07-24 23:13:03.226947] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:31.029 [2024-07-24 23:13:03.226954] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:31.029 [2024-07-24 23:13:03.226958] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:31.029 [2024-07-24 23:13:03.226963] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9289b0) on tqpair=0x8bd460
00:27:31.029 [2024-07-24 23:13:03.226971] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds
00:27:31.029 0 Kelvin (-273 Celsius)
00:27:31.029 Available Spare: 0%
00:27:31.029 Available Spare Threshold: 0%
00:27:31.029 Life Percentage Used: 0%
00:27:31.029 Data Units Read: 0
00:27:31.029 Data Units Written: 0
00:27:31.030 Host Read Commands: 0
00:27:31.030 Host Write Commands: 0
00:27:31.030 Controller Busy Time: 0 minutes
00:27:31.030 Power Cycles: 0
00:27:31.030 Power On Hours: 0 hours
00:27:31.030 Unsafe Shutdowns: 0
00:27:31.030 Unrecoverable Media Errors: 0
00:27:31.030 Lifetime Error Log Entries: 0
00:27:31.030 Warning Temperature Time: 0 minutes
00:27:31.030 Critical Temperature Time: 0 minutes
00:27:31.030
00:27:31.030 Number of Queues
00:27:31.030 ================
00:27:31.030 Number of I/O Submission Queues: 127
00:27:31.030 Number of I/O Completion Queues: 127
00:27:31.030
00:27:31.030 Active Namespaces
00:27:31.030 =================
00:27:31.030 Namespace ID:1
00:27:31.030 Error Recovery Timeout: Unlimited
00:27:31.030 Command Set Identifier: NVM (00h)
00:27:31.030 Deallocate: Supported
00:27:31.030 Deallocated/Unwritten Error: Not Supported
00:27:31.030 Deallocated Read Value: Unknown
00:27:31.030 Deallocate in Write Zeroes: Not Supported
00:27:31.030 Deallocated Guard Field: 0xFFFF
00:27:31.030 Flush: Supported
00:27:31.030 Reservation: Supported
00:27:31.030 Namespace Sharing Capabilities: Multiple Controllers
00:27:31.030 Size (in LBAs): 131072 (0GiB)
00:27:31.030 Capacity (in LBAs): 131072 (0GiB)
00:27:31.030 Utilization (in LBAs): 131072 (0GiB)
00:27:31.030 NGUID: ABCDEF0123456789ABCDEF0123456789
00:27:31.030 EUI64: ABCDEF0123456789
00:27:31.030 UUID: 22bb1d9e-7e32-41da-8d1f-530b134ce8bc
00:27:31.030 Thin Provisioning: Not Supported
00:27:31.030 Per-NS Atomic Units: Yes
00:27:31.030 Atomic Boundary Size (Normal): 0
00:27:31.030 Atomic Boundary Size (PFail): 0
00:27:31.030 Atomic Boundary Offset: 0
00:27:31.030 Maximum Single Source Range Length: 65535
00:27:31.030 Maximum Copy Length: 65535
00:27:31.030 Maximum Source Range Count: 1
00:27:31.030 NGUID/EUI64 Never Reused: No
00:27:31.030 Namespace Write Protected: No
00:27:31.030 Number of LBA Formats: 1
00:27:31.030 Current LBA Format: LBA Format #00
00:27:31.030 LBA Format #00: Data Size: 512 Metadata Size: 0
00:27:31.030
00:27:31.030 23:13:03 -- host/identify.sh@51 -- # sync
00:27:31.030 23:13:03 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:27:31.030 23:13:03 -- common/autotest_common.sh@551 -- # xtrace_disable
00:27:31.030 23:13:03 -- common/autotest_common.sh@10 -- # set +x
00:27:31.030 23:13:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:27:31.030 23:13:03 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT
00:27:31.030 23:13:03 -- host/identify.sh@56 -- # nvmftestfini
00:27:31.030 23:13:03 -- nvmf/common.sh@476 -- # nvmfcleanup
00:27:31.030 23:13:03 -- nvmf/common.sh@116 -- # sync
00:27:31.030 23:13:03 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:27:31.030 23:13:03 -- nvmf/common.sh@119 -- # set +e
00:27:31.030 23:13:03 -- nvmf/common.sh@120 -- # for i in {1..20}
00:27:31.030 23:13:03 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:27:31.030 23:13:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:27:31.030 23:13:03 -- nvmf/common.sh@123 -- # set -e
00:27:31.030 23:13:03 -- nvmf/common.sh@124 -- # return 0
00:27:31.030 23:13:03 -- nvmf/common.sh@477 -- # '[' -n 3343304 ']'
00:27:31.030 23:13:03 -- nvmf/common.sh@478 -- # killprocess 3343304
00:27:31.030 23:13:03 -- common/autotest_common.sh@926 -- # '[' -z 3343304 ']'
00:27:31.030 23:13:03 -- common/autotest_common.sh@930 -- # kill -0 3343304
00:27:31.030 23:13:03 -- common/autotest_common.sh@931 -- # uname
00:27:31.030 23:13:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:27:31.030 23:13:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3343304
00:27:31.030 23:13:03 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:27:31.030 23:13:03 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:27:31.030 23:13:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3343304'
killing process with pid 3343304
00:27:31.031 23:13:03 -- common/autotest_common.sh@945 -- # kill 3343304
00:27:31.031 [2024-07-24 23:13:03.374064] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times
00:27:31.031 23:13:03 -- common/autotest_common.sh@950 -- # wait 3343304
00:27:31.369 23:13:03 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:27:31.369 23:13:03 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:27:31.369 23:13:03 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:27:31.369 23:13:03 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:27:31.369 23:13:03 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:27:31.369 23:13:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:27:31.369 23:13:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:27:31.369 23:13:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:27:33.274 23:13:05 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1
00:27:33.274
00:27:33.274 real 0m10.875s
00:27:33.274 user 0m8.181s
00:27:33.274 sys 0m5.786s
00:27:33.274 23:13:05 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:27:33.274 23:13:05 -- common/autotest_common.sh@10 -- # set +x
00:27:33.274 ************************************
00:27:33.274 END TEST nvmf_identify
00:27:33.274 ************************************
00:27:33.274 23:13:05 -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp
00:27:33.274 23:13:05 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']'
00:27:33.274 23:13:05 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:27:33.274 23:13:05 -- common/autotest_common.sh@10 -- # set +x
00:27:33.274 ************************************
00:27:33.274 START TEST nvmf_perf
00:27:33.274 ************************************
00:27:33.274 23:13:05 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp
00:27:33.533 * Looking for test storage...
00:27:33.533 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:33.533 23:13:05 -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:33.533 23:13:05 -- nvmf/common.sh@7 -- # uname -s 00:27:33.533 23:13:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:33.533 23:13:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:33.533 23:13:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:33.533 23:13:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:33.533 23:13:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:33.533 23:13:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:33.533 23:13:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:33.533 23:13:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:33.533 23:13:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:33.533 23:13:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:33.533 23:13:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:27:33.533 23:13:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:27:33.533 23:13:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:33.533 23:13:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:33.533 23:13:05 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:33.533 23:13:05 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:33.533 23:13:05 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:33.533 23:13:05 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:33.533 23:13:05 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:33.533 23:13:05 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:33.534 23:13:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:33.534 23:13:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:33.534 23:13:05 -- paths/export.sh@5 -- # export PATH 00:27:33.534 23:13:05 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:33.534 23:13:05 -- nvmf/common.sh@46 -- # : 0 00:27:33.534 23:13:05 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:33.534 23:13:05 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:33.534 23:13:05 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:33.534 23:13:05 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:33.534 23:13:05 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:33.534 23:13:05 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:33.534 23:13:05 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:33.534 23:13:05 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:33.534 23:13:05 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:27:33.534 23:13:05 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:27:33.534 23:13:05 -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:33.534 23:13:05 -- host/perf.sh@17 -- # nvmftestinit 00:27:33.534 23:13:05 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:33.534 23:13:05 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:33.534 23:13:05 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:33.534 23:13:05 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:33.534 23:13:05 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:33.534 23:13:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:33.534 23:13:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
00:27:33.534 23:13:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:33.534 23:13:05 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:33.534 23:13:05 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:33.534 23:13:05 -- nvmf/common.sh@284 -- # xtrace_disable 00:27:33.534 23:13:05 -- common/autotest_common.sh@10 -- # set +x 00:27:40.114 23:13:12 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:40.114 23:13:12 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:40.114 23:13:12 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:40.114 23:13:12 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:40.114 23:13:12 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:40.114 23:13:12 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:40.114 23:13:12 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:27:40.114 23:13:12 -- nvmf/common.sh@294 -- # net_devs=() 00:27:40.114 23:13:12 -- nvmf/common.sh@294 -- # local -ga net_devs 00:27:40.114 23:13:12 -- nvmf/common.sh@295 -- # e810=() 00:27:40.114 23:13:12 -- nvmf/common.sh@295 -- # local -ga e810 00:27:40.114 23:13:12 -- nvmf/common.sh@296 -- # x722=() 00:27:40.114 23:13:12 -- nvmf/common.sh@296 -- # local -ga x722 00:27:40.114 23:13:12 -- nvmf/common.sh@297 -- # mlx=() 00:27:40.114 23:13:12 -- nvmf/common.sh@297 -- # local -ga mlx 00:27:40.114 23:13:12 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:40.114 23:13:12 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:40.114 23:13:12 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:40.114 23:13:12 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:40.114 23:13:12 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:40.114 23:13:12 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:40.114 23:13:12 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:40.114 23:13:12 -- 
nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:40.114 23:13:12 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:40.114 23:13:12 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:40.114 23:13:12 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:40.114 23:13:12 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:27:40.114 23:13:12 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:27:40.114 23:13:12 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:27:40.114 23:13:12 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:27:40.114 23:13:12 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:27:40.114 23:13:12 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:40.114 23:13:12 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:40.114 23:13:12 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:40.114 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:40.114 23:13:12 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:40.114 23:13:12 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:40.114 23:13:12 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:40.114 23:13:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:40.114 23:13:12 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:40.114 23:13:12 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:40.114 23:13:12 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:40.114 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:40.114 23:13:12 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:40.114 23:13:12 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:40.114 23:13:12 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:40.114 23:13:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:40.114 23:13:12 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:40.114 23:13:12 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:40.114 
23:13:12 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:27:40.114 23:13:12 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:27:40.114 23:13:12 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:40.115 23:13:12 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:40.115 23:13:12 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:40.115 23:13:12 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:40.115 23:13:12 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:40.115 Found net devices under 0000:af:00.0: cvl_0_0 00:27:40.115 23:13:12 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:40.115 23:13:12 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:40.115 23:13:12 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:40.115 23:13:12 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:40.115 23:13:12 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:40.115 23:13:12 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:40.115 Found net devices under 0000:af:00.1: cvl_0_1 00:27:40.115 23:13:12 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:40.115 23:13:12 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:40.115 23:13:12 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:40.115 23:13:12 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:40.115 23:13:12 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:27:40.115 23:13:12 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:27:40.115 23:13:12 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:40.115 23:13:12 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:40.115 23:13:12 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:40.115 23:13:12 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:27:40.115 23:13:12 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:40.115 23:13:12 -- 
nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:27:40.115 23:13:12 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP=
00:27:40.115 23:13:12 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:27:40.115 23:13:12 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:27:40.115 23:13:12 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0
00:27:40.115 23:13:12 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1
00:27:40.115 23:13:12 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk
00:27:40.115 23:13:12 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:27:40.115 23:13:12 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:27:40.115 23:13:12 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:27:40.115 23:13:12 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up
00:27:40.115 23:13:12 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:27:40.115 23:13:12 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:27:40.115 23:13:12 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:27:40.115 23:13:12 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms
00:27:40.115
00:27:40.115 --- 10.0.0.2 ping statistics ---
00:27:40.115 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:40.115 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms
00:27:40.115 23:13:12 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:27:40.115 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.253 ms
00:27:40.115
00:27:40.115 --- 10.0.0.1 ping statistics ---
00:27:40.115 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:40.115 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms
00:27:40.115 23:13:12 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:27:40.115 23:13:12 -- nvmf/common.sh@410 -- # return 0
00:27:40.115 23:13:12 -- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:27:40.115 23:13:12 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:27:40.115 23:13:12 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:27:40.115 23:13:12 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:27:40.115 23:13:12 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:27:40.115 23:13:12 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:27:40.115 23:13:12 -- nvmf/common.sh@462 -- # modprobe nvme-tcp
00:27:40.115 23:13:12 -- host/perf.sh@18 -- # nvmfappstart -m 0xF
00:27:40.115 23:13:12 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:27:40.115 23:13:12 -- common/autotest_common.sh@712 -- # xtrace_disable
00:27:40.115 23:13:12 -- common/autotest_common.sh@10 -- # set +x
00:27:40.115 23:13:12 -- nvmf/common.sh@469 -- # nvmfpid=3347041
00:27:40.115 23:13:12 -- nvmf/common.sh@470 -- # waitforlisten 3347041
00:27:40.115 23:13:12 -- common/autotest_common.sh@819 -- # '[' -z 3347041 ']'
00:27:40.115 23:13:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:40.115 23:13:12 -- common/autotest_common.sh@824 -- # local max_retries=100
00:27:40.115 23:13:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:40.115 23:13:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:40.115 23:13:12 -- common/autotest_common.sh@10 -- # set +x 00:27:40.115 23:13:12 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:40.115 [2024-07-24 23:13:12.421331] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:27:40.115 [2024-07-24 23:13:12.421381] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:40.115 EAL: No free 2048 kB hugepages reported on node 1 00:27:40.115 [2024-07-24 23:13:12.498299] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:40.115 [2024-07-24 23:13:12.537408] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:40.115 [2024-07-24 23:13:12.537514] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:40.115 [2024-07-24 23:13:12.537524] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:40.115 [2024-07-24 23:13:12.537534] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:40.115 [2024-07-24 23:13:12.537574] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:40.115 [2024-07-24 23:13:12.537685] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:40.115 [2024-07-24 23:13:12.537704] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:40.115 [2024-07-24 23:13:12.537705] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:41.052 23:13:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:41.052 23:13:13 -- common/autotest_common.sh@852 -- # return 0 00:27:41.052 23:13:13 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:27:41.052 23:13:13 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:41.052 23:13:13 -- common/autotest_common.sh@10 -- # set +x 00:27:41.052 23:13:13 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:41.052 23:13:13 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:27:41.052 23:13:13 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:27:44.341 23:13:16 -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:27:44.341 23:13:16 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:27:44.341 23:13:16 -- host/perf.sh@30 -- # local_nvme_trid=0000:d8:00.0 00:27:44.341 23:13:16 -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:27:44.341 23:13:16 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:27:44.341 23:13:16 -- host/perf.sh@33 -- # '[' -n 0000:d8:00.0 ']' 00:27:44.341 23:13:16 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:27:44.341 23:13:16 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:27:44.341 23:13:16 -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_transport -t tcp -o
00:27:44.600 [2024-07-24 23:13:16.824051] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:27:44.600 23:13:16 -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:27:44.859 23:13:17 -- host/perf.sh@45 -- # for bdev in $bdevs
00:27:44.859 23:13:17 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:27:44.859 23:13:17 -- host/perf.sh@45 -- # for bdev in $bdevs
00:27:44.859 23:13:17 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
00:27:45.118 23:13:17 -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:27:45.118 [2024-07-24 23:13:17.540101] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:27:45.377 23:13:17 -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:27:45.377 23:13:17 -- host/perf.sh@52 -- # '[' -n 0000:d8:00.0 ']'
00:27:45.377 23:13:17 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0'
00:27:45.377 23:13:17 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']'
00:27:45.377 23:13:17 -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0'
00:27:46.756 Initializing NVMe Controllers
00:27:46.756 Attached to NVMe Controller at 0000:d8:00.0 [8086:0a54]
00:27:46.756 Associating PCIE (0000:d8:00.0) NSID 1 with lcore 0
00:27:46.756 Initialization complete. Launching workers.
00:27:46.756 ========================================================
00:27:46.756 Latency(us)
00:27:46.756 Device Information : IOPS MiB/s Average min max
00:27:46.756 PCIE (0000:d8:00.0) NSID 1 from core 0: 103893.50 405.83 307.52 33.80 4385.26
00:27:46.756 ========================================================
00:27:46.756 Total : 103893.50 405.83 307.52 33.80 4385.26
00:27:46.756
00:27:46.756 23:13:19 -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:27:46.756 EAL: No free 2048 kB hugepages reported on node 1
00:27:48.136 Initializing NVMe Controllers
00:27:48.136 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:27:48.136 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:27:48.136 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:27:48.136 Initialization complete. Launching workers.
00:27:48.136 ========================================================
00:27:48.136 Latency(us)
00:27:48.136 Device Information : IOPS MiB/s Average min max
00:27:48.136 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 73.00 0.29 14242.95 236.62 45080.00
00:27:48.136 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 60.00 0.23 17290.66 6983.91 47898.62
00:27:48.136 ========================================================
00:27:48.136 Total : 133.00 0.52 15617.85 236.62 47898.62
00:27:48.136
00:27:48.136 23:13:20 -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:27:48.136 EAL: No free 2048 kB hugepages reported on node 1
00:27:49.515 Initializing NVMe Controllers
00:27:49.515 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:27:49.515 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:27:49.515 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:27:49.515 Initialization complete. Launching workers.
00:27:49.515 ======================================================== 00:27:49.515 Latency(us) 00:27:49.515 Device Information : IOPS MiB/s Average min max 00:27:49.515 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10266.00 40.10 3117.58 590.50 9248.70 00:27:49.515 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3918.00 15.30 8211.34 7338.30 16396.96 00:27:49.515 ======================================================== 00:27:49.515 Total : 14184.00 55.41 4524.61 590.50 16396.96 00:27:49.515 00:27:49.515 23:13:21 -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:27:49.515 23:13:21 -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:27:49.515 23:13:21 -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:49.515 EAL: No free 2048 kB hugepages reported on node 1 00:27:52.051 Initializing NVMe Controllers 00:27:52.051 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:52.051 Controller IO queue size 128, less than required. 00:27:52.051 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:52.051 Controller IO queue size 128, less than required. 00:27:52.051 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:52.051 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:52.051 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:52.051 Initialization complete. Launching workers. 
00:27:52.051 ======================================================== 00:27:52.051 Latency(us) 00:27:52.051 Device Information : IOPS MiB/s Average min max 00:27:52.051 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1018.37 254.59 130985.46 71464.10 212542.13 00:27:52.051 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 589.92 147.48 228944.12 86848.62 367380.49 00:27:52.051 ======================================================== 00:27:52.051 Total : 1608.29 402.07 166916.83 71464.10 367380.49 00:27:52.051 00:27:52.051 23:13:24 -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:27:52.051 EAL: No free 2048 kB hugepages reported on node 1 00:27:52.051 No valid NVMe controllers or AIO or URING devices found 00:27:52.051 Initializing NVMe Controllers 00:27:52.051 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:52.051 Controller IO queue size 128, less than required. 00:27:52.051 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:52.051 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:27:52.051 Controller IO queue size 128, less than required. 00:27:52.051 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:52.051 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:27:52.051 WARNING: Some requested NVMe devices were skipped 00:27:52.051 23:13:24 -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:27:52.051 EAL: No free 2048 kB hugepages reported on node 1 00:27:54.642 Initializing NVMe Controllers 00:27:54.642 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:54.642 Controller IO queue size 128, less than required. 00:27:54.642 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:54.642 Controller IO queue size 128, less than required. 00:27:54.642 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:54.642 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:54.642 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:54.642 Initialization complete. Launching workers. 
00:27:54.642 00:27:54.642 ==================== 00:27:54.642 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:27:54.642 TCP transport: 00:27:54.642 polls: 41774 00:27:54.642 idle_polls: 13769 00:27:54.642 sock_completions: 28005 00:27:54.642 nvme_completions: 3889 00:27:54.642 submitted_requests: 5939 00:27:54.642 queued_requests: 1 00:27:54.642 00:27:54.642 ==================== 00:27:54.642 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:27:54.642 TCP transport: 00:27:54.642 polls: 41928 00:27:54.642 idle_polls: 13916 00:27:54.642 sock_completions: 28012 00:27:54.642 nvme_completions: 4038 00:27:54.642 submitted_requests: 6240 00:27:54.642 queued_requests: 1 00:27:54.642 ======================================================== 00:27:54.642 Latency(us) 00:27:54.642 Device Information : IOPS MiB/s Average min max 00:27:54.642 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1035.99 259.00 127936.63 73797.81 189165.83 00:27:54.642 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1072.99 268.25 123313.04 60360.76 168866.50 00:27:54.642 ======================================================== 00:27:54.642 Total : 2108.98 527.24 125584.28 60360.76 189165.83 00:27:54.642 00:27:54.642 23:13:27 -- host/perf.sh@66 -- # sync 00:27:54.642 23:13:27 -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:54.901 23:13:27 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:27:54.901 23:13:27 -- host/perf.sh@71 -- # '[' -n 0000:d8:00.0 ']' 00:27:54.901 23:13:27 -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:28:00.178 23:13:32 -- host/perf.sh@72 -- # ls_guid=656b906c-cda7-4598-9df6-b1cb54f539f7 00:28:00.178 23:13:32 -- host/perf.sh@73 -- # get_lvs_free_mb 656b906c-cda7-4598-9df6-b1cb54f539f7 
00:28:00.178 23:13:32 -- common/autotest_common.sh@1343 -- # local lvs_uuid=656b906c-cda7-4598-9df6-b1cb54f539f7 00:28:00.178 23:13:32 -- common/autotest_common.sh@1344 -- # local lvs_info 00:28:00.178 23:13:32 -- common/autotest_common.sh@1345 -- # local fc 00:28:00.178 23:13:32 -- common/autotest_common.sh@1346 -- # local cs 00:28:00.178 23:13:32 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:00.178 23:13:32 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:28:00.178 { 00:28:00.178 "uuid": "656b906c-cda7-4598-9df6-b1cb54f539f7", 00:28:00.178 "name": "lvs_0", 00:28:00.178 "base_bdev": "Nvme0n1", 00:28:00.178 "total_data_clusters": 381173, 00:28:00.178 "free_clusters": 381173, 00:28:00.178 "block_size": 512, 00:28:00.178 "cluster_size": 4194304 00:28:00.178 } 00:28:00.178 ]' 00:28:00.178 23:13:32 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="656b906c-cda7-4598-9df6-b1cb54f539f7") .free_clusters' 00:28:00.178 23:13:32 -- common/autotest_common.sh@1348 -- # fc=381173 00:28:00.178 23:13:32 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="656b906c-cda7-4598-9df6-b1cb54f539f7") .cluster_size' 00:28:00.437 23:13:32 -- common/autotest_common.sh@1349 -- # cs=4194304 00:28:00.437 23:13:32 -- common/autotest_common.sh@1352 -- # free_mb=1524692 00:28:00.437 23:13:32 -- common/autotest_common.sh@1353 -- # echo 1524692 00:28:00.437 1524692 00:28:00.437 23:13:32 -- host/perf.sh@77 -- # '[' 1524692 -gt 20480 ']' 00:28:00.437 23:13:32 -- host/perf.sh@78 -- # free_mb=20480 00:28:00.437 23:13:32 -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 656b906c-cda7-4598-9df6-b1cb54f539f7 lbd_0 20480 00:28:00.437 23:13:32 -- host/perf.sh@80 -- # lb_guid=2282458a-5bb4-4c91-8036-34a0d91a630c 00:28:00.437 23:13:32 -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore 2282458a-5bb4-4c91-8036-34a0d91a630c lvs_n_0 00:28:01.815 23:13:34 -- host/perf.sh@83 -- # ls_nested_guid=22dac6cf-c898-4e60-89e1-38d7c557ef22 00:28:01.815 23:13:34 -- host/perf.sh@84 -- # get_lvs_free_mb 22dac6cf-c898-4e60-89e1-38d7c557ef22 00:28:01.815 23:13:34 -- common/autotest_common.sh@1343 -- # local lvs_uuid=22dac6cf-c898-4e60-89e1-38d7c557ef22 00:28:01.815 23:13:34 -- common/autotest_common.sh@1344 -- # local lvs_info 00:28:01.815 23:13:34 -- common/autotest_common.sh@1345 -- # local fc 00:28:01.815 23:13:34 -- common/autotest_common.sh@1346 -- # local cs 00:28:01.815 23:13:34 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:02.074 23:13:34 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:28:02.074 { 00:28:02.074 "uuid": "656b906c-cda7-4598-9df6-b1cb54f539f7", 00:28:02.074 "name": "lvs_0", 00:28:02.074 "base_bdev": "Nvme0n1", 00:28:02.074 "total_data_clusters": 381173, 00:28:02.074 "free_clusters": 376053, 00:28:02.074 "block_size": 512, 00:28:02.074 "cluster_size": 4194304 00:28:02.074 }, 00:28:02.074 { 00:28:02.074 "uuid": "22dac6cf-c898-4e60-89e1-38d7c557ef22", 00:28:02.074 "name": "lvs_n_0", 00:28:02.074 "base_bdev": "2282458a-5bb4-4c91-8036-34a0d91a630c", 00:28:02.074 "total_data_clusters": 5114, 00:28:02.074 "free_clusters": 5114, 00:28:02.074 "block_size": 512, 00:28:02.074 "cluster_size": 4194304 00:28:02.074 } 00:28:02.074 ]' 00:28:02.074 23:13:34 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="22dac6cf-c898-4e60-89e1-38d7c557ef22") .free_clusters' 00:28:02.074 23:13:34 -- common/autotest_common.sh@1348 -- # fc=5114 00:28:02.074 23:13:34 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="22dac6cf-c898-4e60-89e1-38d7c557ef22") .cluster_size' 00:28:02.074 23:13:34 -- common/autotest_common.sh@1349 -- # cs=4194304 00:28:02.074 23:13:34 -- common/autotest_common.sh@1352 -- # free_mb=20456 00:28:02.074 23:13:34 
-- common/autotest_common.sh@1353 -- # echo 20456 00:28:02.074 20456 00:28:02.074 23:13:34 -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:28:02.074 23:13:34 -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 22dac6cf-c898-4e60-89e1-38d7c557ef22 lbd_nest_0 20456 00:28:02.333 23:13:34 -- host/perf.sh@88 -- # lb_nested_guid=5b743dfe-5a7d-4977-9653-16b534d52c82 00:28:02.334 23:13:34 -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:02.334 23:13:34 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:28:02.334 23:13:34 -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 5b743dfe-5a7d-4977-9653-16b534d52c82 00:28:02.593 23:13:34 -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:02.854 23:13:35 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:28:02.854 23:13:35 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:28:02.854 23:13:35 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:28:02.854 23:13:35 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:02.854 23:13:35 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:02.854 EAL: No free 2048 kB hugepages reported on node 1 00:28:15.065 Initializing NVMe Controllers 00:28:15.065 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:15.065 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:15.065 Initialization complete. Launching workers. 
00:28:15.065 ======================================================== 00:28:15.065 Latency(us) 00:28:15.065 Device Information : IOPS MiB/s Average min max 00:28:15.065 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 44.20 0.02 22662.54 216.07 45982.77 00:28:15.065 ======================================================== 00:28:15.065 Total : 44.20 0.02 22662.54 216.07 45982.77 00:28:15.065 00:28:15.065 23:13:45 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:15.065 23:13:45 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:15.065 EAL: No free 2048 kB hugepages reported on node 1 00:28:25.047 Initializing NVMe Controllers 00:28:25.047 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:25.047 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:25.047 Initialization complete. Launching workers. 
00:28:25.047 ======================================================== 00:28:25.047 Latency(us) 00:28:25.047 Device Information : IOPS MiB/s Average min max 00:28:25.047 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 79.40 9.93 12600.08 5031.71 50877.58 00:28:25.047 ======================================================== 00:28:25.047 Total : 79.40 9.93 12600.08 5031.71 50877.58 00:28:25.047 00:28:25.047 23:13:55 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:28:25.047 23:13:55 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:25.047 23:13:55 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:25.047 EAL: No free 2048 kB hugepages reported on node 1 00:28:35.084 Initializing NVMe Controllers 00:28:35.084 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:35.084 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:35.084 Initialization complete. Launching workers. 
00:28:35.084 ======================================================== 00:28:35.084 Latency(us) 00:28:35.084 Device Information : IOPS MiB/s Average min max 00:28:35.084 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9187.69 4.49 3483.33 274.13 9619.05 00:28:35.084 ======================================================== 00:28:35.085 Total : 9187.69 4.49 3483.33 274.13 9619.05 00:28:35.085 00:28:35.085 23:14:06 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:35.085 23:14:06 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:35.085 EAL: No free 2048 kB hugepages reported on node 1 00:28:45.066 Initializing NVMe Controllers 00:28:45.066 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:45.066 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:45.066 Initialization complete. Launching workers. 
00:28:45.066 ======================================================== 00:28:45.066 Latency(us) 00:28:45.066 Device Information : IOPS MiB/s Average min max 00:28:45.066 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1908.87 238.61 16764.21 1299.20 39047.45 00:28:45.066 ======================================================== 00:28:45.066 Total : 1908.87 238.61 16764.21 1299.20 39047.45 00:28:45.066 00:28:45.066 23:14:16 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:28:45.066 23:14:16 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:45.066 23:14:16 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:45.066 EAL: No free 2048 kB hugepages reported on node 1 00:28:55.045 Initializing NVMe Controllers 00:28:55.045 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:55.045 Controller IO queue size 128, less than required. 00:28:55.045 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:55.045 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:55.045 Initialization complete. Launching workers. 
00:28:55.045 ======================================================== 00:28:55.045 Latency(us) 00:28:55.045 Device Information : IOPS MiB/s Average min max 00:28:55.045 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15907.42 7.77 8051.81 1387.23 19642.10 00:28:55.045 ======================================================== 00:28:55.045 Total : 15907.42 7.77 8051.81 1387.23 19642.10 00:28:55.045 00:28:55.045 23:14:26 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:55.045 23:14:26 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:55.045 EAL: No free 2048 kB hugepages reported on node 1 00:29:05.027 Initializing NVMe Controllers 00:29:05.027 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:05.027 Controller IO queue size 128, less than required. 00:29:05.027 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:05.027 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:05.027 Initialization complete. Launching workers. 
00:29:05.027 ======================================================== 00:29:05.027 Latency(us) 00:29:05.027 Device Information : IOPS MiB/s Average min max 00:29:05.027 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1206.60 150.82 106350.13 22724.91 200101.12 00:29:05.027 ======================================================== 00:29:05.027 Total : 1206.60 150.82 106350.13 22724.91 200101.12 00:29:05.027 00:29:05.027 23:14:37 -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:05.027 23:14:37 -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 5b743dfe-5a7d-4977-9653-16b534d52c82 00:29:05.964 23:14:38 -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:29:05.964 23:14:38 -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 2282458a-5bb4-4c91-8036-34a0d91a630c 00:29:06.223 23:14:38 -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:29:06.482 23:14:38 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:29:06.482 23:14:38 -- host/perf.sh@114 -- # nvmftestfini 00:29:06.482 23:14:38 -- nvmf/common.sh@476 -- # nvmfcleanup 00:29:06.482 23:14:38 -- nvmf/common.sh@116 -- # sync 00:29:06.482 23:14:38 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:29:06.482 23:14:38 -- nvmf/common.sh@119 -- # set +e 00:29:06.482 23:14:38 -- nvmf/common.sh@120 -- # for i in {1..20} 00:29:06.482 23:14:38 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:29:06.482 rmmod nvme_tcp 00:29:06.482 rmmod nvme_fabrics 00:29:06.482 rmmod nvme_keyring 00:29:06.482 23:14:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:29:06.482 23:14:38 -- nvmf/common.sh@123 -- # set -e 00:29:06.482 23:14:38 -- 
nvmf/common.sh@124 -- # return 0 00:29:06.482 23:14:38 -- nvmf/common.sh@477 -- # '[' -n 3347041 ']' 00:29:06.482 23:14:38 -- nvmf/common.sh@478 -- # killprocess 3347041 00:29:06.482 23:14:38 -- common/autotest_common.sh@926 -- # '[' -z 3347041 ']' 00:29:06.482 23:14:38 -- common/autotest_common.sh@930 -- # kill -0 3347041 00:29:06.482 23:14:38 -- common/autotest_common.sh@931 -- # uname 00:29:06.482 23:14:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:06.482 23:14:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3347041 00:29:06.482 23:14:38 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:06.482 23:14:38 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:06.482 23:14:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3347041' 00:29:06.482 killing process with pid 3347041 00:29:06.482 23:14:38 -- common/autotest_common.sh@945 -- # kill 3347041 00:29:06.482 23:14:38 -- common/autotest_common.sh@950 -- # wait 3347041 00:29:08.413 23:14:40 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:29:08.413 23:14:40 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:29:08.413 23:14:40 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:29:08.413 23:14:40 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:08.413 23:14:40 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:29:08.413 23:14:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:08.413 23:14:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:08.413 23:14:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:10.951 23:14:42 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:29:10.951 00:29:10.951 real 1m37.203s 00:29:10.951 user 5m43.165s 00:29:10.951 sys 0m20.093s 00:29:10.951 23:14:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:10.951 23:14:42 -- common/autotest_common.sh@10 -- # set +x 00:29:10.951 
************************************ 00:29:10.951 END TEST nvmf_perf 00:29:10.951 ************************************ 00:29:10.951 23:14:42 -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:29:10.951 23:14:42 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:29:10.951 23:14:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:10.951 23:14:42 -- common/autotest_common.sh@10 -- # set +x 00:29:10.951 ************************************ 00:29:10.951 START TEST nvmf_fio_host 00:29:10.951 ************************************ 00:29:10.951 23:14:42 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:29:10.951 * Looking for test storage... 00:29:10.951 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:10.951 23:14:43 -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:10.951 23:14:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:10.951 23:14:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:10.951 23:14:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:10.951 23:14:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:10.951 23:14:43 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:10.951 23:14:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:10.951 23:14:43 -- paths/export.sh@5 -- # export PATH 00:29:10.951 23:14:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:10.951 23:14:43 -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:10.951 23:14:43 -- nvmf/common.sh@7 -- # uname -s 00:29:10.951 23:14:43 -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:29:10.951 23:14:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:10.951 23:14:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:10.951 23:14:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:10.951 23:14:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:10.951 23:14:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:10.951 23:14:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:10.951 23:14:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:10.951 23:14:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:10.951 23:14:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:10.951 23:14:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:29:10.951 23:14:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:29:10.951 23:14:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:10.951 23:14:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:10.951 23:14:43 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:10.951 23:14:43 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:10.951 23:14:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:10.951 23:14:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:10.951 23:14:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:10.951 23:14:43 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:10.951 23:14:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:10.951 23:14:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:10.951 23:14:43 -- paths/export.sh@5 -- # export PATH 00:29:10.951 23:14:43 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:10.951 23:14:43 -- nvmf/common.sh@46 -- # : 0 00:29:10.951 23:14:43 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:29:10.951 23:14:43 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:29:10.951 23:14:43 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:29:10.951 23:14:43 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:10.951 23:14:43 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:10.951 23:14:43 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:29:10.951 23:14:43 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:29:10.951 23:14:43 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:29:10.951 23:14:43 -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:10.951 23:14:43 -- host/fio.sh@14 -- # nvmftestinit 00:29:10.951 23:14:43 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:29:10.951 23:14:43 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:10.951 23:14:43 -- nvmf/common.sh@436 -- # prepare_net_devs 00:29:10.952 23:14:43 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:29:10.952 23:14:43 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:29:10.952 23:14:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:10.952 23:14:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:10.952 23:14:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
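The paths/export.sh output above shows the same tool directories (/opt/go, /opt/protoc, /opt/golangci) prepended to PATH once per sourcing, which is why the exported value balloons. A minimal idempotent-prepend sketch (the helper name is illustrative, not SPDK's actual export script; a demo variable is used so the real PATH is untouched):

```shell
# Idempotent prepend: only add the directory if it is not already present.
prepend_path() {
  case ":$DEMO_PATH:" in
    *":$1:"*) ;;                     # already present, no-op
    *) DEMO_PATH="$1:$DEMO_PATH" ;;  # prepend once
  esac
}
DEMO_PATH=/usr/local/bin:/usr/bin
prepend_path /opt/go/1.21.1/bin
prepend_path /opt/go/1.21.1/bin     # repeated call does not duplicate
echo "$DEMO_PATH"
```

With the guard in place, sourcing the script repeatedly leaves the value stable instead of growing.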
00:29:10.952 23:14:43 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:29:10.952 23:14:43 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:29:10.952 23:14:43 -- nvmf/common.sh@284 -- # xtrace_disable 00:29:10.952 23:14:43 -- common/autotest_common.sh@10 -- # set +x 00:29:17.521 23:14:49 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:29:17.521 23:14:49 -- nvmf/common.sh@290 -- # pci_devs=() 00:29:17.521 23:14:49 -- nvmf/common.sh@290 -- # local -a pci_devs 00:29:17.521 23:14:49 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:29:17.521 23:14:49 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:29:17.521 23:14:49 -- nvmf/common.sh@292 -- # pci_drivers=() 00:29:17.521 23:14:49 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:29:17.521 23:14:49 -- nvmf/common.sh@294 -- # net_devs=() 00:29:17.521 23:14:49 -- nvmf/common.sh@294 -- # local -ga net_devs 00:29:17.521 23:14:49 -- nvmf/common.sh@295 -- # e810=() 00:29:17.521 23:14:49 -- nvmf/common.sh@295 -- # local -ga e810 00:29:17.521 23:14:49 -- nvmf/common.sh@296 -- # x722=() 00:29:17.521 23:14:49 -- nvmf/common.sh@296 -- # local -ga x722 00:29:17.521 23:14:49 -- nvmf/common.sh@297 -- # mlx=() 00:29:17.521 23:14:49 -- nvmf/common.sh@297 -- # local -ga mlx 00:29:17.521 23:14:49 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:17.521 23:14:49 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:17.521 23:14:49 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:17.521 23:14:49 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:17.521 23:14:49 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:17.521 23:14:49 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:17.521 23:14:49 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:17.521 23:14:49 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 
00:29:17.521 23:14:49 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:17.521 23:14:49 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:17.521 23:14:49 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:17.521 23:14:49 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:29:17.521 23:14:49 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:29:17.521 23:14:49 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:29:17.521 23:14:49 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:29:17.521 23:14:49 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:29:17.521 23:14:49 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:29:17.521 23:14:49 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:17.521 23:14:49 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:17.521 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:17.521 23:14:49 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:17.521 23:14:49 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:17.521 23:14:49 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:17.521 23:14:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:17.521 23:14:49 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:17.521 23:14:49 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:17.521 23:14:49 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:17.521 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:17.521 23:14:49 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:17.521 23:14:49 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:17.521 23:14:49 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:17.521 23:14:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:17.521 23:14:49 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:17.521 23:14:49 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:29:17.521 23:14:49 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:29:17.521 
23:14:49 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:29:17.521 23:14:49 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:17.521 23:14:49 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:17.521 23:14:49 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:17.521 23:14:49 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:17.521 23:14:49 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:17.521 Found net devices under 0000:af:00.0: cvl_0_0 00:29:17.521 23:14:49 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:17.521 23:14:49 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:17.521 23:14:49 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:17.521 23:14:49 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:17.521 23:14:49 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:17.521 23:14:49 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:17.521 Found net devices under 0000:af:00.1: cvl_0_1 00:29:17.521 23:14:49 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:17.521 23:14:49 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:29:17.521 23:14:49 -- nvmf/common.sh@402 -- # is_hw=yes 00:29:17.521 23:14:49 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:29:17.521 23:14:49 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:29:17.521 23:14:49 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:29:17.521 23:14:49 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:17.521 23:14:49 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:17.521 23:14:49 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:17.521 23:14:49 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:29:17.521 23:14:49 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:17.521 23:14:49 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:17.521 23:14:49 -- 
nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:29:17.521 23:14:49 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:17.521 23:14:49 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:17.521 23:14:49 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:29:17.521 23:14:49 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:29:17.521 23:14:49 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:29:17.521 23:14:49 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:17.521 23:14:49 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:17.521 23:14:49 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:17.521 23:14:49 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:29:17.521 23:14:49 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:17.521 23:14:49 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:17.521 23:14:49 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:17.521 23:14:49 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:29:17.521 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:17.521 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.164 ms 00:29:17.521 00:29:17.521 --- 10.0.0.2 ping statistics --- 00:29:17.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:17.521 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:29:17.521 23:14:49 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:17.521 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:17.521 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:29:17.521 00:29:17.521 --- 10.0.0.1 ping statistics --- 00:29:17.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:17.521 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:29:17.521 23:14:49 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:17.521 23:14:49 -- nvmf/common.sh@410 -- # return 0 00:29:17.521 23:14:49 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:29:17.521 23:14:49 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:17.521 23:14:49 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:29:17.521 23:14:49 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:29:17.521 23:14:49 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:17.521 23:14:49 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:29:17.521 23:14:49 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:29:17.521 23:14:49 -- host/fio.sh@16 -- # [[ y != y ]] 00:29:17.521 23:14:49 -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:29:17.521 23:14:49 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:17.521 23:14:49 -- common/autotest_common.sh@10 -- # set +x 00:29:17.521 23:14:49 -- host/fio.sh@24 -- # nvmfpid=3365264 00:29:17.521 23:14:49 -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:17.521 23:14:49 -- host/fio.sh@28 -- # waitforlisten 3365264 00:29:17.521 23:14:49 -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:17.521 23:14:49 -- common/autotest_common.sh@819 -- # '[' -z 3365264 ']' 00:29:17.521 23:14:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:17.521 23:14:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:17.521 23:14:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:29:17.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:17.521 23:14:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:17.521 23:14:49 -- common/autotest_common.sh@10 -- # set +x 00:29:17.521 [2024-07-24 23:14:49.632227] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:29:17.521 [2024-07-24 23:14:49.632274] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:17.521 EAL: No free 2048 kB hugepages reported on node 1 00:29:17.521 [2024-07-24 23:14:49.708857] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:17.521 [2024-07-24 23:14:49.747374] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:17.521 [2024-07-24 23:14:49.747486] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:17.521 [2024-07-24 23:14:49.747496] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:17.521 [2024-07-24 23:14:49.747505] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
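The nvmf_tcp_init sequence above isolates the target-side port in a network namespace so initiator and target traverse a real TCP path on a single host, verified by the two pings. A dry-run sketch of that sequence (interface and namespace names are placeholders, not the cvl_0_* names from this run; the real commands need root, so this prints them instead of executing):

```shell
netns=spdk_tgt_ns tgt_if=tgt0 ini_if=ini0
cmds=(
  "ip netns add $netns"
  "ip link set $tgt_if netns $netns"
  "ip addr add 10.0.0.1/24 dev $ini_if"
  "ip netns exec $netns ip addr add 10.0.0.2/24 dev $tgt_if"
  "ip link set $ini_if up"
  "ip netns exec $netns ip link set $tgt_if up"
  "ip netns exec $netns ip link set lo up"
  "iptables -I INPUT 1 -i $ini_if -p tcp --dport 4420 -j ACCEPT"
  "ping -c 1 10.0.0.2"
)
printf '%s\n' "${cmds[@]}"   # dry run: list the commands, do not execute
```

Once the target binary is launched under `ip netns exec`, it listens on 10.0.0.2:4420 inside the namespace while the initiator connects from 10.0.0.1 outside it.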
00:29:17.521 [2024-07-24 23:14:49.747549] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:17.521 [2024-07-24 23:14:49.747570] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:17.521 [2024-07-24 23:14:49.747658] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:17.521 [2024-07-24 23:14:49.747661] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:18.088 23:14:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:18.088 23:14:50 -- common/autotest_common.sh@852 -- # return 0 00:29:18.088 23:14:50 -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:18.346 [2024-07-24 23:14:50.566365] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:18.346 23:14:50 -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:29:18.346 23:14:50 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:18.346 23:14:50 -- common/autotest_common.sh@10 -- # set +x 00:29:18.346 23:14:50 -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:29:18.604 Malloc1 00:29:18.604 23:14:50 -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:18.604 23:14:51 -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:29:18.862 23:14:51 -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:19.120 [2024-07-24 23:14:51.331726] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:19.120 23:14:51 -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:19.120 23:14:51 -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:29:19.120 23:14:51 -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:19.120 23:14:51 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:19.120 23:14:51 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:29:19.120 23:14:51 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:19.120 23:14:51 -- common/autotest_common.sh@1318 -- # local sanitizers 00:29:19.120 23:14:51 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:19.120 23:14:51 -- common/autotest_common.sh@1320 -- # shift 00:29:19.120 23:14:51 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:29:19.120 23:14:51 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:29:19.403 23:14:51 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:19.403 23:14:51 -- common/autotest_common.sh@1324 -- # grep libasan 00:29:19.403 23:14:51 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:29:19.403 23:14:51 -- common/autotest_common.sh@1324 -- # asan_lib= 00:29:19.403 23:14:51 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:29:19.403 23:14:51 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:29:19.403 23:14:51 -- common/autotest_common.sh@1324 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:19.403 23:14:51 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:29:19.403 23:14:51 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:29:19.403 23:14:51 -- common/autotest_common.sh@1324 -- # asan_lib= 00:29:19.403 23:14:51 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:29:19.403 23:14:51 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:19.403 23:14:51 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:19.668 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:19.668 fio-3.35 00:29:19.668 Starting 1 thread 00:29:19.668 EAL: No free 2048 kB hugepages reported on node 1 00:29:22.201 00:29:22.201 test: (groupid=0, jobs=1): err= 0: pid=3365899: Wed Jul 24 23:14:54 2024 00:29:22.201 read: IOPS=13.1k, BW=51.0MiB/s (53.5MB/s)(102MiB/2005msec) 00:29:22.201 slat (nsec): min=1484, max=243456, avg=1608.09, stdev=2178.41 00:29:22.201 clat (usec): min=3111, max=9648, avg=5434.83, stdev=425.12 00:29:22.201 lat (usec): min=3114, max=9649, avg=5436.44, stdev=425.19 00:29:22.201 clat percentiles (usec): 00:29:22.201 | 1.00th=[ 4424], 5.00th=[ 4752], 10.00th=[ 4948], 20.00th=[ 5145], 00:29:22.201 | 30.00th=[ 5211], 40.00th=[ 5342], 50.00th=[ 5407], 60.00th=[ 5538], 00:29:22.201 | 70.00th=[ 5604], 80.00th=[ 5735], 90.00th=[ 5932], 95.00th=[ 6063], 00:29:22.201 | 99.00th=[ 6390], 99.50th=[ 6652], 99.90th=[ 8356], 99.95th=[ 8586], 00:29:22.201 | 99.99th=[ 9634] 00:29:22.201 bw ( KiB/s): min=51776, max=52568, per=100.00%, avg=52248.00, stdev=335.43, samples=4 00:29:22.201 iops : min=12944, max=13142, avg=13062.00, stdev=83.86, samples=4 00:29:22.201 write: IOPS=13.1k, 
BW=51.0MiB/s (53.5MB/s)(102MiB/2005msec); 0 zone resets 00:29:22.201 slat (nsec): min=1537, max=226033, avg=1696.77, stdev=1548.35 00:29:22.201 clat (usec): min=2466, max=8520, avg=4340.45, stdev=365.07 00:29:22.201 lat (usec): min=2481, max=8521, avg=4342.14, stdev=365.20 00:29:22.201 clat percentiles (usec): 00:29:22.201 | 1.00th=[ 3458], 5.00th=[ 3818], 10.00th=[ 3916], 20.00th=[ 4080], 00:29:22.201 | 30.00th=[ 4178], 40.00th=[ 4293], 50.00th=[ 4359], 60.00th=[ 4424], 00:29:22.201 | 70.00th=[ 4490], 80.00th=[ 4621], 90.00th=[ 4752], 95.00th=[ 4817], 00:29:22.201 | 99.00th=[ 5080], 99.50th=[ 5342], 99.90th=[ 6980], 99.95th=[ 7767], 00:29:22.201 | 99.99th=[ 8455] 00:29:22.201 bw ( KiB/s): min=52064, max=52544, per=99.99%, avg=52248.00, stdev=220.74, samples=4 00:29:22.201 iops : min=13016, max=13136, avg=13062.00, stdev=55.18, samples=4 00:29:22.201 lat (msec) : 4=7.39%, 10=92.61% 00:29:22.201 cpu : usr=60.23%, sys=33.48%, ctx=33, majf=0, minf=4 00:29:22.201 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:29:22.201 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:22.201 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:22.201 issued rwts: total=26183,26192,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:22.201 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:22.201 00:29:22.201 Run status group 0 (all jobs): 00:29:22.201 READ: bw=51.0MiB/s (53.5MB/s), 51.0MiB/s-51.0MiB/s (53.5MB/s-53.5MB/s), io=102MiB (107MB), run=2005-2005msec 00:29:22.201 WRITE: bw=51.0MiB/s (53.5MB/s), 51.0MiB/s-51.0MiB/s (53.5MB/s-53.5MB/s), io=102MiB (107MB), run=2005-2005msec 00:29:22.201 23:14:54 -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:29:22.201 23:14:54 -- common/autotest_common.sh@1339 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:29:22.201 23:14:54 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:29:22.201 23:14:54 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:22.201 23:14:54 -- common/autotest_common.sh@1318 -- # local sanitizers 00:29:22.201 23:14:54 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:22.201 23:14:54 -- common/autotest_common.sh@1320 -- # shift 00:29:22.201 23:14:54 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:29:22.201 23:14:54 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:29:22.201 23:14:54 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:22.201 23:14:54 -- common/autotest_common.sh@1324 -- # grep libasan 00:29:22.201 23:14:54 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:29:22.201 23:14:54 -- common/autotest_common.sh@1324 -- # asan_lib= 00:29:22.201 23:14:54 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:29:22.201 23:14:54 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:29:22.201 23:14:54 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:22.201 23:14:54 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:29:22.201 23:14:54 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:29:22.201 23:14:54 -- common/autotest_common.sh@1324 -- # asan_lib= 00:29:22.201 23:14:54 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:29:22.201 23:14:54 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 
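The fio_plugin helper above runs the SPDK fio plugin through `ldd | grep | awk '{print $3}'` for each known sanitizer runtime: if the plugin was built with ASan, that runtime must lead LD_PRELOAD so fio loads it before the plugin. A sketch of the probe (the function name is illustrative, and /bin/ls stands in for the plugin binary):

```shell
detect_asan_lib() {
  # Print the path of a linked ASan runtime, if any; print nothing otherwise.
  local plugin=$1 name lib
  for name in libasan libclang_rt.asan; do
    lib=$(ldd "$plugin" 2>/dev/null | grep "$name" | awk '{print $3}')
    if [ -n "$lib" ]; then echo "$lib"; return; fi
  done
}
asan_lib=$(detect_asan_lib /bin/ls)   # /bin/ls is not ASan-instrumented
echo "LD_PRELOAD='${asan_lib:+$asan_lib }spdk_nvme'"
```

In the log both greps come back empty (`asan_lib=`), so LD_PRELOAD ends up holding only the plugin path, which matches a non-sanitizer build.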
00:29:22.201 23:14:54 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:29:22.460 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:29:22.460 fio-3.35 00:29:22.460 Starting 1 thread 00:29:22.460 EAL: No free 2048 kB hugepages reported on node 1 00:29:24.997 00:29:24.997 test: (groupid=0, jobs=1): err= 0: pid=3366559: Wed Jul 24 23:14:57 2024 00:29:24.997 read: IOPS=10.9k, BW=170MiB/s (178MB/s)(340MiB/2004msec) 00:29:24.997 slat (nsec): min=2309, max=88609, avg=2686.05, stdev=1360.47 00:29:24.997 clat (usec): min=1341, max=51444, avg=7080.33, stdev=2951.35 00:29:24.997 lat (usec): min=1343, max=51447, avg=7083.02, stdev=2951.58 00:29:24.997 clat percentiles (usec): 00:29:24.997 | 1.00th=[ 3523], 5.00th=[ 4293], 10.00th=[ 4686], 20.00th=[ 5342], 00:29:24.997 | 30.00th=[ 5800], 40.00th=[ 6194], 50.00th=[ 6718], 60.00th=[ 7242], 00:29:24.997 | 70.00th=[ 7832], 80.00th=[ 8356], 90.00th=[ 9503], 95.00th=[10814], 00:29:24.997 | 99.00th=[13304], 99.50th=[14484], 99.90th=[50070], 99.95th=[51119], 00:29:24.997 | 99.99th=[51119] 00:29:24.997 bw ( KiB/s): min=78656, max=96063, per=50.55%, avg=87855.75, stdev=7360.30, samples=4 00:29:24.997 iops : min= 4916, max= 6003, avg=5490.75, stdev=459.67, samples=4 00:29:24.997 write: IOPS=6490, BW=101MiB/s (106MB/s)(180MiB/1774msec); 0 zone resets 00:29:24.997 slat (usec): min=28, max=377, avg=30.08, stdev= 7.15 00:29:24.997 clat (usec): min=2579, max=52727, avg=8189.67, stdev=3517.83 00:29:24.997 lat (usec): min=2609, max=52756, avg=8219.75, stdev=3518.97 00:29:24.997 clat percentiles (usec): 00:29:24.997 | 1.00th=[ 5276], 5.00th=[ 5997], 10.00th=[ 6325], 20.00th=[ 6783], 00:29:24.997 | 30.00th=[ 7111], 40.00th=[ 7439], 50.00th=[ 7767], 60.00th=[ 8029], 00:29:24.997 | 70.00th=[ 8455], 80.00th=[ 8848], 
90.00th=[ 9896], 95.00th=[10814], 00:29:24.997 | 99.00th=[15664], 99.50th=[46400], 99.90th=[52167], 99.95th=[52691], 00:29:24.997 | 99.99th=[52691] 00:29:24.997 bw ( KiB/s): min=82912, max=99161, per=88.13%, avg=91526.25, stdev=6787.18, samples=4 00:29:24.997 iops : min= 5182, max= 6197, avg=5720.25, stdev=423.99, samples=4 00:29:24.997 lat (msec) : 2=0.04%, 4=2.08%, 10=89.93%, 20=7.57%, 50=0.18% 00:29:24.997 lat (msec) : 100=0.21% 00:29:24.997 cpu : usr=83.18%, sys=14.77%, ctx=24, majf=0, minf=1 00:29:24.997 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:29:24.997 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:24.997 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:24.997 issued rwts: total=21768,11515,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:24.997 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:24.997 00:29:24.997 Run status group 0 (all jobs): 00:29:24.997 READ: bw=170MiB/s (178MB/s), 170MiB/s-170MiB/s (178MB/s-178MB/s), io=340MiB (357MB), run=2004-2004msec 00:29:24.997 WRITE: bw=101MiB/s (106MB/s), 101MiB/s-101MiB/s (106MB/s-106MB/s), io=180MiB (189MB), run=1774-1774msec 00:29:24.997 23:14:57 -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:24.997 23:14:57 -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:29:24.997 23:14:57 -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:29:24.997 23:14:57 -- host/fio.sh@51 -- # get_nvme_bdfs 00:29:24.997 23:14:57 -- common/autotest_common.sh@1498 -- # bdfs=() 00:29:24.997 23:14:57 -- common/autotest_common.sh@1498 -- # local bdfs 00:29:24.997 23:14:57 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:29:24.997 23:14:57 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:24.997 23:14:57 -- 
common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:29:24.997 23:14:57 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:29:24.997 23:14:57 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:d8:00.0 00:29:24.997 23:14:57 -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:d8:00.0 -i 10.0.0.2 00:29:28.287 Nvme0n1 00:29:28.287 23:15:00 -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:29:33.601 23:15:05 -- host/fio.sh@53 -- # ls_guid=3d0c0d07-e8e3-4b24-ae40-c08666e33bca 00:29:33.601 23:15:05 -- host/fio.sh@54 -- # get_lvs_free_mb 3d0c0d07-e8e3-4b24-ae40-c08666e33bca 00:29:33.601 23:15:05 -- common/autotest_common.sh@1343 -- # local lvs_uuid=3d0c0d07-e8e3-4b24-ae40-c08666e33bca 00:29:33.601 23:15:05 -- common/autotest_common.sh@1344 -- # local lvs_info 00:29:33.601 23:15:05 -- common/autotest_common.sh@1345 -- # local fc 00:29:33.601 23:15:05 -- common/autotest_common.sh@1346 -- # local cs 00:29:33.601 23:15:05 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:33.601 23:15:05 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:29:33.601 { 00:29:33.601 "uuid": "3d0c0d07-e8e3-4b24-ae40-c08666e33bca", 00:29:33.601 "name": "lvs_0", 00:29:33.601 "base_bdev": "Nvme0n1", 00:29:33.601 "total_data_clusters": 1489, 00:29:33.601 "free_clusters": 1489, 00:29:33.601 "block_size": 512, 00:29:33.601 "cluster_size": 1073741824 00:29:33.601 } 00:29:33.601 ]' 00:29:33.601 23:15:05 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="3d0c0d07-e8e3-4b24-ae40-c08666e33bca") .free_clusters' 00:29:33.601 23:15:05 -- common/autotest_common.sh@1348 -- # fc=1489 00:29:33.601 23:15:05 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="3d0c0d07-e8e3-4b24-ae40-c08666e33bca") 
.cluster_size' 00:29:33.601 23:15:05 -- common/autotest_common.sh@1349 -- # cs=1073741824 00:29:33.601 23:15:05 -- common/autotest_common.sh@1352 -- # free_mb=1524736 00:29:33.601 23:15:05 -- common/autotest_common.sh@1353 -- # echo 1524736 00:29:33.601 1524736 00:29:33.601 23:15:05 -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 1524736 00:29:33.601 e427724c-bcab-486b-8881-b6e0476aa212 00:29:33.601 23:15:05 -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:29:33.601 23:15:05 -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:29:33.601 23:15:05 -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:29:33.870 23:15:06 -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:33.870 23:15:06 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:33.870 23:15:06 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:29:33.870 23:15:06 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:33.870 23:15:06 -- common/autotest_common.sh@1318 -- # local sanitizers 00:29:33.870 23:15:06 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:33.870 23:15:06 -- common/autotest_common.sh@1320 
-- # shift 00:29:33.870 23:15:06 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:29:33.870 23:15:06 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:29:33.870 23:15:06 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:33.870 23:15:06 -- common/autotest_common.sh@1324 -- # grep libasan 00:29:33.870 23:15:06 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:29:33.870 23:15:06 -- common/autotest_common.sh@1324 -- # asan_lib= 00:29:33.870 23:15:06 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:29:33.870 23:15:06 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:29:33.870 23:15:06 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:33.870 23:15:06 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:29:33.870 23:15:06 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:29:33.871 23:15:06 -- common/autotest_common.sh@1324 -- # asan_lib= 00:29:33.871 23:15:06 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:29:33.871 23:15:06 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:33.871 23:15:06 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:34.137 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:34.137 fio-3.35 00:29:34.137 Starting 1 thread 00:29:34.137 EAL: No free 2048 kB hugepages reported on node 1 00:29:36.670 00:29:36.670 test: (groupid=0, jobs=1): err= 0: pid=3369136: Wed Jul 24 23:15:08 2024 00:29:36.670 read: IOPS=8269, BW=32.3MiB/s (33.9MB/s)(64.8MiB/2006msec) 00:29:36.670 slat (nsec): min=1514, max=95835, 
avg=1644.13, stdev=1130.88 00:29:36.670 clat (usec): min=472, max=270518, avg=8361.43, stdev=15599.56 00:29:36.670 lat (usec): min=473, max=270521, avg=8363.08, stdev=15599.61 00:29:36.670 clat percentiles (msec): 00:29:36.670 | 1.00th=[ 6], 5.00th=[ 7], 10.00th=[ 7], 20.00th=[ 7], 00:29:36.670 | 30.00th=[ 8], 40.00th=[ 8], 50.00th=[ 8], 60.00th=[ 8], 00:29:36.670 | 70.00th=[ 8], 80.00th=[ 8], 90.00th=[ 9], 95.00th=[ 9], 00:29:36.670 | 99.00th=[ 9], 99.50th=[ 10], 99.90th=[ 271], 99.95th=[ 271], 00:29:36.670 | 99.99th=[ 271] 00:29:36.670 bw ( KiB/s): min=16832, max=38744, per=99.86%, avg=33032.00, stdev=10802.67, samples=4 00:29:36.670 iops : min= 4208, max= 9686, avg=8258.00, stdev=2700.67, samples=4 00:29:36.670 write: IOPS=8272, BW=32.3MiB/s (33.9MB/s)(64.8MiB/2006msec); 0 zone resets 00:29:36.670 slat (nsec): min=1584, max=79632, avg=1725.38, stdev=676.79 00:29:36.670 clat (usec): min=395, max=268847, avg=6981.48, stdev=16624.02 00:29:36.670 lat (usec): min=397, max=268851, avg=6983.21, stdev=16624.12 00:29:36.670 clat percentiles (msec): 00:29:36.670 | 1.00th=[ 5], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 6], 00:29:36.670 | 30.00th=[ 6], 40.00th=[ 6], 50.00th=[ 6], 60.00th=[ 7], 00:29:36.670 | 70.00th=[ 7], 80.00th=[ 7], 90.00th=[ 7], 95.00th=[ 7], 00:29:36.670 | 99.00th=[ 8], 99.50th=[ 10], 99.90th=[ 271], 99.95th=[ 271], 00:29:36.670 | 99.99th=[ 271] 00:29:36.670 bw ( KiB/s): min=17784, max=38480, per=99.97%, avg=33080.00, stdev=10199.84, samples=4 00:29:36.670 iops : min= 4446, max= 9620, avg=8270.00, stdev=2549.96, samples=4 00:29:36.670 lat (usec) : 500=0.01%, 750=0.02%, 1000=0.02% 00:29:36.670 lat (msec) : 2=0.09%, 4=0.20%, 10=99.20%, 20=0.08%, 500=0.39% 00:29:36.670 cpu : usr=61.40%, sys=34.61%, ctx=121, majf=0, minf=4 00:29:36.670 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:29:36.670 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:36.670 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.1% 00:29:36.670 issued rwts: total=16589,16594,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:36.670 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:36.670 00:29:36.670 Run status group 0 (all jobs): 00:29:36.670 READ: bw=32.3MiB/s (33.9MB/s), 32.3MiB/s-32.3MiB/s (33.9MB/s-33.9MB/s), io=64.8MiB (67.9MB), run=2006-2006msec 00:29:36.670 WRITE: bw=32.3MiB/s (33.9MB/s), 32.3MiB/s-32.3MiB/s (33.9MB/s-33.9MB/s), io=64.8MiB (68.0MB), run=2006-2006msec 00:29:36.670 23:15:08 -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:29:36.670 23:15:08 -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:29:37.610 23:15:09 -- host/fio.sh@64 -- # ls_nested_guid=39bdab83-328b-4de3-9c74-3af80efc7765 00:29:37.610 23:15:09 -- host/fio.sh@65 -- # get_lvs_free_mb 39bdab83-328b-4de3-9c74-3af80efc7765 00:29:37.610 23:15:09 -- common/autotest_common.sh@1343 -- # local lvs_uuid=39bdab83-328b-4de3-9c74-3af80efc7765 00:29:37.610 23:15:09 -- common/autotest_common.sh@1344 -- # local lvs_info 00:29:37.610 23:15:09 -- common/autotest_common.sh@1345 -- # local fc 00:29:37.610 23:15:09 -- common/autotest_common.sh@1346 -- # local cs 00:29:37.610 23:15:09 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:37.868 23:15:10 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:29:37.868 { 00:29:37.868 "uuid": "3d0c0d07-e8e3-4b24-ae40-c08666e33bca", 00:29:37.868 "name": "lvs_0", 00:29:37.868 "base_bdev": "Nvme0n1", 00:29:37.868 "total_data_clusters": 1489, 00:29:37.868 "free_clusters": 0, 00:29:37.868 "block_size": 512, 00:29:37.868 "cluster_size": 1073741824 00:29:37.868 }, 00:29:37.868 { 00:29:37.868 "uuid": "39bdab83-328b-4de3-9c74-3af80efc7765", 00:29:37.868 "name": "lvs_n_0", 00:29:37.869 "base_bdev": 
"e427724c-bcab-486b-8881-b6e0476aa212", 00:29:37.869 "total_data_clusters": 380811, 00:29:37.869 "free_clusters": 380811, 00:29:37.869 "block_size": 512, 00:29:37.869 "cluster_size": 4194304 00:29:37.869 } 00:29:37.869 ]' 00:29:37.869 23:15:10 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="39bdab83-328b-4de3-9c74-3af80efc7765") .free_clusters' 00:29:37.869 23:15:10 -- common/autotest_common.sh@1348 -- # fc=380811 00:29:37.869 23:15:10 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="39bdab83-328b-4de3-9c74-3af80efc7765") .cluster_size' 00:29:37.869 23:15:10 -- common/autotest_common.sh@1349 -- # cs=4194304 00:29:37.869 23:15:10 -- common/autotest_common.sh@1352 -- # free_mb=1523244 00:29:37.869 23:15:10 -- common/autotest_common.sh@1353 -- # echo 1523244 00:29:37.869 1523244 00:29:37.869 23:15:10 -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 1523244 00:29:38.436 6e13796a-d74d-4c6a-8e84-d16ecb70280e 00:29:38.695 23:15:10 -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:29:38.695 23:15:11 -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:29:38.954 23:15:11 -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:29:39.213 23:15:11 -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:39.213 23:15:11 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:39.213 23:15:11 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:29:39.213 23:15:11 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:39.213 23:15:11 -- common/autotest_common.sh@1318 -- # local sanitizers 00:29:39.213 23:15:11 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:39.213 23:15:11 -- common/autotest_common.sh@1320 -- # shift 00:29:39.213 23:15:11 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:29:39.213 23:15:11 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:29:39.213 23:15:11 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:39.213 23:15:11 -- common/autotest_common.sh@1324 -- # grep libasan 00:29:39.213 23:15:11 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:29:39.213 23:15:11 -- common/autotest_common.sh@1324 -- # asan_lib= 00:29:39.213 23:15:11 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:29:39.213 23:15:11 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:29:39.213 23:15:11 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:39.213 23:15:11 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:29:39.213 23:15:11 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:29:39.213 23:15:11 -- common/autotest_common.sh@1324 -- # asan_lib= 00:29:39.213 23:15:11 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:29:39.213 23:15:11 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:39.213 23:15:11 -- common/autotest_common.sh@1331 -- # 
/usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:39.472 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:39.472 fio-3.35 00:29:39.472 Starting 1 thread 00:29:39.472 EAL: No free 2048 kB hugepages reported on node 1 00:29:42.006 00:29:42.006 test: (groupid=0, jobs=1): err= 0: pid=3370098: Wed Jul 24 23:15:14 2024 00:29:42.006 read: IOPS=8470, BW=33.1MiB/s (34.7MB/s)(66.4MiB/2007msec) 00:29:42.006 slat (nsec): min=1501, max=92969, avg=1597.95, stdev=1025.55 00:29:42.006 clat (usec): min=2777, max=14131, avg=8342.97, stdev=673.01 00:29:42.006 lat (usec): min=2794, max=14132, avg=8344.56, stdev=672.96 00:29:42.006 clat percentiles (usec): 00:29:42.006 | 1.00th=[ 6783], 5.00th=[ 7308], 10.00th=[ 7504], 20.00th=[ 7832], 00:29:42.006 | 30.00th=[ 8029], 40.00th=[ 8160], 50.00th=[ 8356], 60.00th=[ 8455], 00:29:42.006 | 70.00th=[ 8717], 80.00th=[ 8848], 90.00th=[ 9110], 95.00th=[ 9372], 00:29:42.006 | 99.00th=[ 9765], 99.50th=[10028], 99.90th=[10945], 99.95th=[13042], 00:29:42.006 | 99.99th=[14091] 00:29:42.006 bw ( KiB/s): min=32375, max=34560, per=99.90%, avg=33849.75, stdev=1002.96, samples=4 00:29:42.006 iops : min= 8093, max= 8640, avg=8462.25, stdev=251.11, samples=4 00:29:42.006 write: IOPS=8468, BW=33.1MiB/s (34.7MB/s)(66.4MiB/2007msec); 0 zone resets 00:29:42.006 slat (nsec): min=1549, max=85134, avg=1685.31, stdev=702.13 00:29:42.006 clat (usec): min=1589, max=13297, avg=6631.21, stdev=602.54 00:29:42.006 lat (usec): min=1595, max=13299, avg=6632.89, stdev=602.52 00:29:42.006 clat percentiles (usec): 00:29:42.006 | 1.00th=[ 5276], 5.00th=[ 5735], 10.00th=[ 5932], 20.00th=[ 6194], 00:29:42.006 | 30.00th=[ 6325], 40.00th=[ 6521], 50.00th=[ 6652], 60.00th=[ 6783], 00:29:42.006 | 70.00th=[ 6915], 80.00th=[ 7111], 90.00th=[ 7308], 95.00th=[ 7570], 00:29:42.006 | 99.00th=[ 
7898], 99.50th=[ 8094], 99.90th=[10814], 99.95th=[12125], 00:29:42.006 | 99.99th=[13304] 00:29:42.006 bw ( KiB/s): min=33277, max=34136, per=99.95%, avg=33855.25, stdev=395.39, samples=4 00:29:42.006 iops : min= 8319, max= 8534, avg=8463.75, stdev=98.97, samples=4 00:29:42.006 lat (msec) : 2=0.01%, 4=0.10%, 10=99.57%, 20=0.32% 00:29:42.006 cpu : usr=59.12%, sys=36.14%, ctx=79, majf=0, minf=4 00:29:42.006 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:29:42.006 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:42.006 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:42.006 issued rwts: total=17000,16996,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:42.006 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:42.006 00:29:42.006 Run status group 0 (all jobs): 00:29:42.006 READ: bw=33.1MiB/s (34.7MB/s), 33.1MiB/s-33.1MiB/s (34.7MB/s-34.7MB/s), io=66.4MiB (69.6MB), run=2007-2007msec 00:29:42.006 WRITE: bw=33.1MiB/s (34.7MB/s), 33.1MiB/s-33.1MiB/s (34.7MB/s-34.7MB/s), io=66.4MiB (69.6MB), run=2007-2007msec 00:29:42.006 23:15:14 -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:29:42.006 23:15:14 -- host/fio.sh@74 -- # sync 00:29:42.006 23:15:14 -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:29:48.571 23:15:19 -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:29:48.572 23:15:20 -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:29:52.761 23:15:24 -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:29:52.761 23:15:24 -- host/fio.sh@80 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:29:55.330 23:15:27 -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:29:55.330 23:15:27 -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:29:55.330 23:15:27 -- host/fio.sh@86 -- # nvmftestfini 00:29:55.330 23:15:27 -- nvmf/common.sh@476 -- # nvmfcleanup 00:29:55.330 23:15:27 -- nvmf/common.sh@116 -- # sync 00:29:55.330 23:15:27 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:29:55.330 23:15:27 -- nvmf/common.sh@119 -- # set +e 00:29:55.330 23:15:27 -- nvmf/common.sh@120 -- # for i in {1..20} 00:29:55.330 23:15:27 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:29:55.330 rmmod nvme_tcp 00:29:55.330 rmmod nvme_fabrics 00:29:55.330 rmmod nvme_keyring 00:29:55.330 23:15:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:29:55.330 23:15:27 -- nvmf/common.sh@123 -- # set -e 00:29:55.330 23:15:27 -- nvmf/common.sh@124 -- # return 0 00:29:55.330 23:15:27 -- nvmf/common.sh@477 -- # '[' -n 3365264 ']' 00:29:55.330 23:15:27 -- nvmf/common.sh@478 -- # killprocess 3365264 00:29:55.330 23:15:27 -- common/autotest_common.sh@926 -- # '[' -z 3365264 ']' 00:29:55.330 23:15:27 -- common/autotest_common.sh@930 -- # kill -0 3365264 00:29:55.330 23:15:27 -- common/autotest_common.sh@931 -- # uname 00:29:55.330 23:15:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:55.330 23:15:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3365264 00:29:55.330 23:15:27 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:55.330 23:15:27 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:55.330 23:15:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3365264' 00:29:55.330 killing process with pid 3365264 00:29:55.330 23:15:27 -- common/autotest_common.sh@945 -- # kill 3365264 00:29:55.330 23:15:27 -- common/autotest_common.sh@950 -- # wait 3365264 00:29:55.590 23:15:27 -- 
nvmf/common.sh@480 -- # '[' '' == iso ']' 00:29:55.590 23:15:27 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:29:55.590 23:15:27 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:29:55.590 23:15:27 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:55.590 23:15:27 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:29:55.590 23:15:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:55.590 23:15:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:55.590 23:15:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:57.517 23:15:29 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:29:57.517 00:29:57.517 real 0m46.965s 00:29:57.517 user 3m13.858s 00:29:57.517 sys 0m10.685s 00:29:57.517 23:15:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:57.517 23:15:29 -- common/autotest_common.sh@10 -- # set +x 00:29:57.517 ************************************ 00:29:57.517 END TEST nvmf_fio_host 00:29:57.517 ************************************ 00:29:57.777 23:15:29 -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:29:57.777 23:15:29 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:29:57.777 23:15:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:57.777 23:15:29 -- common/autotest_common.sh@10 -- # set +x 00:29:57.777 ************************************ 00:29:57.777 START TEST nvmf_failover 00:29:57.777 ************************************ 00:29:57.777 23:15:29 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:29:57.777 * Looking for test storage... 
00:29:57.777 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:57.777 23:15:30 -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:57.777 23:15:30 -- nvmf/common.sh@7 -- # uname -s 00:29:57.777 23:15:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:57.777 23:15:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:57.777 23:15:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:57.777 23:15:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:57.777 23:15:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:57.777 23:15:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:57.777 23:15:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:57.777 23:15:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:57.777 23:15:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:57.777 23:15:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:57.777 23:15:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:29:57.777 23:15:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:29:57.777 23:15:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:57.777 23:15:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:57.777 23:15:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:57.777 23:15:30 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:57.777 23:15:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:57.777 23:15:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:57.777 23:15:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:57.777 23:15:30 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.777 23:15:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.777 23:15:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.777 23:15:30 -- paths/export.sh@5 -- # export PATH 00:29:57.777 23:15:30 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.777 23:15:30 -- nvmf/common.sh@46 -- # : 0 00:29:57.777 23:15:30 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:29:57.777 23:15:30 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:29:57.777 23:15:30 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:29:57.777 23:15:30 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:57.777 23:15:30 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:57.777 23:15:30 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:29:57.777 23:15:30 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:29:57.777 23:15:30 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:29:57.777 23:15:30 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:57.777 23:15:30 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:57.777 23:15:30 -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:57.777 23:15:30 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:57.777 23:15:30 -- host/failover.sh@18 -- # nvmftestinit 00:29:57.777 23:15:30 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:29:57.777 23:15:30 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:57.777 23:15:30 -- nvmf/common.sh@436 -- # prepare_net_devs 00:29:57.777 23:15:30 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:29:57.777 23:15:30 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:29:57.777 23:15:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:29:57.777 23:15:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:57.777 23:15:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:57.777 23:15:30 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:29:57.777 23:15:30 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:29:57.777 23:15:30 -- nvmf/common.sh@284 -- # xtrace_disable 00:29:57.777 23:15:30 -- common/autotest_common.sh@10 -- # set +x 00:30:04.348 23:15:36 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:30:04.348 23:15:36 -- nvmf/common.sh@290 -- # pci_devs=() 00:30:04.348 23:15:36 -- nvmf/common.sh@290 -- # local -a pci_devs 00:30:04.348 23:15:36 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:30:04.348 23:15:36 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:30:04.348 23:15:36 -- nvmf/common.sh@292 -- # pci_drivers=() 00:30:04.348 23:15:36 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:30:04.348 23:15:36 -- nvmf/common.sh@294 -- # net_devs=() 00:30:04.348 23:15:36 -- nvmf/common.sh@294 -- # local -ga net_devs 00:30:04.348 23:15:36 -- nvmf/common.sh@295 -- # e810=() 00:30:04.348 23:15:36 -- nvmf/common.sh@295 -- # local -ga e810 00:30:04.348 23:15:36 -- nvmf/common.sh@296 -- # x722=() 00:30:04.348 23:15:36 -- nvmf/common.sh@296 -- # local -ga x722 00:30:04.348 23:15:36 -- nvmf/common.sh@297 -- # mlx=() 00:30:04.348 23:15:36 -- nvmf/common.sh@297 -- # local -ga mlx 00:30:04.348 23:15:36 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:04.348 23:15:36 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:04.348 23:15:36 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:04.348 23:15:36 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:04.348 23:15:36 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:04.348 23:15:36 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:30:04.348 23:15:36 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:04.348 23:15:36 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:04.348 23:15:36 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:04.348 23:15:36 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:04.348 23:15:36 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:04.348 23:15:36 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:30:04.348 23:15:36 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:30:04.348 23:15:36 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:30:04.348 23:15:36 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:30:04.348 23:15:36 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:30:04.348 23:15:36 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:30:04.348 23:15:36 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:04.348 23:15:36 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:04.348 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:04.348 23:15:36 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:04.348 23:15:36 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:04.348 23:15:36 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:04.348 23:15:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:04.348 23:15:36 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:04.348 23:15:36 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:04.348 23:15:36 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:04.348 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:04.348 23:15:36 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:04.348 23:15:36 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:04.348 23:15:36 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:04.348 23:15:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:04.348 23:15:36 
-- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:04.348 23:15:36 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:30:04.348 23:15:36 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:30:04.348 23:15:36 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:30:04.348 23:15:36 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:04.348 23:15:36 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:04.348 23:15:36 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:04.348 23:15:36 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:04.348 23:15:36 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:04.348 Found net devices under 0000:af:00.0: cvl_0_0 00:30:04.348 23:15:36 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:04.348 23:15:36 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:04.348 23:15:36 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:04.348 23:15:36 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:04.348 23:15:36 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:04.348 23:15:36 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:04.348 Found net devices under 0000:af:00.1: cvl_0_1 00:30:04.348 23:15:36 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:04.348 23:15:36 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:30:04.348 23:15:36 -- nvmf/common.sh@402 -- # is_hw=yes 00:30:04.348 23:15:36 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:30:04.348 23:15:36 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:30:04.348 23:15:36 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:30:04.348 23:15:36 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:04.348 23:15:36 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:04.348 23:15:36 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:04.348 23:15:36 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 
00:30:04.348 23:15:36 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:04.348 23:15:36 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:04.348 23:15:36 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:30:04.348 23:15:36 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:04.348 23:15:36 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:04.348 23:15:36 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:30:04.348 23:15:36 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:30:04.348 23:15:36 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:30:04.348 23:15:36 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:04.348 23:15:36 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:04.348 23:15:36 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:04.348 23:15:36 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:30:04.348 23:15:36 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:04.348 23:15:36 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:04.348 23:15:36 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:04.348 23:15:36 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:30:04.348 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:04.348 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:30:04.348 00:30:04.348 --- 10.0.0.2 ping statistics --- 00:30:04.348 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:04.349 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:30:04.349 23:15:36 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:04.349 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:04.349 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:30:04.349 00:30:04.349 --- 10.0.0.1 ping statistics --- 00:30:04.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:04.349 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:30:04.349 23:15:36 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:04.349 23:15:36 -- nvmf/common.sh@410 -- # return 0 00:30:04.349 23:15:36 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:30:04.349 23:15:36 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:04.349 23:15:36 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:30:04.349 23:15:36 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:30:04.349 23:15:36 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:04.349 23:15:36 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:30:04.349 23:15:36 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:30:04.349 23:15:36 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:30:04.349 23:15:36 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:30:04.349 23:15:36 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:04.349 23:15:36 -- common/autotest_common.sh@10 -- # set +x 00:30:04.349 23:15:36 -- nvmf/common.sh@469 -- # nvmfpid=3376429 00:30:04.349 23:15:36 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:04.349 23:15:36 -- nvmf/common.sh@470 -- # waitforlisten 3376429 00:30:04.349 23:15:36 -- common/autotest_common.sh@819 -- # '[' -z 3376429 ']' 00:30:04.349 23:15:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:04.349 23:15:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:04.349 23:15:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:30:04.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:04.349 23:15:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:04.349 23:15:36 -- common/autotest_common.sh@10 -- # set +x 00:30:04.349 [2024-07-24 23:15:36.747462] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:30:04.349 [2024-07-24 23:15:36.747513] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:04.608 EAL: No free 2048 kB hugepages reported on node 1 00:30:04.608 [2024-07-24 23:15:36.822554] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:04.608 [2024-07-24 23:15:36.860484] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:04.608 [2024-07-24 23:15:36.860594] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:04.608 [2024-07-24 23:15:36.860604] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:04.608 [2024-07-24 23:15:36.860617] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:04.608 [2024-07-24 23:15:36.860733] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:04.608 [2024-07-24 23:15:36.860803] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:04.608 [2024-07-24 23:15:36.860805] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:05.176 23:15:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:05.176 23:15:37 -- common/autotest_common.sh@852 -- # return 0 00:30:05.176 23:15:37 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:30:05.176 23:15:37 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:05.176 23:15:37 -- common/autotest_common.sh@10 -- # set +x 00:30:05.176 23:15:37 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:05.176 23:15:37 -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:05.435 [2024-07-24 23:15:37.743580] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:05.435 23:15:37 -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:30:05.693 Malloc0 00:30:05.693 23:15:37 -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:05.952 23:15:38 -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:05.952 23:15:38 -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:06.211 [2024-07-24 23:15:38.497748] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:06.211 23:15:38 -- host/failover.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:06.470 [2024-07-24 23:15:38.658242] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:06.470 23:15:38 -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:06.470 [2024-07-24 23:15:38.830843] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:30:06.470 23:15:38 -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:30:06.470 23:15:38 -- host/failover.sh@31 -- # bdevperf_pid=3376731 00:30:06.470 23:15:38 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:06.470 23:15:38 -- host/failover.sh@34 -- # waitforlisten 3376731 /var/tmp/bdevperf.sock 00:30:06.470 23:15:38 -- common/autotest_common.sh@819 -- # '[' -z 3376731 ']' 00:30:06.470 23:15:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:06.470 23:15:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:06.470 23:15:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:06.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:30:06.470 23:15:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:06.470 23:15:38 -- common/autotest_common.sh@10 -- # set +x 00:30:07.405 23:15:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:07.405 23:15:39 -- common/autotest_common.sh@852 -- # return 0 00:30:07.405 23:15:39 -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:07.663 NVMe0n1 00:30:07.663 23:15:39 -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:07.921 00:30:07.921 23:15:40 -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:07.921 23:15:40 -- host/failover.sh@39 -- # run_test_pid=3377002 00:30:07.921 23:15:40 -- host/failover.sh@41 -- # sleep 1 00:30:09.298 23:15:41 -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:09.298 [2024-07-24 23:15:41.461399] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a5230 is same with the state(5) to be set 00:30:09.298 23:15:41 -- host/failover.sh@45 -- # sleep 3 00:30:12.588 23:15:44 -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:12.588 00:30:12.588 23:15:44 -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:12.848 [2024-07-24 23:15:45.025572] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6780 is same with the state(5) to be set 00:30:12.849 [2024-07-24 23:15:45.025614] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv
state of tqpair=0x7a6780 is same with the state(5) to be set 00:30:12.849 23:15:45 -- host/failover.sh@50 -- # sleep 3 00:30:16.178 23:15:48 -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:16.178 [2024-07-24 23:15:48.213137] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:16.178 23:15:48 -- host/failover.sh@55 -- # sleep 1 00:30:17.115 23:15:49 -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:17.115 [2024-07-24 23:15:49.400411] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95ca10 is same with the state(5) to be set
00:30:17.116 23:15:49 -- host/failover.sh@59 -- # wait 3377002 00:30:23.693 0 00:30:23.693 23:15:55 -- host/failover.sh@61 -- # killprocess 3376731 00:30:23.693 23:15:55 -- common/autotest_common.sh@926 -- # '[' -z 3376731 ']' 00:30:23.693 23:15:55 -- common/autotest_common.sh@930 -- # kill -0 3376731 00:30:23.693 23:15:55 -- common/autotest_common.sh@931 -- # uname 00:30:23.693 23:15:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:23.693 23:15:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3376731 00:30:23.693 23:15:55 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:30:23.693 23:15:55 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:30:23.693 23:15:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3376731' 00:30:23.694 killing process with pid 3376731 00:30:23.694 23:15:55 -- common/autotest_common.sh@945 -- # kill 3376731 00:30:23.694 23:15:55 -- common/autotest_common.sh@950 -- # wait 3376731 00:30:23.694 23:15:55 -- host/failover.sh@63 -- # cat
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:23.694 [2024-07-24 23:15:38.891044] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:30:23.694 [2024-07-24 23:15:38.891097] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3376731 ] 00:30:23.694 EAL: No free 2048 kB hugepages reported on node 1 00:30:23.694 [2024-07-24 23:15:38.962688] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:23.694 [2024-07-24 23:15:38.999180] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:23.694 Running I/O for 15 seconds... 00:30:23.694 [2024-07-24 23:15:41.462117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:23448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.694 [2024-07-24 23:15:41.462155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.694 [2024-07-24 23:15:41.462173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:23456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.694 [2024-07-24 23:15:41.462184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.694 [2024-07-24 23:15:41.462196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:23472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.694 [2024-07-24 23:15:41.462206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.694 [2024-07-24 23:15:41.462217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:22888 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:30:23.694 [2024-07-24 23:15:41.462226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.694 [2024-07-24 23:15:41.462236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:22904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.694 [2024-07-24 23:15:41.462246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.694 [2024-07-24 23:15:41.462256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:22920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.694 [2024-07-24 23:15:41.462265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.694 [2024-07-24 23:15:41.462276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:22928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.694 [2024-07-24 23:15:41.462285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.694 [2024-07-24 23:15:41.462296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:22936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.694 [2024-07-24 23:15:41.462305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.694 [2024-07-24 23:15:41.462315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:22944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.694 [2024-07-24 23:15:41.462324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.694 [2024-07-24 23:15:41.462335] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:22960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.694 [2024-07-24 23:15:41.462345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.694 [2024-07-24 23:15:41.462356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:23008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.694 [2024-07-24 23:15:41.462365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.694 [2024-07-24 23:15:41.462381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:23480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.694 [2024-07-24 23:15:41.462390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.694 [2024-07-24 23:15:41.462400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.694 [2024-07-24 23:15:41.462410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.694 [2024-07-24 23:15:41.462420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:23520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.694 [2024-07-24 23:15:41.462430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.694 [2024-07-24 23:15:41.462440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:23576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.694 [2024-07-24 23:15:41.462449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.694 [2024-07-24 23:15:41.462460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:23608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.694 [2024-07-24 23:15:41.462469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.694 [2024-07-24 23:15:41.462480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:23648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.694 [2024-07-24 23:15:41.462491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.694 [2024-07-24 23:15:41.462502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:23024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.694 [2024-07-24 23:15:41.462511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.694 [2024-07-24 23:15:41.462522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:23048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.694 [2024-07-24 23:15:41.462531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.694 [2024-07-24 23:15:41.462542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:23064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.694 [2024-07-24 23:15:41.462551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.694 [2024-07-24 23:15:41.462561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:23088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:30:23.694 [2024-07-24 23:15:41.462570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.694 [2024-07-24 23:15:41.462581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.694 [2024-07-24 23:15:41.462590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.694 [2024-07-24 23:15:41.462600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:23104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.694 [2024-07-24 23:15:41.462610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.694 [2024-07-24 23:15:41.462620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:23128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.694 [2024-07-24 23:15:41.462631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.694 [2024-07-24 23:15:41.462642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:23144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.694 [2024-07-24 23:15:41.462651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.694 [2024-07-24 23:15:41.462662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:23656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.694 [2024-07-24 23:15:41.462671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.694 [2024-07-24 23:15:41.462681] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:23664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.694 [2024-07-24 23:15:41.462690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.694 [2024-07-24 23:15:41.462701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:23704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.694 [2024-07-24 23:15:41.462710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.694 [2024-07-24 23:15:41.462726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:23712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.694 [2024-07-24 23:15:41.462735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.694 [2024-07-24 23:15:41.462746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:23720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.694 [2024-07-24 23:15:41.462754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.695 [2024-07-24 23:15:41.462765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.695 [2024-07-24 23:15:41.462774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.695 [2024-07-24 23:15:41.462785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:23736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.695 [2024-07-24 23:15:41.462794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.695 [2024-07-24 23:15:41.462804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.695 [2024-07-24 23:15:41.462815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.695 [2024-07-24 23:15:41.462826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:23752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.695 [2024-07-24 23:15:41.462836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.695 [2024-07-24 23:15:41.462846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:23760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.695 [2024-07-24 23:15:41.462855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.695 [2024-07-24 23:15:41.462866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:23768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.695 [2024-07-24 23:15:41.462875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.695 [2024-07-24 23:15:41.462887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:23776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.695 [2024-07-24 23:15:41.462896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.695 [2024-07-24 23:15:41.462907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.695 
[2024-07-24 23:15:41.462916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.695 [2024-07-24 23:15:41.462926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:23152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.695 [2024-07-24 23:15:41.462935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.695 [2024-07-24 23:15:41.462946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:23168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.695 [2024-07-24 23:15:41.462955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.695 [2024-07-24 23:15:41.462966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.695 [2024-07-24 23:15:41.462975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.695 [2024-07-24 23:15:41.462986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.695 [2024-07-24 23:15:41.462995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.695 [2024-07-24 23:15:41.463006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:23224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.695 [2024-07-24 23:15:41.463015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.695 [2024-07-24 23:15:41.463026] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.695 [2024-07-24 23:15:41.463035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.695 [2024-07-24 23:15:41.463045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:23240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.695 [2024-07-24 23:15:41.463056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.695 [2024-07-24 23:15:41.463067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:23256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.695 [2024-07-24 23:15:41.463076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.695 [2024-07-24 23:15:41.463087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:23792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.695 [2024-07-24 23:15:41.463095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.695 [2024-07-24 23:15:41.463106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.695 [2024-07-24 23:15:41.463115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.695 [2024-07-24 23:15:41.463126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:23808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.695 [2024-07-24 23:15:41.463137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.695 [2024-07-24 23:15:41.463148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:23816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.695 [2024-07-24 23:15:41.463158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.695 [2024-07-24 23:15:41.463168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.695 [2024-07-24 23:15:41.463178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.695 [2024-07-24 23:15:41.463188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:23832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.695 [2024-07-24 23:15:41.463197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.695 [2024-07-24 23:15:41.463207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:23840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.695 [2024-07-24 23:15:41.463216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.695 [2024-07-24 23:15:41.463227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:23848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.695 [2024-07-24 23:15:41.463236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.695 [2024-07-24 23:15:41.463247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:23856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.695 
[2024-07-24 23:15:41.463255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.695 [2024-07-24 23:15:41.463266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:23864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.695 [2024-07-24 23:15:41.463275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.695 [2024-07-24 23:15:41.463285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:23872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.695 [2024-07-24 23:15:41.463295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.695 [2024-07-24 23:15:41.463305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:23880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.695 [2024-07-24 23:15:41.463314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.695 [2024-07-24 23:15:41.463324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:23888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.695 [2024-07-24 23:15:41.463334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.695 [2024-07-24 23:15:41.463344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:23896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.695 [2024-07-24 23:15:41.463353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.695 [2024-07-24 23:15:41.463364] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.695 [2024-07-24 23:15:41.463373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.696 [2024-07-24 23:15:41.463383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:23912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.696 [2024-07-24 23:15:41.463394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.696 [2024-07-24 23:15:41.463404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:23920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.696 [2024-07-24 23:15:41.463413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.696 [2024-07-24 23:15:41.463424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:23928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.696 [2024-07-24 23:15:41.463433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.696 [2024-07-24 23:15:41.463445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:23936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.696 [2024-07-24 23:15:41.463456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.696 [2024-07-24 23:15:41.463467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:23944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.696 [2024-07-24 23:15:41.463476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.696 [2024-07-24 23:15:41.463486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:23952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.696 [2024-07-24 23:15:41.463495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.696 [2024-07-24 23:15:41.463506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:23960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.696 [2024-07-24 23:15:41.463515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.696 [2024-07-24 23:15:41.463525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:23968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.696 [2024-07-24 23:15:41.463534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.696 [2024-07-24 23:15:41.463545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:23976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.696 [2024-07-24 23:15:41.463554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.696 [2024-07-24 23:15:41.463565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:23272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.696 [2024-07-24 23:15:41.463574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.696 [2024-07-24 23:15:41.463584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:23304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:23.696 [2024-07-24 23:15:41.463593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.696 [2024-07-24 23:15:41.463604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:23328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.696 [2024-07-24 23:15:41.463613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.696 [2024-07-24 23:15:41.463623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.696 [2024-07-24 23:15:41.463632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.696 [2024-07-24 23:15:41.463644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:23368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.696 [2024-07-24 23:15:41.463653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.696 [2024-07-24 23:15:41.463663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:23384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.696 [2024-07-24 23:15:41.463673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.696 [2024-07-24 23:15:41.463683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:23392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.696 [2024-07-24 23:15:41.463692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.696 [2024-07-24 23:15:41.463702] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:23400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.696 [2024-07-24 23:15:41.463711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.696 [2024-07-24 23:15:41.463726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.696 [2024-07-24 23:15:41.463735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.696 [2024-07-24 23:15:41.463746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.696 [2024-07-24 23:15:41.463755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.696 [2024-07-24 23:15:41.463765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:24000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.696 [2024-07-24 23:15:41.463775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.696 [2024-07-24 23:15:41.463786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:24008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.696 [2024-07-24 23:15:41.463795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.696 [2024-07-24 23:15:41.463805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:24016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.696 [2024-07-24 23:15:41.463814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.696 [2024-07-24 23:15:41.463825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:24024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.696 [2024-07-24 23:15:41.463833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.696 [2024-07-24 23:15:41.463844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:24032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.696 [2024-07-24 23:15:41.463854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.696 [2024-07-24 23:15:41.463865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:23424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.696 [2024-07-24 23:15:41.463874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.696 [2024-07-24 23:15:41.463885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:23432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.696 [2024-07-24 23:15:41.463895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.696 [2024-07-24 23:15:41.463906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.696 [2024-07-24 23:15:41.463915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.696 [2024-07-24 23:15:41.463926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:23464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.696 
[2024-07-24 23:15:41.463935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.696 [2024-07-24 23:15:41.463945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:23496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.696 [2024-07-24 23:15:41.463954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.696 [2024-07-24 23:15:41.463965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:23504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.696 [2024-07-24 23:15:41.463974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.696 [2024-07-24 23:15:41.463984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.696 [2024-07-24 23:15:41.463993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.696 [2024-07-24 23:15:41.464003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.696 [2024-07-24 23:15:41.464012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.696 [2024-07-24 23:15:41.464023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:24040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.696 [2024-07-24 23:15:41.464032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.696 [2024-07-24 23:15:41.464043] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:75 nsid:1 lba:24048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.696 [2024-07-24 23:15:41.464051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.696 [2024-07-24 23:15:41.464062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:24056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.696 [2024-07-24 23:15:41.464071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.697 [2024-07-24 23:15:41.464081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:24064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.697 [2024-07-24 23:15:41.464092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.697 [2024-07-24 23:15:41.464103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:24072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.697 [2024-07-24 23:15:41.464112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.697 [2024-07-24 23:15:41.464124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:24080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.697 [2024-07-24 23:15:41.464134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.697 [2024-07-24 23:15:41.464150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:24088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.697 [2024-07-24 23:15:41.464160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:30:23.697 [2024-07-24 23:15:41.464171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:24096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.697 [2024-07-24 23:15:41.464180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.697 [2024-07-24 23:15:41.464191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:24104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.697 [2024-07-24 23:15:41.464201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.697 [2024-07-24 23:15:41.464213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:24112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.697 [2024-07-24 23:15:41.464222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.697 [2024-07-24 23:15:41.464234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:24120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.697 [2024-07-24 23:15:41.464243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.697 [2024-07-24 23:15:41.464255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.697 [2024-07-24 23:15:41.464265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.697 [2024-07-24 23:15:41.464276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:24136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.697 [2024-07-24 23:15:41.464285] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.697 [2024-07-24 23:15:41.464297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:24144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.697 [2024-07-24 23:15:41.464307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.697 [2024-07-24 23:15:41.464318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:24152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.697 [2024-07-24 23:15:41.464328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.697 [2024-07-24 23:15:41.464338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:24160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.697 [2024-07-24 23:15:41.464347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.697 [2024-07-24 23:15:41.464358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:24168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.697 [2024-07-24 23:15:41.464367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.697 [2024-07-24 23:15:41.464378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:24176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.697 [2024-07-24 23:15:41.464387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.697 [2024-07-24 23:15:41.464398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:85 nsid:1 lba:24184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.697 [2024-07-24 23:15:41.464408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.697 [2024-07-24 23:15:41.464420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:23536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.697 [2024-07-24 23:15:41.464430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.697 [2024-07-24 23:15:41.464440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:23544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.697 [2024-07-24 23:15:41.464449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.697 [2024-07-24 23:15:41.464460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:23552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.697 [2024-07-24 23:15:41.464469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.697 [2024-07-24 23:15:41.464480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:23560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.697 [2024-07-24 23:15:41.464489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.697 [2024-07-24 23:15:41.464500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:23568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.697 [2024-07-24 23:15:41.464509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:23.697 [2024-07-24 23:15:41.464519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:23584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.697 [2024-07-24 23:15:41.464528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.697 [2024-07-24 23:15:41.464539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:23592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.697 [2024-07-24 23:15:41.464548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.697 [2024-07-24 23:15:41.464559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:23600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.697 [2024-07-24 23:15:41.464568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.697 [2024-07-24 23:15:41.464578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:23616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.697 [2024-07-24 23:15:41.464587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.697 [2024-07-24 23:15:41.464598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:23624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.697 [2024-07-24 23:15:41.464607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.697 [2024-07-24 23:15:41.464618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:23632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.697 [2024-07-24 23:15:41.464627] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.697 [2024-07-24 23:15:41.464639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:23640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.697 [2024-07-24 23:15:41.464648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.697 [2024-07-24 23:15:41.464659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.697 [2024-07-24 23:15:41.464670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.697 [2024-07-24 23:15:41.464681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:23680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.697 [2024-07-24 23:15:41.464689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.698 [2024-07-24 23:15:41.464700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.698 [2024-07-24 23:15:41.464709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.698 [2024-07-24 23:15:41.464723] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17219a0 is same with the state(5) to be set 00:30:23.698 [2024-07-24 23:15:41.464735] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:23.698 [2024-07-24 23:15:41.464742] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:23.698 [2024-07-24 23:15:41.464752] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23696 len:8 PRP1 0x0 PRP2 0x0 00:30:23.698 [2024-07-24 23:15:41.464761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.698 [2024-07-24 23:15:41.464806] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x17219a0 was disconnected and freed. reset controller. 00:30:23.698 [2024-07-24 23:15:41.464823] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:30:23.698 [2024-07-24 23:15:41.464848] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:23.698 [2024-07-24 23:15:41.464857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.698 [2024-07-24 23:15:41.464868] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:23.698 [2024-07-24 23:15:41.464877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.698 [2024-07-24 23:15:41.464886] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:23.698 [2024-07-24 23:15:41.464895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.698 [2024-07-24 23:15:41.464905] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:23.698 [2024-07-24 23:15:41.464913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.698 [2024-07-24 
23:15:41.464923] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:23.698 [2024-07-24 23:15:41.466675] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:23.698 [2024-07-24 23:15:41.466701] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1702800 (9): Bad file descriptor 00:30:23.698 [2024-07-24 23:15:41.613452] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:30:23.698 [2024-07-24 23:15:45.024392] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:23.698 [2024-07-24 23:15:45.024437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.698 [2024-07-24 23:15:45.024449] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:23.698 [2024-07-24 23:15:45.024462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.698 [2024-07-24 23:15:45.024472] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:23.698 [2024-07-24 23:15:45.024481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.698 [2024-07-24 23:15:45.024491] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:23.698 [2024-07-24 23:15:45.024500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.698 [2024-07-24 23:15:45.024509] nvme_tcp.c: 
322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1702800 is same with the state(5) to be set 00:30:23.698 [2024-07-24 23:15:45.026364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:74984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.698 [2024-07-24 23:15:45.026387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.698 [2024-07-24 23:15:45.026403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:74992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.698 [2024-07-24 23:15:45.026413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.698 [2024-07-24 23:15:45.026425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:75008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.698 [2024-07-24 23:15:45.026435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.698 [2024-07-24 23:15:45.026446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:75064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.698 [2024-07-24 23:15:45.026456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.698 [2024-07-24 23:15:45.026467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:75072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.698 [2024-07-24 23:15:45.026476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.698 [2024-07-24 23:15:45.026487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 
lba:75088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.698 [2024-07-24 23:15:45.026497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.698 [2024-07-24 23:15:45.026508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:74552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.698 [2024-07-24 23:15:45.026518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.698 [2024-07-24 23:15:45.026529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:74560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.698 [2024-07-24 23:15:45.026538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.698 [2024-07-24 23:15:45.026549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:74584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.698 [2024-07-24 23:15:45.026558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.698 [2024-07-24 23:15:45.026569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:74608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.698 [2024-07-24 23:15:45.026582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.698 [2024-07-24 23:15:45.026593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:74624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.698 [2024-07-24 23:15:45.026602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.698 
[2024-07-24 23:15:45.026613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:74656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.698 [2024-07-24 23:15:45.026623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.698 [2024-07-24 23:15:45.026633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:74664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.698 [2024-07-24 23:15:45.026643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.698 [2024-07-24 23:15:45.026653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:74672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.698 [2024-07-24 23:15:45.026663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.698 [2024-07-24 23:15:45.026674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:75096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.698 [2024-07-24 23:15:45.026683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.698 [2024-07-24 23:15:45.026694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:75112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.698 [2024-07-24 23:15:45.026703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.698 [2024-07-24 23:15:45.026721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:75120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.698 [2024-07-24 23:15:45.026732] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.698 [2024-07-24 23:15:45.026743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:75128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.698 [2024-07-24 23:15:45.026753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.698 [2024-07-24 23:15:45.026765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:75152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.698 [2024-07-24 23:15:45.026774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.698 [2024-07-24 23:15:45.026785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:75168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.698 [2024-07-24 23:15:45.026795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.698 [2024-07-24 23:15:45.026806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:74680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.698 [2024-07-24 23:15:45.026816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.698 [2024-07-24 23:15:45.026827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:74696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.698 [2024-07-24 23:15:45.026836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.698 [2024-07-24 23:15:45.026848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 
lba:74728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.698 [2024-07-24 23:15:45.026858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.699 [2024-07-24 23:15:45.026869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:74736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.699 [2024-07-24 23:15:45.026878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.699 [2024-07-24 23:15:45.026889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:74752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.699 [2024-07-24 23:15:45.026898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.699 [2024-07-24 23:15:45.026909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:74760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.699 [2024-07-24 23:15:45.026918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.699 [2024-07-24 23:15:45.026929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:74784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.699 [2024-07-24 23:15:45.026938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.699 [2024-07-24 23:15:45.026949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:74808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.699 [2024-07-24 23:15:45.026959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.699 
[2024-07-24 23:15:45.026970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:75200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.699 [2024-07-24 23:15:45.026980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.699 [2024-07-24 23:15:45.026991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:75208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.699 [2024-07-24 23:15:45.027001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.699 [2024-07-24 23:15:45.027011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:75216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.699 [2024-07-24 23:15:45.027021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.699 [2024-07-24 23:15:45.027032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:75224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.699 [2024-07-24 23:15:45.027041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.699 [2024-07-24 23:15:45.027052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:75232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.699 [2024-07-24 23:15:45.027062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.699 [2024-07-24 23:15:45.027073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:75240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.699 [2024-07-24 23:15:45.027083] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.699 [2024-07-24 23:15:45.027094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:75248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.699 [2024-07-24 23:15:45.027103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.699 [2024-07-24 23:15:45.027116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:75256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.699 [2024-07-24 23:15:45.027125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.699 [2024-07-24 23:15:45.027136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:75264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.699 [2024-07-24 23:15:45.027146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.699 [2024-07-24 23:15:45.027156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:75272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.699 [2024-07-24 23:15:45.027166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.699 [2024-07-24 23:15:45.027177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:75280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.699 [2024-07-24 23:15:45.027187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.699 [2024-07-24 23:15:45.027198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 
lba:75288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.699 [2024-07-24 23:15:45.027207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.699 [2024-07-24 23:15:45.027218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:75296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.699 [2024-07-24 23:15:45.027228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.699 [2024-07-24 23:15:45.027239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:75304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.699 [2024-07-24 23:15:45.027248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.699 [2024-07-24 23:15:45.027259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:75312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.699 [2024-07-24 23:15:45.027269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.699 [2024-07-24 23:15:45.027280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:75320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.699 [2024-07-24 23:15:45.027289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.699 [2024-07-24 23:15:45.027300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:75328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.699 [2024-07-24 23:15:45.027309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.699 
[2024-07-24 23:15:45.027320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:75336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.699 [2024-07-24 23:15:45.027329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.699 [2024-07-24 23:15:45.027340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:75344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.699 [2024-07-24 23:15:45.027349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.700 [2024-07-24 23:15:45.027360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:75352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.700 [2024-07-24 23:15:45.027371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.700 [2024-07-24 23:15:45.027381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:75360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.700 [2024-07-24 23:15:45.027391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.700 [2024-07-24 23:15:45.027402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:75368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.700 [2024-07-24 23:15:45.027412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.700 [2024-07-24 23:15:45.027426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:75376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.700 [2024-07-24 23:15:45.027435] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.700 [2024-07-24 23:15:45.027446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:74824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.700 [2024-07-24 23:15:45.027455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.700 [2024-07-24 23:15:45.027466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:74832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.700 [2024-07-24 23:15:45.027476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.700 [2024-07-24 23:15:45.027486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:74840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.700 [2024-07-24 23:15:45.027496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.700 [2024-07-24 23:15:45.027507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:74848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.700 [2024-07-24 23:15:45.027516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.700 [2024-07-24 23:15:45.027527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:74856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.700 [2024-07-24 23:15:45.027537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.700 [2024-07-24 23:15:45.027549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 
lba:74864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.700 [2024-07-24 23:15:45.027558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.700 [2024-07-24 23:15:45.027569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:74928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.700 [2024-07-24 23:15:45.027578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.700 [2024-07-24 23:15:45.027589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:74952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.700 [2024-07-24 23:15:45.027599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.700 [2024-07-24 23:15:45.027610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:75384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.700 [2024-07-24 23:15:45.027619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.700 [2024-07-24 23:15:45.027632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:75392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.700 [2024-07-24 23:15:45.027641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.700 [2024-07-24 23:15:45.027652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:75400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.700 [2024-07-24 23:15:45.027661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.700 
[2024-07-24 23:15:45.027672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:75408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.700 [2024-07-24 23:15:45.027682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.700 [2024-07-24 23:15:45.027693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:75416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.700 [2024-07-24 23:15:45.027702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.700 [2024-07-24 23:15:45.027713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:75424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.700 [2024-07-24 23:15:45.027727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.700 [2024-07-24 23:15:45.027738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:75432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.700 [2024-07-24 23:15:45.027748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.700 [2024-07-24 23:15:45.027759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:75440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.700 [2024-07-24 23:15:45.027769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.700 [2024-07-24 23:15:45.027780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:75448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.700 [2024-07-24 23:15:45.027789] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.700 [2024-07-24 23:15:45.027800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:75456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.700 [2024-07-24 23:15:45.027810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.700 [2024-07-24 23:15:45.027821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:75464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.700 [2024-07-24 23:15:45.027830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.700 [2024-07-24 23:15:45.027841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:75472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.700 [2024-07-24 23:15:45.027851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.700 [2024-07-24 23:15:45.027862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:75480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.700 [2024-07-24 23:15:45.027871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.700 [2024-07-24 23:15:45.027882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:75488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.700 [2024-07-24 23:15:45.027895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.700 [2024-07-24 23:15:45.027905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:75496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.700 [2024-07-24 23:15:45.027915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.700 [2024-07-24 23:15:45.027925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:75504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.700 [2024-07-24 23:15:45.027935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.700 [2024-07-24 23:15:45.027946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:75512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.700 [2024-07-24 23:15:45.027955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.700 [2024-07-24 23:15:45.027966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:75520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.700 [2024-07-24 23:15:45.027975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.700 [2024-07-24 23:15:45.027986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:75528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.700 [2024-07-24 23:15:45.027995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.700 [2024-07-24 23:15:45.028006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:75536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.701 [2024-07-24 23:15:45.028015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.701 
[2024-07-24 23:15:45.028026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:75544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.701 [2024-07-24 23:15:45.028035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.701 [2024-07-24 23:15:45.028046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:75552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.701 [2024-07-24 23:15:45.028056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.701 [2024-07-24 23:15:45.028067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:75560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.701 [2024-07-24 23:15:45.028077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.701 [2024-07-24 23:15:45.028087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:75568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.701 [2024-07-24 23:15:45.028097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.701 [2024-07-24 23:15:45.028107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:75576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.701 [2024-07-24 23:15:45.028117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.701 [2024-07-24 23:15:45.028128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:75584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.701 [2024-07-24 23:15:45.028138] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.701 [2024-07-24 23:15:45.028150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:74976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.701 [2024-07-24 23:15:45.028160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.701 [2024-07-24 23:15:45.028171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:75000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.701 [2024-07-24 23:15:45.028180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.701 [2024-07-24 23:15:45.028191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:75016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.701 [2024-07-24 23:15:45.028200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.701 [2024-07-24 23:15:45.028211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:75024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.701 [2024-07-24 23:15:45.028221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.701 [2024-07-24 23:15:45.028231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:75032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.701 [2024-07-24 23:15:45.028241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.701 [2024-07-24 23:15:45.028252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 
lba:75040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.701 [2024-07-24 23:15:45.028261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.701 [2024-07-24 23:15:45.028272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:75048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.701 [2024-07-24 23:15:45.028281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.701 [2024-07-24 23:15:45.028292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:75056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.701 [2024-07-24 23:15:45.028301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.701 [2024-07-24 23:15:45.028312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:75592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.701 [2024-07-24 23:15:45.028322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.701 [2024-07-24 23:15:45.028333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:75600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.701 [2024-07-24 23:15:45.028342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.701 [2024-07-24 23:15:45.028353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:75608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.701 [2024-07-24 23:15:45.028362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.701 
[2024-07-24 23:15:45.028373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:75616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.701 [2024-07-24 23:15:45.028383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.701 [2024-07-24 23:15:45.028394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:75624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.701 [2024-07-24 23:15:45.028404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.701 [2024-07-24 23:15:45.028416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:75632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.701 [2024-07-24 23:15:45.028425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.701 [2024-07-24 23:15:45.028436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:75640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.701 [2024-07-24 23:15:45.028446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.701 [2024-07-24 23:15:45.028456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:75648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.701 [2024-07-24 23:15:45.028466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.701 [2024-07-24 23:15:45.028476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:75656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.701 [2024-07-24 23:15:45.028486] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.701 [2024-07-24 23:15:45.028497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:75664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.701 [2024-07-24 23:15:45.028506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.701 [2024-07-24 23:15:45.028516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:75672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.701 [2024-07-24 23:15:45.028525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.701 [2024-07-24 23:15:45.028536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:75680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.701 [2024-07-24 23:15:45.028545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.701 [2024-07-24 23:15:45.028556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:75688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.701 [2024-07-24 23:15:45.028566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.701 [2024-07-24 23:15:45.028576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:75696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.701 [2024-07-24 23:15:45.028586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.701 [2024-07-24 23:15:45.028596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 
lba:75704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.701 [2024-07-24 23:15:45.028606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.701 [2024-07-24 23:15:45.028617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:75712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.701 [2024-07-24 23:15:45.028627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.701 [2024-07-24 23:15:45.028638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:75720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.701 [2024-07-24 23:15:45.028647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.701 [2024-07-24 23:15:45.028658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:75728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.701 [2024-07-24 23:15:45.028669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.701 [2024-07-24 23:15:45.028680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:75736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.701 [2024-07-24 23:15:45.028689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.701 [2024-07-24 23:15:45.028700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:75744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.701 [2024-07-24 23:15:45.028710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.701 
[2024-07-24 23:15:45.028726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:75752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.701 [2024-07-24 23:15:45.028736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.701 [2024-07-24 23:15:45.028747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:75760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.701 [2024-07-24 23:15:45.028756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.701 [2024-07-24 23:15:45.028767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:75768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.701 [2024-07-24 23:15:45.028777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.701 [2024-07-24 23:15:45.028788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:75776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.702 [2024-07-24 23:15:45.028797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.702 [2024-07-24 23:15:45.028808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:75784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.702 [2024-07-24 23:15:45.028818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.702 [2024-07-24 23:15:45.028829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:75792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.702 [2024-07-24 23:15:45.028838] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.702 [2024-07-24 23:15:45.028849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:75800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.702 [2024-07-24 23:15:45.028858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.702 [2024-07-24 23:15:45.028869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:75080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.702 [2024-07-24 23:15:45.028878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.702 [2024-07-24 23:15:45.028889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:75104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.702 [2024-07-24 23:15:45.028898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.702 [2024-07-24 23:15:45.028909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:75136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.702 [2024-07-24 23:15:45.028918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.702 [2024-07-24 23:15:45.028931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:75144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.702 [2024-07-24 23:15:45.028940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.702 [2024-07-24 23:15:45.028951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 
lba:75160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.702 [2024-07-24 23:15:45.028961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.702 [2024-07-24 23:15:45.028972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:75176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.702 [2024-07-24 23:15:45.028981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.702 [2024-07-24 23:15:45.028992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:75184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.702 [2024-07-24 23:15:45.029001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.702 [2024-07-24 23:15:45.029012] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17227e0 is same with the state(5) to be set 00:30:23.702 [2024-07-24 23:15:45.029023] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:23.702 [2024-07-24 23:15:45.029031] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:23.702 [2024-07-24 23:15:45.029039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:75192 len:8 PRP1 0x0 PRP2 0x0 00:30:23.702 [2024-07-24 23:15:45.029049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.702 [2024-07-24 23:15:45.029094] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x17227e0 was disconnected and freed. reset controller. 
00:30:23.702 [2024-07-24 23:15:45.029104] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:30:23.702 [2024-07-24 23:15:45.029115] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:23.702 [2024-07-24 23:15:45.030817] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:23.702 [2024-07-24 23:15:45.030842] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1702800 (9): Bad file descriptor 00:30:23.702 [2024-07-24 23:15:45.064804] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:30:23.702 [2024-07-24 23:15:49.400738] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:23.702 [2024-07-24 23:15:49.400775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.702 [2024-07-24 23:15:49.400787] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:23.702 [2024-07-24 23:15:49.400797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.702 [2024-07-24 23:15:49.400808] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:23.702 [2024-07-24 23:15:49.400818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.702 [2024-07-24 23:15:49.400828] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:23.702 [2024-07-24 23:15:49.400838] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.702 [2024-07-24 23:15:49.400851] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1702800 is same with the state(5) to be set 00:30:23.702 [2024-07-24 23:15:49.401307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:59888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.702 [2024-07-24 23:15:49.401320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.702 [2024-07-24 23:15:49.401335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:59896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.702 [2024-07-24 23:15:49.401345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.702 [2024-07-24 23:15:49.401356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:59904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.702 [2024-07-24 23:15:49.401365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.702 [2024-07-24 23:15:49.401376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:59912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.702 [2024-07-24 23:15:49.401386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.702 [2024-07-24 23:15:49.401396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:59920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.702 [2024-07-24 23:15:49.401405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.702 [2024-07-24 23:15:49.401416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:59944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.702 [2024-07-24 23:15:49.401425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.702 [2024-07-24 23:15:49.401436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:59952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.702 [2024-07-24 23:15:49.401445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.702 [2024-07-24 23:15:49.401456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:59368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.702 [2024-07-24 23:15:49.401465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.702 [2024-07-24 23:15:49.401476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:59376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.702 [2024-07-24 23:15:49.401485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.702 [2024-07-24 23:15:49.401496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:59408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.702 [2024-07-24 23:15:49.401505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.702 [2024-07-24 23:15:49.401516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:59440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.702 [2024-07-24 
23:15:49.401525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.702 [2024-07-24 23:15:49.401536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:59448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.702 [2024-07-24 23:15:49.401545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.702 [2024-07-24 23:15:49.401563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:59488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.702 [2024-07-24 23:15:49.401573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.702 [2024-07-24 23:15:49.401584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:59496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.702 [2024-07-24 23:15:49.401593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.702 [2024-07-24 23:15:49.401604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:59520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.702 [2024-07-24 23:15:49.401613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.702 [2024-07-24 23:15:49.401624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:59960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.702 [2024-07-24 23:15:49.401634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.702 [2024-07-24 23:15:49.401644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:11 nsid:1 lba:59968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.702 [2024-07-24 23:15:49.401653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.702 [2024-07-24 23:15:49.401664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:59984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.702 [2024-07-24 23:15:49.401673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.702 [2024-07-24 23:15:49.401684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:60008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.703 [2024-07-24 23:15:49.401693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.703 [2024-07-24 23:15:49.401703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:60016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.703 [2024-07-24 23:15:49.401712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.703 [2024-07-24 23:15:49.401728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:60032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.703 [2024-07-24 23:15:49.401738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.703 [2024-07-24 23:15:49.401749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:60040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.703 [2024-07-24 23:15:49.401758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:30:23.703 [2024-07-24 23:15:49.401769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:60048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.703 [2024-07-24 23:15:49.401778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.703 [2024-07-24 23:15:49.401790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:60056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.703 [2024-07-24 23:15:49.401799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.703 [2024-07-24 23:15:49.401810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:60064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.703 [2024-07-24 23:15:49.401819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.703 [2024-07-24 23:15:49.401832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:59528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.703 [2024-07-24 23:15:49.401841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.703 [2024-07-24 23:15:49.401851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:59536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.703 [2024-07-24 23:15:49.401860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.703 [2024-07-24 23:15:49.401871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:59544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.703 [2024-07-24 23:15:49.401881] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.703 [2024-07-24 23:15:49.401892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:59560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.703 [2024-07-24 23:15:49.401901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.703 [2024-07-24 23:15:49.401911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:59568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.703 [2024-07-24 23:15:49.401921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.703 [2024-07-24 23:15:49.401932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:59584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.703 [2024-07-24 23:15:49.401941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.703 [2024-07-24 23:15:49.401951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:59600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.703 [2024-07-24 23:15:49.401960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.703 [2024-07-24 23:15:49.401971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:59608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.703 [2024-07-24 23:15:49.401981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.703 [2024-07-24 23:15:49.401991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:83 nsid:1 lba:60072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.703 [2024-07-24 23:15:49.402000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.703 [2024-07-24 23:15:49.402010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:60080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.703 [2024-07-24 23:15:49.402019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.703 [2024-07-24 23:15:49.402030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:60088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.703 [2024-07-24 23:15:49.402039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.703 [2024-07-24 23:15:49.402050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:60112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.703 [2024-07-24 23:15:49.402059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.703 [2024-07-24 23:15:49.402070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:60120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.703 [2024-07-24 23:15:49.402081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.703 [2024-07-24 23:15:49.402092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:60160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.703 [2024-07-24 23:15:49.402101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:23.703 [2024-07-24 23:15:49.402112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:60168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.703 [2024-07-24 23:15:49.402121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.703 [2024-07-24 23:15:49.402131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:60192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.703 [2024-07-24 23:15:49.402140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.703 [2024-07-24 23:15:49.402151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:60208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.703 [2024-07-24 23:15:49.402161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.703 [2024-07-24 23:15:49.402171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:60216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.703 [2024-07-24 23:15:49.402180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.703 [2024-07-24 23:15:49.402191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:60224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.703 [2024-07-24 23:15:49.402200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.703 [2024-07-24 23:15:49.402211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:59624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.703 [2024-07-24 23:15:49.402220] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.703 [2024-07-24 23:15:49.402231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:59640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.703 [2024-07-24 23:15:49.402240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.703 [2024-07-24 23:15:49.402251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:59656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.703 [2024-07-24 23:15:49.402260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.703 [2024-07-24 23:15:49.402271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:59672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.703 [2024-07-24 23:15:49.402280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.703 [2024-07-24 23:15:49.402290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:59688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.703 [2024-07-24 23:15:49.402299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.703 [2024-07-24 23:15:49.402310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:59696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.703 [2024-07-24 23:15:49.402320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.703 [2024-07-24 23:15:49.402332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 
lba:59704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.703 [2024-07-24 23:15:49.402341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.703 [2024-07-24 23:15:49.402351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:59712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.703 [2024-07-24 23:15:49.402361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.703 [2024-07-24 23:15:49.402372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:60232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.703 [2024-07-24 23:15:49.402381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.703 [2024-07-24 23:15:49.402392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:60240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.703 [2024-07-24 23:15:49.402401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.703 [2024-07-24 23:15:49.402411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:60248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.703 [2024-07-24 23:15:49.402420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.703 [2024-07-24 23:15:49.402431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:60256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.703 [2024-07-24 23:15:49.402441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.703 
[2024-07-24 23:15:49.402451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:60264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.703 [2024-07-24 23:15:49.402460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.704 [2024-07-24 23:15:49.402471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:60272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.704 [2024-07-24 23:15:49.402480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.704 [2024-07-24 23:15:49.402491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:60280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.704 [2024-07-24 23:15:49.402500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.704 [2024-07-24 23:15:49.402511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:60288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.704 [2024-07-24 23:15:49.402520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.704 [2024-07-24 23:15:49.402530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:60296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.704 [2024-07-24 23:15:49.402540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.704 [2024-07-24 23:15:49.402551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:60304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.704 [2024-07-24 23:15:49.402560] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.704 [2024-07-24 23:15:49.402571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:60312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.704 [2024-07-24 23:15:49.402581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.704 [2024-07-24 23:15:49.402592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:60320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.704 [2024-07-24 23:15:49.402601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.704 [2024-07-24 23:15:49.402611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:59736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.704 [2024-07-24 23:15:49.402620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.704 [2024-07-24 23:15:49.402631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:59768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.704 [2024-07-24 23:15:49.402640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.704 [2024-07-24 23:15:49.402650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:59784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.704 [2024-07-24 23:15:49.402660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.704 [2024-07-24 23:15:49.402670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 
lba:59816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.704 [2024-07-24 23:15:49.402679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.704 [2024-07-24 23:15:49.402690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:59824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.704 [2024-07-24 23:15:49.402698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.704 [2024-07-24 23:15:49.402709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:59848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.704 [2024-07-24 23:15:49.402723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.704 [2024-07-24 23:15:49.402734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:59856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.704 [2024-07-24 23:15:49.402743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.704 [2024-07-24 23:15:49.402754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:59864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.704 [2024-07-24 23:15:49.402763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.704 [2024-07-24 23:15:49.402774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:60328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.704 [2024-07-24 23:15:49.402783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.704 
[2024-07-24 23:15:49.402793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:60336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.704 [2024-07-24 23:15:49.402802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.704 [2024-07-24 23:15:49.402813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:60344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.704 [2024-07-24 23:15:49.402822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.704 [2024-07-24 23:15:49.402832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:60352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.704 [2024-07-24 23:15:49.402843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.704 [2024-07-24 23:15:49.402854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:60360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.704 [2024-07-24 23:15:49.402863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.704 [2024-07-24 23:15:49.402873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:60368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.704 [2024-07-24 23:15:49.402883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.704 [2024-07-24 23:15:49.402893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:60376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.704 [2024-07-24 23:15:49.402902] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.704 [2024-07-24 23:15:49.402913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:60384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.704 [2024-07-24 23:15:49.402922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.704 [2024-07-24 23:15:49.402932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:60392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.704 [2024-07-24 23:15:49.402942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.704 [2024-07-24 23:15:49.402952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:60400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.704 [2024-07-24 23:15:49.402962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.704 [2024-07-24 23:15:49.402972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:60408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.704 [2024-07-24 23:15:49.402981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.704 [2024-07-24 23:15:49.402991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:60416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.704 [2024-07-24 23:15:49.403001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.704 [2024-07-24 23:15:49.403012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 
lba:60424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.704 [2024-07-24 23:15:49.403022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.704 [2024-07-24 23:15:49.403033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:60432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.704 [2024-07-24 23:15:49.403042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.704 [2024-07-24 23:15:49.403053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:59880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.704 [2024-07-24 23:15:49.403063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.704 [2024-07-24 23:15:49.403075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:59928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.705 [2024-07-24 23:15:49.403085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.705 [2024-07-24 23:15:49.403097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:59936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.705 [2024-07-24 23:15:49.403106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.705 [2024-07-24 23:15:49.403118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:59976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.705 [2024-07-24 23:15:49.403129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.705 
[2024-07-24 23:15:49.403140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:59992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.705 [2024-07-24 23:15:49.403150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.705 [2024-07-24 23:15:49.403161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:60000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.705 [2024-07-24 23:15:49.403172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.705 [2024-07-24 23:15:49.403185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:60024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.705 [2024-07-24 23:15:49.403195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.705 [2024-07-24 23:15:49.403206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:60096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.705 [2024-07-24 23:15:49.403216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.705 [2024-07-24 23:15:49.403228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:60440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.705 [2024-07-24 23:15:49.403238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.705 [2024-07-24 23:15:49.403249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:60448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.705 [2024-07-24 23:15:49.403257] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.705 [2024-07-24 23:15:49.403268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:60456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.705 [2024-07-24 23:15:49.403277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.705 [2024-07-24 23:15:49.403288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:60464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.705 [2024-07-24 23:15:49.403297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.705 [2024-07-24 23:15:49.403307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:60472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.705 [2024-07-24 23:15:49.403318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.705 [2024-07-24 23:15:49.403329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:60480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.705 [2024-07-24 23:15:49.403338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.705 [2024-07-24 23:15:49.403349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:60488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.705 [2024-07-24 23:15:49.403360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.705 [2024-07-24 23:15:49.403370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 
lba:60496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.705 [2024-07-24 23:15:49.403379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.705 [2024-07-24 23:15:49.403391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:60504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.705 [2024-07-24 23:15:49.403400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.705 [2024-07-24 23:15:49.403411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:60512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.705 [2024-07-24 23:15:49.403420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.705 [2024-07-24 23:15:49.403431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:60520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.705 [2024-07-24 23:15:49.403440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.705 [2024-07-24 23:15:49.403450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:60528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.705 [2024-07-24 23:15:49.403459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.705 [2024-07-24 23:15:49.403470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:60536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.705 [2024-07-24 23:15:49.403479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.705 
[2024-07-24 23:15:49.403490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:60544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.705 [2024-07-24 23:15:49.403498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.705 [2024-07-24 23:15:49.403510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:60552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.705 [2024-07-24 23:15:49.403520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.705 [2024-07-24 23:15:49.403531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:60560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.705 [2024-07-24 23:15:49.403540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.705 [2024-07-24 23:15:49.403551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:60568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.705 [2024-07-24 23:15:49.403559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.705 [2024-07-24 23:15:49.403570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:60576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.705 [2024-07-24 23:15:49.403579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.705 [2024-07-24 23:15:49.403590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:60584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.705 [2024-07-24 23:15:49.403599] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.705 [2024-07-24 23:15:49.403611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:60592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.705 [2024-07-24 23:15:49.403620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.705 [2024-07-24 23:15:49.403631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:60600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.705 [2024-07-24 23:15:49.403640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.705 [2024-07-24 23:15:49.403651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:60608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.705 [2024-07-24 23:15:49.403660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.705 [2024-07-24 23:15:49.403671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:60616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.705 [2024-07-24 23:15:49.403680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.705 [2024-07-24 23:15:49.403690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:60624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.705 [2024-07-24 23:15:49.403699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.705 [2024-07-24 23:15:49.403710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 
lba:60632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.705 [2024-07-24 23:15:49.403722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.705 [2024-07-24 23:15:49.403733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:60640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.705 [2024-07-24 23:15:49.403743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.705 [2024-07-24 23:15:49.403753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:60104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.705 [2024-07-24 23:15:49.403763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.705 [2024-07-24 23:15:49.403774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:60128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.705 [2024-07-24 23:15:49.403783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.705 [2024-07-24 23:15:49.403794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:60136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.705 [2024-07-24 23:15:49.403803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.705 [2024-07-24 23:15:49.403814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:60144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.705 [2024-07-24 23:15:49.403823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.705 
[2024-07-24 23:15:49.403835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:60152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.705 [2024-07-24 23:15:49.403844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.705 [2024-07-24 23:15:49.403855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:60176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.705 [2024-07-24 23:15:49.403865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.705 [2024-07-24 23:15:49.403875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:60184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.706 [2024-07-24 23:15:49.403885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.706 [2024-07-24 23:15:49.403895] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1730e10 is same with the state(5) to be set 00:30:23.706 [2024-07-24 23:15:49.403906] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:23.706 [2024-07-24 23:15:49.403913] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:23.706 [2024-07-24 23:15:49.403922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:60200 len:8 PRP1 0x0 PRP2 0x0 00:30:23.706 [2024-07-24 23:15:49.403930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.706 [2024-07-24 23:15:49.403975] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1730e10 was disconnected and freed. reset controller. 
00:30:23.706 [2024-07-24 23:15:49.403987] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:30:23.706 [2024-07-24 23:15:49.403997] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:23.706 [2024-07-24 23:15:49.405836] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:23.706 [2024-07-24 23:15:49.405863] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1702800 (9): Bad file descriptor 00:30:23.706 [2024-07-24 23:15:49.478530] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:30:23.706 00:30:23.706 Latency(us) 00:30:23.706 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:23.706 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:23.706 Verification LBA range: start 0x0 length 0x4000 00:30:23.706 NVMe0n1 : 15.00 17565.16 68.61 1287.94 0.00 6777.02 684.85 15204.35 00:30:23.706 =================================================================================================================== 00:30:23.706 Total : 17565.16 68.61 1287.94 0.00 6777.02 684.85 15204.35 00:30:23.706 Received shutdown signal, test time was about 15.000000 seconds 00:30:23.706 00:30:23.706 Latency(us) 00:30:23.706 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:23.706 =================================================================================================================== 00:30:23.706 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:23.706 23:15:55 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:30:23.706 23:15:55 -- host/failover.sh@65 -- # count=3 00:30:23.706 23:15:55 -- host/failover.sh@67 -- # (( count != 3 )) 00:30:23.706 23:15:55 -- host/failover.sh@73 -- # bdevperf_pid=3379519 00:30:23.706 23:15:55 -- host/failover.sh@72 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:30:23.706 23:15:55 -- host/failover.sh@75 -- # waitforlisten 3379519 /var/tmp/bdevperf.sock 00:30:23.706 23:15:55 -- common/autotest_common.sh@819 -- # '[' -z 3379519 ']' 00:30:23.706 23:15:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:23.706 23:15:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:23.706 23:15:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:23.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:23.706 23:15:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:23.706 23:15:55 -- common/autotest_common.sh@10 -- # set +x 00:30:24.273 23:15:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:24.273 23:15:56 -- common/autotest_common.sh@852 -- # return 0 00:30:24.273 23:15:56 -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:24.273 [2024-07-24 23:15:56.680913] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:24.531 23:15:56 -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:24.531 [2024-07-24 23:15:56.853382] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:30:24.531 23:15:56 -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:24.790 NVMe0n1 00:30:24.790 23:15:57 -- 
host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:25.049 00:30:25.049 23:15:57 -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:25.615 00:30:25.615 23:15:57 -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:25.616 23:15:57 -- host/failover.sh@82 -- # grep -q NVMe0 00:30:25.616 23:15:58 -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:25.874 23:15:58 -- host/failover.sh@87 -- # sleep 3 00:30:29.162 23:16:01 -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:29.162 23:16:01 -- host/failover.sh@88 -- # grep -q NVMe0 00:30:29.162 23:16:01 -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:29.162 23:16:01 -- host/failover.sh@90 -- # run_test_pid=3380505 00:30:29.162 23:16:01 -- host/failover.sh@92 -- # wait 3380505 00:30:30.099 0 00:30:30.099 23:16:02 -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:30.099 [2024-07-24 23:15:55.720767] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:30:30.099 [2024-07-24 23:15:55.720826] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3379519 ] 00:30:30.099 EAL: No free 2048 kB hugepages reported on node 1 00:30:30.099 [2024-07-24 23:15:55.790880] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:30.099 [2024-07-24 23:15:55.827166] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:30.099 [2024-07-24 23:15:58.168741] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:30:30.099 [2024-07-24 23:15:58.168784] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:30.099 [2024-07-24 23:15:58.168797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.099 [2024-07-24 23:15:58.168809] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:30.099 [2024-07-24 23:15:58.168818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.099 [2024-07-24 23:15:58.168827] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:30.099 [2024-07-24 23:15:58.168836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.099 [2024-07-24 23:15:58.168846] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:30.099 [2024-07-24 23:15:58.168855] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.099 [2024-07-24 23:15:58.168864] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:30.099 [2024-07-24 23:15:58.168887] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:30.099 [2024-07-24 23:15:58.168902] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x969800 (9): Bad file descriptor 00:30:30.099 [2024-07-24 23:15:58.189508] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:30:30.099 Running I/O for 1 seconds... 00:30:30.099 00:30:30.099 Latency(us) 00:30:30.099 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:30.099 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:30.099 Verification LBA range: start 0x0 length 0x4000 00:30:30.100 NVMe0n1 : 1.01 17872.12 69.81 0.00 0.00 7132.53 1048.58 8598.32 00:30:30.100 =================================================================================================================== 00:30:30.100 Total : 17872.12 69.81 0.00 0.00 7132.53 1048.58 8598.32 00:30:30.100 23:16:02 -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:30.100 23:16:02 -- host/failover.sh@95 -- # grep -q NVMe0 00:30:30.359 23:16:02 -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:30.617 23:16:02 -- host/failover.sh@99 -- # grep -q NVMe0 00:30:30.617 23:16:02 -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:30.617 23:16:03 -- host/failover.sh@100 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:30.876 23:16:03 -- host/failover.sh@101 -- # sleep 3 00:30:34.164 23:16:06 -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:34.164 23:16:06 -- host/failover.sh@103 -- # grep -q NVMe0 00:30:34.164 23:16:06 -- host/failover.sh@108 -- # killprocess 3379519 00:30:34.164 23:16:06 -- common/autotest_common.sh@926 -- # '[' -z 3379519 ']' 00:30:34.164 23:16:06 -- common/autotest_common.sh@930 -- # kill -0 3379519 00:30:34.164 23:16:06 -- common/autotest_common.sh@931 -- # uname 00:30:34.164 23:16:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:34.164 23:16:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3379519 00:30:34.164 23:16:06 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:30:34.164 23:16:06 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:30:34.164 23:16:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3379519' 00:30:34.164 killing process with pid 3379519 00:30:34.164 23:16:06 -- common/autotest_common.sh@945 -- # kill 3379519 00:30:34.164 23:16:06 -- common/autotest_common.sh@950 -- # wait 3379519 00:30:34.164 23:16:06 -- host/failover.sh@110 -- # sync 00:30:34.164 23:16:06 -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:34.423 23:16:06 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:30:34.423 23:16:06 -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:34.423 23:16:06 -- host/failover.sh@116 -- # nvmftestfini 00:30:34.423 23:16:06 -- nvmf/common.sh@476 -- # nvmfcleanup 00:30:34.423 23:16:06 -- 
nvmf/common.sh@116 -- # sync 00:30:34.423 23:16:06 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:30:34.423 23:16:06 -- nvmf/common.sh@119 -- # set +e 00:30:34.423 23:16:06 -- nvmf/common.sh@120 -- # for i in {1..20} 00:30:34.423 23:16:06 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:30:34.423 rmmod nvme_tcp 00:30:34.423 rmmod nvme_fabrics 00:30:34.423 rmmod nvme_keyring 00:30:34.423 23:16:06 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:30:34.423 23:16:06 -- nvmf/common.sh@123 -- # set -e 00:30:34.423 23:16:06 -- nvmf/common.sh@124 -- # return 0 00:30:34.423 23:16:06 -- nvmf/common.sh@477 -- # '[' -n 3376429 ']' 00:30:34.423 23:16:06 -- nvmf/common.sh@478 -- # killprocess 3376429 00:30:34.423 23:16:06 -- common/autotest_common.sh@926 -- # '[' -z 3376429 ']' 00:30:34.423 23:16:06 -- common/autotest_common.sh@930 -- # kill -0 3376429 00:30:34.423 23:16:06 -- common/autotest_common.sh@931 -- # uname 00:30:34.423 23:16:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:34.423 23:16:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3376429 00:30:34.683 23:16:06 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:30:34.683 23:16:06 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:30:34.683 23:16:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3376429' 00:30:34.683 killing process with pid 3376429 00:30:34.683 23:16:06 -- common/autotest_common.sh@945 -- # kill 3376429 00:30:34.683 23:16:06 -- common/autotest_common.sh@950 -- # wait 3376429 00:30:34.683 23:16:07 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:30:34.683 23:16:07 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:30:34.683 23:16:07 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:30:34.683 23:16:07 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:34.683 23:16:07 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:30:34.683 23:16:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:30:34.683 23:16:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:34.683 23:16:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:37.265 23:16:09 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:30:37.266 00:30:37.266 real 0m39.182s 00:30:37.266 user 2m1.202s 00:30:37.266 sys 0m9.782s 00:30:37.266 23:16:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:37.266 23:16:09 -- common/autotest_common.sh@10 -- # set +x 00:30:37.266 ************************************ 00:30:37.266 END TEST nvmf_failover 00:30:37.266 ************************************ 00:30:37.266 23:16:09 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:30:37.266 23:16:09 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:30:37.266 23:16:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:37.266 23:16:09 -- common/autotest_common.sh@10 -- # set +x 00:30:37.266 ************************************ 00:30:37.266 START TEST nvmf_discovery 00:30:37.266 ************************************ 00:30:37.266 23:16:09 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:30:37.266 * Looking for test storage... 
00:30:37.266 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:37.266 23:16:09 -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:37.266 23:16:09 -- nvmf/common.sh@7 -- # uname -s 00:30:37.266 23:16:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:37.266 23:16:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:37.266 23:16:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:37.266 23:16:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:37.266 23:16:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:37.266 23:16:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:37.266 23:16:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:37.266 23:16:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:37.266 23:16:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:37.266 23:16:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:37.266 23:16:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:30:37.266 23:16:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:30:37.266 23:16:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:37.266 23:16:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:37.266 23:16:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:37.266 23:16:09 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:37.266 23:16:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:37.266 23:16:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:37.266 23:16:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:37.266 23:16:09 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:37.266 23:16:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:37.266 23:16:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:37.266 23:16:09 -- paths/export.sh@5 -- # export PATH 00:30:37.266 23:16:09 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:37.266 23:16:09 -- nvmf/common.sh@46 -- # : 0 00:30:37.266 23:16:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:30:37.266 23:16:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:30:37.266 23:16:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:30:37.266 23:16:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:37.266 23:16:09 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:37.266 23:16:09 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:30:37.266 23:16:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:30:37.266 23:16:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:30:37.266 23:16:09 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:30:37.266 23:16:09 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:30:37.266 23:16:09 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:30:37.266 23:16:09 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:30:37.266 23:16:09 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:30:37.266 23:16:09 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:30:37.266 23:16:09 -- host/discovery.sh@25 -- # nvmftestinit 00:30:37.266 23:16:09 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:30:37.266 23:16:09 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:37.266 23:16:09 -- nvmf/common.sh@436 -- # prepare_net_devs 00:30:37.266 23:16:09 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:30:37.266 
23:16:09 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:30:37.266 23:16:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:37.266 23:16:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:37.266 23:16:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:37.266 23:16:09 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:30:37.266 23:16:09 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:30:37.266 23:16:09 -- nvmf/common.sh@284 -- # xtrace_disable 00:30:37.266 23:16:09 -- common/autotest_common.sh@10 -- # set +x 00:30:43.832 23:16:15 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:30:43.832 23:16:15 -- nvmf/common.sh@290 -- # pci_devs=() 00:30:43.832 23:16:15 -- nvmf/common.sh@290 -- # local -a pci_devs 00:30:43.832 23:16:15 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:30:43.832 23:16:15 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:30:43.832 23:16:15 -- nvmf/common.sh@292 -- # pci_drivers=() 00:30:43.832 23:16:15 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:30:43.832 23:16:15 -- nvmf/common.sh@294 -- # net_devs=() 00:30:43.832 23:16:15 -- nvmf/common.sh@294 -- # local -ga net_devs 00:30:43.832 23:16:15 -- nvmf/common.sh@295 -- # e810=() 00:30:43.832 23:16:15 -- nvmf/common.sh@295 -- # local -ga e810 00:30:43.832 23:16:15 -- nvmf/common.sh@296 -- # x722=() 00:30:43.832 23:16:15 -- nvmf/common.sh@296 -- # local -ga x722 00:30:43.832 23:16:15 -- nvmf/common.sh@297 -- # mlx=() 00:30:43.832 23:16:15 -- nvmf/common.sh@297 -- # local -ga mlx 00:30:43.832 23:16:15 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:43.832 23:16:15 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:43.832 23:16:15 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:43.832 23:16:15 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:43.832 23:16:15 -- nvmf/common.sh@307 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:43.832 23:16:15 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:43.832 23:16:15 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:43.832 23:16:15 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:43.832 23:16:15 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:43.832 23:16:15 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:43.832 23:16:15 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:43.832 23:16:15 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:30:43.832 23:16:15 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:30:43.832 23:16:15 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:30:43.832 23:16:15 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:30:43.832 23:16:15 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:30:43.832 23:16:15 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:30:43.832 23:16:15 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:43.832 23:16:15 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:43.832 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:43.832 23:16:15 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:43.832 23:16:15 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:43.832 23:16:15 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:43.832 23:16:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:43.832 23:16:15 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:43.832 23:16:15 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:43.832 23:16:15 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:43.832 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:43.832 23:16:15 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:43.832 23:16:15 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:43.832 23:16:15 -- 
nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:43.832 23:16:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:43.832 23:16:15 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:43.832 23:16:15 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:30:43.832 23:16:15 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:30:43.832 23:16:15 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:30:43.832 23:16:15 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:43.832 23:16:15 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:43.832 23:16:15 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:43.832 23:16:15 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:43.832 23:16:15 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:43.832 Found net devices under 0000:af:00.0: cvl_0_0 00:30:43.832 23:16:15 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:43.832 23:16:15 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:43.832 23:16:15 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:43.832 23:16:15 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:43.832 23:16:15 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:43.832 23:16:15 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:43.832 Found net devices under 0000:af:00.1: cvl_0_1 00:30:43.832 23:16:15 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:43.832 23:16:15 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:30:43.832 23:16:15 -- nvmf/common.sh@402 -- # is_hw=yes 00:30:43.832 23:16:15 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:30:43.832 23:16:15 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:30:43.832 23:16:15 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:30:43.832 23:16:15 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:43.832 23:16:15 -- nvmf/common.sh@229 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:43.832 23:16:15 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:43.832 23:16:15 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:30:43.832 23:16:15 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:43.832 23:16:15 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:43.832 23:16:15 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:30:43.832 23:16:15 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:43.832 23:16:15 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:43.832 23:16:15 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:30:43.832 23:16:15 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:30:43.832 23:16:15 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:30:43.832 23:16:15 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:43.832 23:16:15 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:43.832 23:16:15 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:43.832 23:16:15 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:30:43.832 23:16:16 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:43.832 23:16:16 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:43.832 23:16:16 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:43.832 23:16:16 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:30:43.832 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:43.832 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.168 ms 00:30:43.832 00:30:43.832 --- 10.0.0.2 ping statistics --- 00:30:43.832 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:43.832 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:30:43.832 23:16:16 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:43.832 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:43.832 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:30:43.832 00:30:43.832 --- 10.0.0.1 ping statistics --- 00:30:43.832 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:43.832 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:30:43.832 23:16:16 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:43.832 23:16:16 -- nvmf/common.sh@410 -- # return 0 00:30:43.832 23:16:16 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:30:43.832 23:16:16 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:43.832 23:16:16 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:30:43.832 23:16:16 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:30:43.832 23:16:16 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:43.832 23:16:16 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:30:43.832 23:16:16 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:30:43.832 23:16:16 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:30:43.832 23:16:16 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:30:43.832 23:16:16 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:43.832 23:16:16 -- common/autotest_common.sh@10 -- # set +x 00:30:43.832 23:16:16 -- nvmf/common.sh@469 -- # nvmfpid=3385020 00:30:43.833 23:16:16 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:30:43.833 23:16:16 -- nvmf/common.sh@470 -- # waitforlisten 3385020 00:30:43.833 23:16:16 -- common/autotest_common.sh@819 
-- # '[' -z 3385020 ']' 00:30:43.833 23:16:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:43.833 23:16:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:43.833 23:16:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:43.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:43.833 23:16:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:43.833 23:16:16 -- common/autotest_common.sh@10 -- # set +x 00:30:43.833 [2024-07-24 23:16:16.238600] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:30:43.833 [2024-07-24 23:16:16.238646] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:44.091 EAL: No free 2048 kB hugepages reported on node 1 00:30:44.091 [2024-07-24 23:16:16.313066] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:44.091 [2024-07-24 23:16:16.349386] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:44.091 [2024-07-24 23:16:16.349518] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:44.091 [2024-07-24 23:16:16.349528] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:44.091 [2024-07-24 23:16:16.349537] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:44.091 [2024-07-24 23:16:16.349563] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:44.659 23:16:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:44.659 23:16:17 -- common/autotest_common.sh@852 -- # return 0 00:30:44.659 23:16:17 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:30:44.659 23:16:17 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:44.659 23:16:17 -- common/autotest_common.sh@10 -- # set +x 00:30:44.659 23:16:17 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:44.659 23:16:17 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:44.659 23:16:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:44.659 23:16:17 -- common/autotest_common.sh@10 -- # set +x 00:30:44.659 [2024-07-24 23:16:17.067414] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:44.659 23:16:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:44.659 23:16:17 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:30:44.659 23:16:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:44.659 23:16:17 -- common/autotest_common.sh@10 -- # set +x 00:30:44.660 [2024-07-24 23:16:17.075604] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:30:44.660 23:16:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:44.660 23:16:17 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:30:44.660 23:16:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:44.660 23:16:17 -- common/autotest_common.sh@10 -- # set +x 00:30:44.660 null0 00:30:44.660 23:16:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:44.660 23:16:17 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:30:44.660 23:16:17 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:30:44.660 23:16:17 -- common/autotest_common.sh@10 -- # set +x 00:30:44.921 null1 00:30:44.921 23:16:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:44.921 23:16:17 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:30:44.921 23:16:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:44.921 23:16:17 -- common/autotest_common.sh@10 -- # set +x 00:30:44.921 23:16:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:44.921 23:16:17 -- host/discovery.sh@45 -- # hostpid=3385291 00:30:44.921 23:16:17 -- host/discovery.sh@46 -- # waitforlisten 3385291 /tmp/host.sock 00:30:44.921 23:16:17 -- common/autotest_common.sh@819 -- # '[' -z 3385291 ']' 00:30:44.921 23:16:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:30:44.921 23:16:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:44.921 23:16:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:30:44.921 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:30:44.921 23:16:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:44.921 23:16:17 -- common/autotest_common.sh@10 -- # set +x 00:30:44.921 23:16:17 -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:30:44.921 [2024-07-24 23:16:17.150641] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:30:44.921 [2024-07-24 23:16:17.150690] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3385291 ] 00:30:44.921 EAL: No free 2048 kB hugepages reported on node 1 00:30:44.921 [2024-07-24 23:16:17.222680] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:44.921 [2024-07-24 23:16:17.260700] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:44.921 [2024-07-24 23:16:17.260860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:45.859 23:16:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:45.859 23:16:17 -- common/autotest_common.sh@852 -- # return 0 00:30:45.859 23:16:17 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:45.859 23:16:17 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:30:45.859 23:16:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:45.859 23:16:17 -- common/autotest_common.sh@10 -- # set +x 00:30:45.859 23:16:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:45.859 23:16:17 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:30:45.859 23:16:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:45.859 23:16:17 -- common/autotest_common.sh@10 -- # set +x 00:30:45.859 23:16:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:45.859 23:16:17 -- host/discovery.sh@72 -- # notify_id=0 00:30:45.859 23:16:17 -- host/discovery.sh@78 -- # get_subsystem_names 00:30:45.859 23:16:17 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:45.859 23:16:17 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:45.859 
23:16:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:45.859 23:16:17 -- host/discovery.sh@59 -- # sort 00:30:45.859 23:16:17 -- common/autotest_common.sh@10 -- # set +x 00:30:45.859 23:16:17 -- host/discovery.sh@59 -- # xargs 00:30:45.859 23:16:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:45.859 23:16:17 -- host/discovery.sh@78 -- # [[ '' == '' ]] 00:30:45.859 23:16:18 -- host/discovery.sh@79 -- # get_bdev_list 00:30:45.859 23:16:18 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:45.859 23:16:18 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:45.859 23:16:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:45.859 23:16:18 -- common/autotest_common.sh@10 -- # set +x 00:30:45.859 23:16:18 -- host/discovery.sh@55 -- # sort 00:30:45.859 23:16:18 -- host/discovery.sh@55 -- # xargs 00:30:45.859 23:16:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:45.859 23:16:18 -- host/discovery.sh@79 -- # [[ '' == '' ]] 00:30:45.859 23:16:18 -- host/discovery.sh@81 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:30:45.859 23:16:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:45.859 23:16:18 -- common/autotest_common.sh@10 -- # set +x 00:30:45.859 23:16:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:45.859 23:16:18 -- host/discovery.sh@82 -- # get_subsystem_names 00:30:45.859 23:16:18 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:45.859 23:16:18 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:45.859 23:16:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:45.859 23:16:18 -- common/autotest_common.sh@10 -- # set +x 00:30:45.859 23:16:18 -- host/discovery.sh@59 -- # sort 00:30:45.859 23:16:18 -- host/discovery.sh@59 -- # xargs 00:30:45.859 23:16:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:45.859 23:16:18 -- host/discovery.sh@82 -- # [[ '' == '' ]] 00:30:45.859 23:16:18 -- 
host/discovery.sh@83 -- # get_bdev_list 00:30:45.859 23:16:18 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:45.859 23:16:18 -- host/discovery.sh@55 -- # xargs 00:30:45.859 23:16:18 -- host/discovery.sh@55 -- # sort 00:30:45.859 23:16:18 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:45.859 23:16:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:45.859 23:16:18 -- common/autotest_common.sh@10 -- # set +x 00:30:45.859 23:16:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:45.859 23:16:18 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:30:45.859 23:16:18 -- host/discovery.sh@85 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:30:45.859 23:16:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:45.859 23:16:18 -- common/autotest_common.sh@10 -- # set +x 00:30:45.859 23:16:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:45.859 23:16:18 -- host/discovery.sh@86 -- # get_subsystem_names 00:30:45.859 23:16:18 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:45.859 23:16:18 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:45.859 23:16:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:45.859 23:16:18 -- common/autotest_common.sh@10 -- # set +x 00:30:45.859 23:16:18 -- host/discovery.sh@59 -- # sort 00:30:45.859 23:16:18 -- host/discovery.sh@59 -- # xargs 00:30:45.859 23:16:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:45.859 23:16:18 -- host/discovery.sh@86 -- # [[ '' == '' ]] 00:30:45.859 23:16:18 -- host/discovery.sh@87 -- # get_bdev_list 00:30:45.859 23:16:18 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:45.859 23:16:18 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:45.859 23:16:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:45.859 23:16:18 -- common/autotest_common.sh@10 -- # set +x 00:30:45.859 23:16:18 -- host/discovery.sh@55 -- # sort 00:30:45.859 23:16:18 
-- host/discovery.sh@55 -- # xargs 00:30:45.859 23:16:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:45.859 23:16:18 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:30:45.859 23:16:18 -- host/discovery.sh@91 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:45.859 23:16:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:45.859 23:16:18 -- common/autotest_common.sh@10 -- # set +x 00:30:45.859 [2024-07-24 23:16:18.282756] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:45.859 23:16:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:46.118 23:16:18 -- host/discovery.sh@92 -- # get_subsystem_names 00:30:46.118 23:16:18 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:46.118 23:16:18 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:46.118 23:16:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:46.118 23:16:18 -- common/autotest_common.sh@10 -- # set +x 00:30:46.118 23:16:18 -- host/discovery.sh@59 -- # sort 00:30:46.118 23:16:18 -- host/discovery.sh@59 -- # xargs 00:30:46.118 23:16:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:46.118 23:16:18 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:30:46.118 23:16:18 -- host/discovery.sh@93 -- # get_bdev_list 00:30:46.118 23:16:18 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:46.118 23:16:18 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:46.118 23:16:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:46.118 23:16:18 -- host/discovery.sh@55 -- # sort 00:30:46.118 23:16:18 -- common/autotest_common.sh@10 -- # set +x 00:30:46.118 23:16:18 -- host/discovery.sh@55 -- # xargs 00:30:46.118 23:16:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:46.118 23:16:18 -- host/discovery.sh@93 -- # [[ '' == '' ]] 00:30:46.118 23:16:18 -- host/discovery.sh@94 -- # get_notification_count 
00:30:46.118 23:16:18 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:30:46.118 23:16:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:46.118 23:16:18 -- host/discovery.sh@74 -- # jq '. | length' 00:30:46.118 23:16:18 -- common/autotest_common.sh@10 -- # set +x 00:30:46.118 23:16:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:46.118 23:16:18 -- host/discovery.sh@74 -- # notification_count=0 00:30:46.118 23:16:18 -- host/discovery.sh@75 -- # notify_id=0 00:30:46.118 23:16:18 -- host/discovery.sh@95 -- # [[ 0 == 0 ]] 00:30:46.118 23:16:18 -- host/discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:30:46.118 23:16:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:46.118 23:16:18 -- common/autotest_common.sh@10 -- # set +x 00:30:46.118 23:16:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:46.118 23:16:18 -- host/discovery.sh@100 -- # sleep 1 00:30:46.686 [2024-07-24 23:16:18.995891] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:30:46.686 [2024-07-24 23:16:18.995915] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:30:46.686 [2024-07-24 23:16:18.995930] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:46.686 [2024-07-24 23:16:19.082180] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:30:46.945 [2024-07-24 23:16:19.307849] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:30:46.945 [2024-07-24 23:16:19.307870] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:30:47.205 23:16:19 -- host/discovery.sh@101 -- # get_subsystem_names 
00:30:47.205 23:16:19 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:47.205 23:16:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:47.205 23:16:19 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:47.205 23:16:19 -- common/autotest_common.sh@10 -- # set +x 00:30:47.205 23:16:19 -- host/discovery.sh@59 -- # sort 00:30:47.205 23:16:19 -- host/discovery.sh@59 -- # xargs 00:30:47.205 23:16:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:47.205 23:16:19 -- host/discovery.sh@101 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:47.205 23:16:19 -- host/discovery.sh@102 -- # get_bdev_list 00:30:47.205 23:16:19 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:47.205 23:16:19 -- host/discovery.sh@55 -- # sort 00:30:47.205 23:16:19 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:47.205 23:16:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:47.205 23:16:19 -- common/autotest_common.sh@10 -- # set +x 00:30:47.205 23:16:19 -- host/discovery.sh@55 -- # xargs 00:30:47.205 23:16:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:47.205 23:16:19 -- host/discovery.sh@102 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:30:47.205 23:16:19 -- host/discovery.sh@103 -- # get_subsystem_paths nvme0 00:30:47.205 23:16:19 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:47.205 23:16:19 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:47.205 23:16:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:47.205 23:16:19 -- common/autotest_common.sh@10 -- # set +x 00:30:47.205 23:16:19 -- host/discovery.sh@63 -- # sort -n 00:30:47.205 23:16:19 -- host/discovery.sh@63 -- # xargs 00:30:47.205 23:16:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:47.205 23:16:19 -- host/discovery.sh@103 -- # [[ 4420 == \4\4\2\0 ]] 00:30:47.205 23:16:19 -- host/discovery.sh@104 -- # get_notification_count 00:30:47.205 23:16:19 -- 
host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:30:47.205 23:16:19 -- host/discovery.sh@74 -- # jq '. | length' 00:30:47.205 23:16:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:47.205 23:16:19 -- common/autotest_common.sh@10 -- # set +x 00:30:47.205 23:16:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:47.205 23:16:19 -- host/discovery.sh@74 -- # notification_count=1 00:30:47.205 23:16:19 -- host/discovery.sh@75 -- # notify_id=1 00:30:47.205 23:16:19 -- host/discovery.sh@105 -- # [[ 1 == 1 ]] 00:30:47.205 23:16:19 -- host/discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:30:47.205 23:16:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:47.205 23:16:19 -- common/autotest_common.sh@10 -- # set +x 00:30:47.205 23:16:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:47.464 23:16:19 -- host/discovery.sh@109 -- # sleep 1 00:30:48.401 23:16:20 -- host/discovery.sh@110 -- # get_bdev_list 00:30:48.401 23:16:20 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:48.401 23:16:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:48.401 23:16:20 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:48.401 23:16:20 -- common/autotest_common.sh@10 -- # set +x 00:30:48.401 23:16:20 -- host/discovery.sh@55 -- # sort 00:30:48.401 23:16:20 -- host/discovery.sh@55 -- # xargs 00:30:48.401 23:16:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:48.401 23:16:20 -- host/discovery.sh@110 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:48.401 23:16:20 -- host/discovery.sh@111 -- # get_notification_count 00:30:48.401 23:16:20 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:30:48.401 23:16:20 -- host/discovery.sh@74 -- # jq '. 
| length' 00:30:48.401 23:16:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:48.401 23:16:20 -- common/autotest_common.sh@10 -- # set +x 00:30:48.401 23:16:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:48.401 23:16:20 -- host/discovery.sh@74 -- # notification_count=1 00:30:48.401 23:16:20 -- host/discovery.sh@75 -- # notify_id=2 00:30:48.401 23:16:20 -- host/discovery.sh@112 -- # [[ 1 == 1 ]] 00:30:48.401 23:16:20 -- host/discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:30:48.401 23:16:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:48.401 23:16:20 -- common/autotest_common.sh@10 -- # set +x 00:30:48.401 [2024-07-24 23:16:20.745655] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:48.401 [2024-07-24 23:16:20.746517] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:30:48.401 [2024-07-24 23:16:20.746543] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:48.401 23:16:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:48.401 23:16:20 -- host/discovery.sh@117 -- # sleep 1 00:30:48.660 [2024-07-24 23:16:20.833768] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:30:48.660 [2024-07-24 23:16:20.892377] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:30:48.660 [2024-07-24 23:16:20.892394] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:30:48.660 [2024-07-24 23:16:20.892401] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:30:49.597 23:16:21 -- host/discovery.sh@118 -- # get_subsystem_names 
00:30:49.597 23:16:21 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:49.597 23:16:21 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:49.597 23:16:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:49.597 23:16:21 -- common/autotest_common.sh@10 -- # set +x 00:30:49.597 23:16:21 -- host/discovery.sh@59 -- # sort 00:30:49.597 23:16:21 -- host/discovery.sh@59 -- # xargs 00:30:49.597 23:16:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:49.597 23:16:21 -- host/discovery.sh@118 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:49.597 23:16:21 -- host/discovery.sh@119 -- # get_bdev_list 00:30:49.597 23:16:21 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:49.597 23:16:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:49.597 23:16:21 -- common/autotest_common.sh@10 -- # set +x 00:30:49.597 23:16:21 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:49.597 23:16:21 -- host/discovery.sh@55 -- # sort 00:30:49.597 23:16:21 -- host/discovery.sh@55 -- # xargs 00:30:49.597 23:16:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:49.597 23:16:21 -- host/discovery.sh@119 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:49.597 23:16:21 -- host/discovery.sh@120 -- # get_subsystem_paths nvme0 00:30:49.597 23:16:21 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:49.597 23:16:21 -- host/discovery.sh@63 -- # xargs 00:30:49.597 23:16:21 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:49.597 23:16:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:49.597 23:16:21 -- common/autotest_common.sh@10 -- # set +x 00:30:49.597 23:16:21 -- host/discovery.sh@63 -- # sort -n 00:30:49.597 23:16:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:49.597 23:16:21 -- host/discovery.sh@120 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:30:49.597 23:16:21 -- host/discovery.sh@121 -- # 
get_notification_count 00:30:49.597 23:16:21 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:30:49.597 23:16:21 -- host/discovery.sh@74 -- # jq '. | length' 00:30:49.597 23:16:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:49.597 23:16:21 -- common/autotest_common.sh@10 -- # set +x 00:30:49.597 23:16:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:49.597 23:16:21 -- host/discovery.sh@74 -- # notification_count=0 00:30:49.597 23:16:21 -- host/discovery.sh@75 -- # notify_id=2 00:30:49.597 23:16:21 -- host/discovery.sh@122 -- # [[ 0 == 0 ]] 00:30:49.597 23:16:21 -- host/discovery.sh@126 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:49.597 23:16:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:49.597 23:16:21 -- common/autotest_common.sh@10 -- # set +x 00:30:49.597 [2024-07-24 23:16:21.961317] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:30:49.597 [2024-07-24 23:16:21.961339] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:49.597 [2024-07-24 23:16:21.961995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:49.597 [2024-07-24 23:16:21.962014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.597 [2024-07-24 23:16:21.962025] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:49.597 [2024-07-24 23:16:21.962034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.598 [2024-07-24 23:16:21.962044] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 
nsid:0 cdw10:00000000 cdw11:00000000 00:30:49.598 [2024-07-24 23:16:21.962053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.598 [2024-07-24 23:16:21.962062] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:49.598 [2024-07-24 23:16:21.962071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.598 [2024-07-24 23:16:21.962080] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf5c00 is same with the state(5) to be set 00:30:49.598 23:16:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:49.598 23:16:21 -- host/discovery.sh@127 -- # sleep 1 00:30:49.598 [2024-07-24 23:16:21.972006] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcf5c00 (9): Bad file descriptor 00:30:49.598 [2024-07-24 23:16:21.982044] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:49.598 [2024-07-24 23:16:21.982347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.598 [2024-07-24 23:16:21.982664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.598 [2024-07-24 23:16:21.982677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcf5c00 with addr=10.0.0.2, port=4420 00:30:49.598 [2024-07-24 23:16:21.982687] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf5c00 is same with the state(5) to be set 00:30:49.598 [2024-07-24 23:16:21.982701] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcf5c00 (9): Bad file descriptor 00:30:49.598 [2024-07-24 23:16:21.982725] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: 
[nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:49.598 [2024-07-24 23:16:21.982735] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:49.598 [2024-07-24 23:16:21.982746] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:49.598 [2024-07-24 23:16:21.982758] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:49.598 [2024-07-24 23:16:21.992096] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:49.598 [2024-07-24 23:16:21.992422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.598 [2024-07-24 23:16:21.992721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.598 [2024-07-24 23:16:21.992733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcf5c00 with addr=10.0.0.2, port=4420 00:30:49.598 [2024-07-24 23:16:21.992760] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf5c00 is same with the state(5) to be set 00:30:49.598 [2024-07-24 23:16:21.992773] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcf5c00 (9): Bad file descriptor 00:30:49.598 [2024-07-24 23:16:21.992793] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:49.598 [2024-07-24 23:16:21.992803] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:49.598 [2024-07-24 23:16:21.992815] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:49.598 [2024-07-24 23:16:21.992827] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:49.598 [2024-07-24 23:16:22.002146] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:49.598 [2024-07-24 23:16:22.002470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.598 [2024-07-24 23:16:22.002783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.598 [2024-07-24 23:16:22.002796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcf5c00 with addr=10.0.0.2, port=4420 00:30:49.598 [2024-07-24 23:16:22.002805] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf5c00 is same with the state(5) to be set 00:30:49.598 [2024-07-24 23:16:22.002819] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcf5c00 (9): Bad file descriptor 00:30:49.598 [2024-07-24 23:16:22.002837] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:49.598 [2024-07-24 23:16:22.002846] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:49.598 [2024-07-24 23:16:22.002855] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:49.598 [2024-07-24 23:16:22.002866] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:49.598 [2024-07-24 23:16:22.012197] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:49.598 [2024-07-24 23:16:22.012562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.598 [2024-07-24 23:16:22.012881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.598 [2024-07-24 23:16:22.012893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcf5c00 with addr=10.0.0.2, port=4420 00:30:49.598 [2024-07-24 23:16:22.012903] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf5c00 is same with the state(5) to be set 00:30:49.598 [2024-07-24 23:16:22.012915] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcf5c00 (9): Bad file descriptor 00:30:49.598 [2024-07-24 23:16:22.012935] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:49.598 [2024-07-24 23:16:22.012944] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:49.598 [2024-07-24 23:16:22.012953] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:49.598 [2024-07-24 23:16:22.012964] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:49.598 [2024-07-24 23:16:22.022250] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:49.598 [2024-07-24 23:16:22.022603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.598 [2024-07-24 23:16:22.022774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.598 [2024-07-24 23:16:22.022786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcf5c00 with addr=10.0.0.2, port=4420 00:30:49.598 [2024-07-24 23:16:22.022795] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf5c00 is same with the state(5) to be set 00:30:49.598 [2024-07-24 23:16:22.022809] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcf5c00 (9): Bad file descriptor 00:30:49.598 [2024-07-24 23:16:22.022821] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:49.598 [2024-07-24 23:16:22.022829] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:49.598 [2024-07-24 23:16:22.022838] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:49.598 [2024-07-24 23:16:22.022852] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:49.857 [2024-07-24 23:16:22.032300] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:49.857 [2024-07-24 23:16:22.032651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.858 [2024-07-24 23:16:22.032942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.858 [2024-07-24 23:16:22.032954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcf5c00 with addr=10.0.0.2, port=4420 00:30:49.858 [2024-07-24 23:16:22.032964] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf5c00 is same with the state(5) to be set 00:30:49.858 [2024-07-24 23:16:22.032976] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcf5c00 (9): Bad file descriptor 00:30:49.858 [2024-07-24 23:16:22.032995] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:49.858 [2024-07-24 23:16:22.033003] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:49.858 [2024-07-24 23:16:22.033012] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:49.858 [2024-07-24 23:16:22.033023] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:49.858 [2024-07-24 23:16:22.042350] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:49.858 [2024-07-24 23:16:22.042691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.858 [2024-07-24 23:16:22.043004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.858 [2024-07-24 23:16:22.043016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcf5c00 with addr=10.0.0.2, port=4420 00:30:49.858 [2024-07-24 23:16:22.043025] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf5c00 is same with the state(5) to be set 00:30:49.858 [2024-07-24 23:16:22.043037] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcf5c00 (9): Bad file descriptor 00:30:49.858 [2024-07-24 23:16:22.043056] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:49.858 [2024-07-24 23:16:22.043065] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:49.858 [2024-07-24 23:16:22.043073] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:49.858 [2024-07-24 23:16:22.043084] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:49.858 [2024-07-24 23:16:22.052400] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:49.858 [2024-07-24 23:16:22.052736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.858 [2024-07-24 23:16:22.053069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.858 [2024-07-24 23:16:22.053082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcf5c00 with addr=10.0.0.2, port=4420 00:30:49.858 [2024-07-24 23:16:22.053091] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf5c00 is same with the state(5) to be set 00:30:49.858 [2024-07-24 23:16:22.053104] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcf5c00 (9): Bad file descriptor 00:30:49.858 [2024-07-24 23:16:22.053124] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:49.858 [2024-07-24 23:16:22.053133] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:49.858 [2024-07-24 23:16:22.053142] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:49.858 [2024-07-24 23:16:22.053153] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:49.858 [2024-07-24 23:16:22.062453] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:49.858 [2024-07-24 23:16:22.062775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.858 [2024-07-24 23:16:22.063067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.858 [2024-07-24 23:16:22.063079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcf5c00 with addr=10.0.0.2, port=4420 00:30:49.858 [2024-07-24 23:16:22.063088] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf5c00 is same with the state(5) to be set 00:30:49.858 [2024-07-24 23:16:22.063101] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcf5c00 (9): Bad file descriptor 00:30:49.858 [2024-07-24 23:16:22.063113] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:49.858 [2024-07-24 23:16:22.063121] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:49.858 [2024-07-24 23:16:22.063130] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:49.858 [2024-07-24 23:16:22.063140] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:49.858 [2024-07-24 23:16:22.072503] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:49.858 [2024-07-24 23:16:22.072825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.858 [2024-07-24 23:16:22.073095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.858 [2024-07-24 23:16:22.073107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcf5c00 with addr=10.0.0.2, port=4420 00:30:49.858 [2024-07-24 23:16:22.073116] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf5c00 is same with the state(5) to be set 00:30:49.858 [2024-07-24 23:16:22.073128] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcf5c00 (9): Bad file descriptor 00:30:49.858 [2024-07-24 23:16:22.073139] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:49.858 [2024-07-24 23:16:22.073147] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:49.858 [2024-07-24 23:16:22.073156] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:49.858 [2024-07-24 23:16:22.073167] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:49.858 [2024-07-24 23:16:22.082551] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:49.858 [2024-07-24 23:16:22.082874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.858 [2024-07-24 23:16:22.083112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.858 [2024-07-24 23:16:22.083124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcf5c00 with addr=10.0.0.2, port=4420 00:30:49.858 [2024-07-24 23:16:22.083133] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf5c00 is same with the state(5) to be set 00:30:49.858 [2024-07-24 23:16:22.083145] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcf5c00 (9): Bad file descriptor 00:30:49.858 [2024-07-24 23:16:22.083156] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:49.858 [2024-07-24 23:16:22.083165] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:49.858 [2024-07-24 23:16:22.083173] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:49.858 [2024-07-24 23:16:22.083184] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:49.858 [2024-07-24 23:16:22.089243] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:30:49.858 [2024-07-24 23:16:22.089261] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:30:50.795 23:16:22 -- host/discovery.sh@128 -- # get_subsystem_names 00:30:50.795 23:16:22 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:50.795 23:16:22 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:50.795 23:16:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:50.795 23:16:22 -- common/autotest_common.sh@10 -- # set +x 00:30:50.795 23:16:22 -- host/discovery.sh@59 -- # sort 00:30:50.795 23:16:22 -- host/discovery.sh@59 -- # xargs 00:30:50.795 23:16:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:50.795 23:16:23 -- host/discovery.sh@128 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:50.795 23:16:23 -- host/discovery.sh@129 -- # get_bdev_list 00:30:50.795 23:16:23 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:50.795 23:16:23 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:50.795 23:16:23 -- host/discovery.sh@55 -- # sort 00:30:50.795 23:16:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:50.795 23:16:23 -- host/discovery.sh@55 -- # xargs 00:30:50.795 23:16:23 -- common/autotest_common.sh@10 -- # set +x 00:30:50.795 23:16:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:50.795 23:16:23 -- host/discovery.sh@129 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:50.795 23:16:23 -- host/discovery.sh@130 -- # get_subsystem_paths nvme0 00:30:50.795 23:16:23 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:50.795 23:16:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:50.795 23:16:23 -- common/autotest_common.sh@10 -- # set +x 
00:30:50.795 23:16:23 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:50.795 23:16:23 -- host/discovery.sh@63 -- # sort -n 00:30:50.795 23:16:23 -- host/discovery.sh@63 -- # xargs 00:30:50.795 23:16:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:50.795 23:16:23 -- host/discovery.sh@130 -- # [[ 4421 == \4\4\2\1 ]] 00:30:50.795 23:16:23 -- host/discovery.sh@131 -- # get_notification_count 00:30:50.795 23:16:23 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:30:50.795 23:16:23 -- host/discovery.sh@74 -- # jq '. | length' 00:30:50.795 23:16:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:50.795 23:16:23 -- common/autotest_common.sh@10 -- # set +x 00:30:50.795 23:16:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:50.795 23:16:23 -- host/discovery.sh@74 -- # notification_count=0 00:30:50.795 23:16:23 -- host/discovery.sh@75 -- # notify_id=2 00:30:50.795 23:16:23 -- host/discovery.sh@132 -- # [[ 0 == 0 ]] 00:30:50.795 23:16:23 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:30:50.795 23:16:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:50.795 23:16:23 -- common/autotest_common.sh@10 -- # set +x 00:30:50.795 23:16:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:50.795 23:16:23 -- host/discovery.sh@135 -- # sleep 1 00:30:52.188 23:16:24 -- host/discovery.sh@136 -- # get_subsystem_names 00:30:52.188 23:16:24 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:52.188 23:16:24 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:52.188 23:16:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:52.188 23:16:24 -- host/discovery.sh@59 -- # sort 00:30:52.188 23:16:24 -- common/autotest_common.sh@10 -- # set +x 00:30:52.188 23:16:24 -- host/discovery.sh@59 -- # xargs 00:30:52.188 23:16:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:52.188 23:16:24 -- 
host/discovery.sh@136 -- # [[ '' == '' ]] 00:30:52.188 23:16:24 -- host/discovery.sh@137 -- # get_bdev_list 00:30:52.188 23:16:24 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:52.188 23:16:24 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:52.188 23:16:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:52.188 23:16:24 -- common/autotest_common.sh@10 -- # set +x 00:30:52.188 23:16:24 -- host/discovery.sh@55 -- # sort 00:30:52.188 23:16:24 -- host/discovery.sh@55 -- # xargs 00:30:52.188 23:16:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:52.188 23:16:24 -- host/discovery.sh@137 -- # [[ '' == '' ]] 00:30:52.188 23:16:24 -- host/discovery.sh@138 -- # get_notification_count 00:30:52.188 23:16:24 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:30:52.188 23:16:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:52.188 23:16:24 -- common/autotest_common.sh@10 -- # set +x 00:30:52.188 23:16:24 -- host/discovery.sh@74 -- # jq '. 
| length' 00:30:52.188 23:16:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:52.188 23:16:24 -- host/discovery.sh@74 -- # notification_count=2 00:30:52.188 23:16:24 -- host/discovery.sh@75 -- # notify_id=4 00:30:52.188 23:16:24 -- host/discovery.sh@139 -- # [[ 2 == 2 ]] 00:30:52.188 23:16:24 -- host/discovery.sh@142 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:52.188 23:16:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:52.188 23:16:24 -- common/autotest_common.sh@10 -- # set +x 00:30:53.122 [2024-07-24 23:16:25.361532] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:30:53.122 [2024-07-24 23:16:25.361550] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:30:53.122 [2024-07-24 23:16:25.361562] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:53.122 [2024-07-24 23:16:25.449823] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:30:53.382 [2024-07-24 23:16:25.719955] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:30:53.382 [2024-07-24 23:16:25.719982] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:30:53.382 23:16:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:53.382 23:16:25 -- host/discovery.sh@144 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:53.382 23:16:25 -- common/autotest_common.sh@640 -- # local es=0 00:30:53.382 23:16:25 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 
10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:53.382 23:16:25 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:30:53.382 23:16:25 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:30:53.382 23:16:25 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:30:53.382 23:16:25 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:30:53.382 23:16:25 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:53.382 23:16:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:53.382 23:16:25 -- common/autotest_common.sh@10 -- # set +x 00:30:53.382 request: 00:30:53.382 { 00:30:53.382 "name": "nvme", 00:30:53.382 "trtype": "tcp", 00:30:53.382 "traddr": "10.0.0.2", 00:30:53.382 "hostnqn": "nqn.2021-12.io.spdk:test", 00:30:53.382 "adrfam": "ipv4", 00:30:53.382 "trsvcid": "8009", 00:30:53.382 "wait_for_attach": true, 00:30:53.382 "method": "bdev_nvme_start_discovery", 00:30:53.382 "req_id": 1 00:30:53.382 } 00:30:53.382 Got JSON-RPC error response 00:30:53.382 response: 00:30:53.382 { 00:30:53.382 "code": -17, 00:30:53.382 "message": "File exists" 00:30:53.382 } 00:30:53.382 23:16:25 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:30:53.382 23:16:25 -- common/autotest_common.sh@643 -- # es=1 00:30:53.382 23:16:25 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:30:53.382 23:16:25 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:30:53.382 23:16:25 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:30:53.382 23:16:25 -- host/discovery.sh@146 -- # get_discovery_ctrlrs 00:30:53.382 23:16:25 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:30:53.382 23:16:25 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:30:53.382 23:16:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:53.382 23:16:25 -- common/autotest_common.sh@10 -- # set +x 
00:30:53.382 23:16:25 -- host/discovery.sh@67 -- # sort 00:30:53.382 23:16:25 -- host/discovery.sh@67 -- # xargs 00:30:53.382 23:16:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:53.382 23:16:25 -- host/discovery.sh@146 -- # [[ nvme == \n\v\m\e ]] 00:30:53.382 23:16:25 -- host/discovery.sh@147 -- # get_bdev_list 00:30:53.382 23:16:25 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:53.382 23:16:25 -- host/discovery.sh@55 -- # xargs 00:30:53.382 23:16:25 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:53.382 23:16:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:53.382 23:16:25 -- host/discovery.sh@55 -- # sort 00:30:53.382 23:16:25 -- common/autotest_common.sh@10 -- # set +x 00:30:53.640 23:16:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:53.640 23:16:25 -- host/discovery.sh@147 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:53.640 23:16:25 -- host/discovery.sh@150 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:53.640 23:16:25 -- common/autotest_common.sh@640 -- # local es=0 00:30:53.640 23:16:25 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:53.640 23:16:25 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:30:53.640 23:16:25 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:30:53.640 23:16:25 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:30:53.640 23:16:25 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:30:53.640 23:16:25 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:53.640 23:16:25 -- common/autotest_common.sh@551 -- # xtrace_disable 
00:30:53.640 23:16:25 -- common/autotest_common.sh@10 -- # set +x 00:30:53.640 request: 00:30:53.640 { 00:30:53.640 "name": "nvme_second", 00:30:53.640 "trtype": "tcp", 00:30:53.640 "traddr": "10.0.0.2", 00:30:53.640 "hostnqn": "nqn.2021-12.io.spdk:test", 00:30:53.640 "adrfam": "ipv4", 00:30:53.640 "trsvcid": "8009", 00:30:53.640 "wait_for_attach": true, 00:30:53.640 "method": "bdev_nvme_start_discovery", 00:30:53.640 "req_id": 1 00:30:53.640 } 00:30:53.640 Got JSON-RPC error response 00:30:53.640 response: 00:30:53.640 { 00:30:53.640 "code": -17, 00:30:53.640 "message": "File exists" 00:30:53.640 } 00:30:53.640 23:16:25 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:30:53.640 23:16:25 -- common/autotest_common.sh@643 -- # es=1 00:30:53.640 23:16:25 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:30:53.640 23:16:25 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:30:53.641 23:16:25 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:30:53.641 23:16:25 -- host/discovery.sh@152 -- # get_discovery_ctrlrs 00:30:53.641 23:16:25 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:30:53.641 23:16:25 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:30:53.641 23:16:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:53.641 23:16:25 -- common/autotest_common.sh@10 -- # set +x 00:30:53.641 23:16:25 -- host/discovery.sh@67 -- # sort 00:30:53.641 23:16:25 -- host/discovery.sh@67 -- # xargs 00:30:53.641 23:16:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:53.641 23:16:25 -- host/discovery.sh@152 -- # [[ nvme == \n\v\m\e ]] 00:30:53.641 23:16:25 -- host/discovery.sh@153 -- # get_bdev_list 00:30:53.641 23:16:25 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:53.641 23:16:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:53.641 23:16:25 -- common/autotest_common.sh@10 -- # set +x 00:30:53.641 23:16:25 -- host/discovery.sh@55 -- # xargs 00:30:53.641 23:16:25 -- 
host/discovery.sh@55 -- # jq -r '.[].name' 00:30:53.641 23:16:25 -- host/discovery.sh@55 -- # sort 00:30:53.641 23:16:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:53.641 23:16:25 -- host/discovery.sh@153 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:53.641 23:16:25 -- host/discovery.sh@156 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:30:53.641 23:16:25 -- common/autotest_common.sh@640 -- # local es=0 00:30:53.641 23:16:25 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:30:53.641 23:16:25 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:30:53.641 23:16:25 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:30:53.641 23:16:25 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:30:53.641 23:16:25 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:30:53.641 23:16:25 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:30:53.641 23:16:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:53.641 23:16:25 -- common/autotest_common.sh@10 -- # set +x 00:30:54.578 [2024-07-24 23:16:26.976995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.578 [2024-07-24 23:16:26.977287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.578 [2024-07-24 23:16:26.977300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcf39b0 with addr=10.0.0.2, port=8010 00:30:54.578 [2024-07-24 23:16:26.977318] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:54.578 [2024-07-24 23:16:26.977327] nvme.c: 
821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:54.578 [2024-07-24 23:16:26.977336] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:30:55.582 [2024-07-24 23:16:27.979393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.582 [2024-07-24 23:16:27.979606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.582 [2024-07-24 23:16:27.979619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcf4120 with addr=10.0.0.2, port=8010 00:30:55.582 [2024-07-24 23:16:27.979631] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:55.582 [2024-07-24 23:16:27.979639] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:55.582 [2024-07-24 23:16:27.979647] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:30:56.958 [2024-07-24 23:16:28.981503] bdev_nvme.c:6802:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:30:56.958 request: 00:30:56.958 { 00:30:56.958 "name": "nvme_second", 00:30:56.958 "trtype": "tcp", 00:30:56.958 "traddr": "10.0.0.2", 00:30:56.958 "hostnqn": "nqn.2021-12.io.spdk:test", 00:30:56.958 "adrfam": "ipv4", 00:30:56.958 "trsvcid": "8010", 00:30:56.958 "attach_timeout_ms": 3000, 00:30:56.958 "method": "bdev_nvme_start_discovery", 00:30:56.958 "req_id": 1 00:30:56.958 } 00:30:56.958 Got JSON-RPC error response 00:30:56.958 response: 00:30:56.958 { 00:30:56.958 "code": -110, 00:30:56.958 "message": "Connection timed out" 00:30:56.958 } 00:30:56.958 23:16:28 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:30:56.958 23:16:28 -- common/autotest_common.sh@643 -- # es=1 00:30:56.958 23:16:28 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:30:56.958 23:16:28 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:30:56.958 23:16:28 -- 
common/autotest_common.sh@667 -- # (( !es == 0 )) 00:30:56.958 23:16:28 -- host/discovery.sh@158 -- # get_discovery_ctrlrs 00:30:56.958 23:16:28 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:30:56.958 23:16:28 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:30:56.958 23:16:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:56.958 23:16:28 -- host/discovery.sh@67 -- # sort 00:30:56.958 23:16:28 -- common/autotest_common.sh@10 -- # set +x 00:30:56.958 23:16:28 -- host/discovery.sh@67 -- # xargs 00:30:56.958 23:16:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:56.958 23:16:29 -- host/discovery.sh@158 -- # [[ nvme == \n\v\m\e ]] 00:30:56.958 23:16:29 -- host/discovery.sh@160 -- # trap - SIGINT SIGTERM EXIT 00:30:56.958 23:16:29 -- host/discovery.sh@162 -- # kill 3385291 00:30:56.958 23:16:29 -- host/discovery.sh@163 -- # nvmftestfini 00:30:56.958 23:16:29 -- nvmf/common.sh@476 -- # nvmfcleanup 00:30:56.958 23:16:29 -- nvmf/common.sh@116 -- # sync 00:30:56.958 23:16:29 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:30:56.958 23:16:29 -- nvmf/common.sh@119 -- # set +e 00:30:56.958 23:16:29 -- nvmf/common.sh@120 -- # for i in {1..20} 00:30:56.958 23:16:29 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:30:56.958 rmmod nvme_tcp 00:30:56.958 rmmod nvme_fabrics 00:30:56.958 rmmod nvme_keyring 00:30:56.958 23:16:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:30:56.958 23:16:29 -- nvmf/common.sh@123 -- # set -e 00:30:56.958 23:16:29 -- nvmf/common.sh@124 -- # return 0 00:30:56.958 23:16:29 -- nvmf/common.sh@477 -- # '[' -n 3385020 ']' 00:30:56.958 23:16:29 -- nvmf/common.sh@478 -- # killprocess 3385020 00:30:56.958 23:16:29 -- common/autotest_common.sh@926 -- # '[' -z 3385020 ']' 00:30:56.958 23:16:29 -- common/autotest_common.sh@930 -- # kill -0 3385020 00:30:56.958 23:16:29 -- common/autotest_common.sh@931 -- # uname 00:30:56.958 23:16:29 -- common/autotest_common.sh@931 -- # '[' Linux = 
Linux ']' 00:30:56.958 23:16:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3385020 00:30:56.958 23:16:29 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:30:56.958 23:16:29 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:30:56.958 23:16:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3385020' 00:30:56.958 killing process with pid 3385020 00:30:56.958 23:16:29 -- common/autotest_common.sh@945 -- # kill 3385020 00:30:56.958 23:16:29 -- common/autotest_common.sh@950 -- # wait 3385020 00:30:56.958 23:16:29 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:30:56.958 23:16:29 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:30:56.958 23:16:29 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:30:56.958 23:16:29 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:56.958 23:16:29 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:30:56.958 23:16:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:56.958 23:16:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:56.958 23:16:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:59.494 23:16:31 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:30:59.494 00:30:59.494 real 0m22.211s 00:30:59.494 user 0m27.984s 00:30:59.494 sys 0m7.299s 00:30:59.494 23:16:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:59.494 23:16:31 -- common/autotest_common.sh@10 -- # set +x 00:30:59.494 ************************************ 00:30:59.494 END TEST nvmf_discovery 00:30:59.494 ************************************ 00:30:59.494 23:16:31 -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:30:59.494 23:16:31 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:30:59.494 23:16:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:59.494 23:16:31 -- 
common/autotest_common.sh@10 -- # set +x 00:30:59.494 ************************************ 00:30:59.494 START TEST nvmf_discovery_remove_ifc 00:30:59.494 ************************************ 00:30:59.494 23:16:31 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:30:59.494 * Looking for test storage... 00:30:59.494 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:59.494 23:16:31 -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:59.494 23:16:31 -- nvmf/common.sh@7 -- # uname -s 00:30:59.494 23:16:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:59.494 23:16:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:59.494 23:16:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:59.494 23:16:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:59.494 23:16:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:59.494 23:16:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:59.494 23:16:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:59.494 23:16:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:59.494 23:16:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:59.494 23:16:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:59.494 23:16:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:30:59.494 23:16:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:30:59.494 23:16:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:59.494 23:16:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:59.494 23:16:31 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:59.494 23:16:31 -- nvmf/common.sh@44 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:59.494 23:16:31 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:59.494 23:16:31 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:59.494 23:16:31 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:59.494 23:16:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:59.494 23:16:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:59.494 23:16:31 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:59.494 23:16:31 -- paths/export.sh@5 -- # export PATH 00:30:59.494 23:16:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:59.494 23:16:31 -- nvmf/common.sh@46 -- # : 0 00:30:59.494 23:16:31 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:30:59.494 23:16:31 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:30:59.494 23:16:31 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:30:59.494 23:16:31 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:59.494 23:16:31 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:59.494 23:16:31 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:30:59.494 23:16:31 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:30:59.494 23:16:31 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:30:59.494 23:16:31 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:30:59.494 23:16:31 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:30:59.494 23:16:31 -- 
host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:30:59.494 23:16:31 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:30:59.494 23:16:31 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:30:59.494 23:16:31 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:30:59.494 23:16:31 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:30:59.494 23:16:31 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:30:59.494 23:16:31 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:59.494 23:16:31 -- nvmf/common.sh@436 -- # prepare_net_devs 00:30:59.494 23:16:31 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:30:59.494 23:16:31 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:30:59.494 23:16:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:59.494 23:16:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:59.494 23:16:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:59.494 23:16:31 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:30:59.494 23:16:31 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:30:59.494 23:16:31 -- nvmf/common.sh@284 -- # xtrace_disable 00:30:59.494 23:16:31 -- common/autotest_common.sh@10 -- # set +x 00:31:06.062 23:16:38 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:31:06.062 23:16:38 -- nvmf/common.sh@290 -- # pci_devs=() 00:31:06.062 23:16:38 -- nvmf/common.sh@290 -- # local -a pci_devs 00:31:06.062 23:16:38 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:31:06.062 23:16:38 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:31:06.062 23:16:38 -- nvmf/common.sh@292 -- # pci_drivers=() 00:31:06.062 23:16:38 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:31:06.062 23:16:38 -- nvmf/common.sh@294 -- # net_devs=() 00:31:06.062 23:16:38 -- nvmf/common.sh@294 -- # local -ga net_devs 00:31:06.062 23:16:38 -- nvmf/common.sh@295 -- # e810=() 
00:31:06.062 23:16:38 -- nvmf/common.sh@295 -- # local -ga e810 00:31:06.062 23:16:38 -- nvmf/common.sh@296 -- # x722=() 00:31:06.062 23:16:38 -- nvmf/common.sh@296 -- # local -ga x722 00:31:06.062 23:16:38 -- nvmf/common.sh@297 -- # mlx=() 00:31:06.062 23:16:38 -- nvmf/common.sh@297 -- # local -ga mlx 00:31:06.062 23:16:38 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:06.062 23:16:38 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:06.062 23:16:38 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:06.062 23:16:38 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:06.062 23:16:38 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:06.062 23:16:38 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:06.062 23:16:38 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:06.062 23:16:38 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:06.062 23:16:38 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:06.062 23:16:38 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:06.062 23:16:38 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:06.062 23:16:38 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:31:06.062 23:16:38 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:31:06.062 23:16:38 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:31:06.062 23:16:38 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:31:06.062 23:16:38 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:31:06.062 23:16:38 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:31:06.062 23:16:38 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:31:06.062 23:16:38 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:06.062 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:06.062 23:16:38 -- nvmf/common.sh@341 -- 
# [[ ice == unknown ]] 00:31:06.062 23:16:38 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:31:06.062 23:16:38 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:06.062 23:16:38 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:06.062 23:16:38 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:31:06.062 23:16:38 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:31:06.062 23:16:38 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:06.062 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:06.062 23:16:38 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:31:06.062 23:16:38 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:31:06.062 23:16:38 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:06.062 23:16:38 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:06.062 23:16:38 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:31:06.062 23:16:38 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:31:06.062 23:16:38 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:31:06.062 23:16:38 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:31:06.062 23:16:38 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:31:06.062 23:16:38 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:06.062 23:16:38 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:31:06.062 23:16:38 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:06.062 23:16:38 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:06.062 Found net devices under 0000:af:00.0: cvl_0_0 00:31:06.062 23:16:38 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:31:06.062 23:16:38 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:31:06.062 23:16:38 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:06.062 23:16:38 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:31:06.062 23:16:38 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:31:06.062 23:16:38 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:06.062 Found net devices under 0000:af:00.1: cvl_0_1 00:31:06.062 23:16:38 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:31:06.062 23:16:38 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:31:06.062 23:16:38 -- nvmf/common.sh@402 -- # is_hw=yes 00:31:06.062 23:16:38 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:31:06.062 23:16:38 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:31:06.062 23:16:38 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:31:06.062 23:16:38 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:06.062 23:16:38 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:06.062 23:16:38 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:06.062 23:16:38 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:31:06.062 23:16:38 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:06.062 23:16:38 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:06.062 23:16:38 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:31:06.062 23:16:38 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:06.062 23:16:38 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:06.062 23:16:38 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:31:06.062 23:16:38 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:31:06.062 23:16:38 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:31:06.062 23:16:38 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:06.062 23:16:38 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:06.063 23:16:38 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:06.063 23:16:38 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:31:06.063 23:16:38 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:06.322 
23:16:38 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:06.322 23:16:38 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:06.322 23:16:38 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:31:06.322 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:06.322 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.297 ms 00:31:06.322 00:31:06.322 --- 10.0.0.2 ping statistics --- 00:31:06.322 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:06.322 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:31:06.322 23:16:38 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:06.322 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:06.322 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:31:06.322 00:31:06.322 --- 10.0.0.1 ping statistics --- 00:31:06.322 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:06.322 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:31:06.322 23:16:38 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:06.322 23:16:38 -- nvmf/common.sh@410 -- # return 0 00:31:06.322 23:16:38 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:31:06.322 23:16:38 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:06.322 23:16:38 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:31:06.322 23:16:38 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:31:06.322 23:16:38 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:06.322 23:16:38 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:31:06.322 23:16:38 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:31:06.322 23:16:38 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:31:06.322 23:16:38 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:31:06.322 23:16:38 -- common/autotest_common.sh@712 -- # xtrace_disable 00:31:06.322 23:16:38 -- common/autotest_common.sh@10 -- # set +x 00:31:06.322 23:16:38 -- 
nvmf/common.sh@469 -- # nvmfpid=3391034 00:31:06.322 23:16:38 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:31:06.322 23:16:38 -- nvmf/common.sh@470 -- # waitforlisten 3391034 00:31:06.322 23:16:38 -- common/autotest_common.sh@819 -- # '[' -z 3391034 ']' 00:31:06.322 23:16:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:06.322 23:16:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:06.322 23:16:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:06.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:06.322 23:16:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:06.322 23:16:38 -- common/autotest_common.sh@10 -- # set +x 00:31:06.322 [2024-07-24 23:16:38.658881] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:31:06.322 [2024-07-24 23:16:38.658931] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:06.322 EAL: No free 2048 kB hugepages reported on node 1 00:31:06.322 [2024-07-24 23:16:38.731699] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:06.581 [2024-07-24 23:16:38.767194] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:06.581 [2024-07-24 23:16:38.767296] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:06.581 [2024-07-24 23:16:38.767305] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:31:06.581 [2024-07-24 23:16:38.767313] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:06.581 [2024-07-24 23:16:38.767336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:07.148 23:16:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:07.148 23:16:39 -- common/autotest_common.sh@852 -- # return 0 00:31:07.148 23:16:39 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:31:07.148 23:16:39 -- common/autotest_common.sh@718 -- # xtrace_disable 00:31:07.148 23:16:39 -- common/autotest_common.sh@10 -- # set +x 00:31:07.148 23:16:39 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:07.148 23:16:39 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:31:07.148 23:16:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:07.148 23:16:39 -- common/autotest_common.sh@10 -- # set +x 00:31:07.148 [2024-07-24 23:16:39.508927] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:07.148 [2024-07-24 23:16:39.517101] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:31:07.148 null0 00:31:07.148 [2024-07-24 23:16:39.549093] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:07.148 23:16:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:07.148 23:16:39 -- host/discovery_remove_ifc.sh@59 -- # hostpid=3391303 00:31:07.148 23:16:39 -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:31:07.148 23:16:39 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3391303 /tmp/host.sock 00:31:07.148 23:16:39 -- common/autotest_common.sh@819 -- # '[' -z 3391303 ']' 00:31:07.148 23:16:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:31:07.148 23:16:39 -- common/autotest_common.sh@824 
-- # local max_retries=100 00:31:07.148 23:16:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:31:07.148 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:31:07.148 23:16:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:07.148 23:16:39 -- common/autotest_common.sh@10 -- # set +x 00:31:07.406 [2024-07-24 23:16:39.618625] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:31:07.406 [2024-07-24 23:16:39.618674] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3391303 ] 00:31:07.406 EAL: No free 2048 kB hugepages reported on node 1 00:31:07.406 [2024-07-24 23:16:39.688038] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:07.406 [2024-07-24 23:16:39.725640] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:07.406 [2024-07-24 23:16:39.725794] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:08.341 23:16:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:08.341 23:16:40 -- common/autotest_common.sh@852 -- # return 0 00:31:08.341 23:16:40 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:08.341 23:16:40 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:31:08.341 23:16:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:08.341 23:16:40 -- common/autotest_common.sh@10 -- # set +x 00:31:08.341 23:16:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:08.341 23:16:40 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:31:08.341 23:16:40 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:31:08.341 23:16:40 -- common/autotest_common.sh@10 -- # set +x 00:31:08.341 23:16:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:08.341 23:16:40 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:31:08.341 23:16:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:08.341 23:16:40 -- common/autotest_common.sh@10 -- # set +x 00:31:09.274 [2024-07-24 23:16:41.547356] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:09.274 [2024-07-24 23:16:41.547381] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:09.274 [2024-07-24 23:16:41.547394] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:09.274 [2024-07-24 23:16:41.676765] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:31:09.532 [2024-07-24 23:16:41.863256] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:31:09.532 [2024-07-24 23:16:41.863294] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:31:09.532 [2024-07-24 23:16:41.863313] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:31:09.532 [2024-07-24 23:16:41.863330] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:09.532 [2024-07-24 23:16:41.863351] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:09.532 23:16:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:09.532 23:16:41 -- 
host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:31:09.532 [2024-07-24 23:16:41.866887] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xb1e460 was disconnected and freed. delete nvme_qpair. 00:31:09.532 23:16:41 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:09.532 23:16:41 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:09.532 23:16:41 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:09.532 23:16:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:09.532 23:16:41 -- common/autotest_common.sh@10 -- # set +x 00:31:09.532 23:16:41 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:09.532 23:16:41 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:09.532 23:16:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:09.533 23:16:41 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:31:09.533 23:16:41 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:31:09.533 23:16:41 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:31:09.791 23:16:42 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:31:09.791 23:16:42 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:09.791 23:16:42 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:09.791 23:16:42 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:09.791 23:16:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:09.791 23:16:42 -- common/autotest_common.sh@10 -- # set +x 00:31:09.791 23:16:42 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:09.791 23:16:42 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:09.791 23:16:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:09.791 23:16:42 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:09.791 23:16:42 -- host/discovery_remove_ifc.sh@34 -- # 
sleep 1 00:31:10.724 23:16:43 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:10.724 23:16:43 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:10.724 23:16:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:10.724 23:16:43 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:10.724 23:16:43 -- common/autotest_common.sh@10 -- # set +x 00:31:10.724 23:16:43 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:10.725 23:16:43 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:10.725 23:16:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:10.982 23:16:43 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:10.982 23:16:43 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:11.915 23:16:44 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:11.915 23:16:44 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:11.915 23:16:44 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:11.915 23:16:44 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:11.915 23:16:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:11.915 23:16:44 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:11.915 23:16:44 -- common/autotest_common.sh@10 -- # set +x 00:31:11.915 23:16:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:11.915 23:16:44 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:11.915 23:16:44 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:12.846 23:16:45 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:12.846 23:16:45 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:12.846 23:16:45 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:12.846 23:16:45 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:12.846 23:16:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:12.846 23:16:45 -- common/autotest_common.sh@10 -- # set +x 
00:31:12.846 23:16:45 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:12.846 23:16:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:12.846 23:16:45 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:12.846 23:16:45 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:14.255 23:16:46 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:14.255 23:16:46 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:14.255 23:16:46 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:14.255 23:16:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:14.255 23:16:46 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:14.255 23:16:46 -- common/autotest_common.sh@10 -- # set +x 00:31:14.255 23:16:46 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:14.255 23:16:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:14.255 23:16:46 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:14.255 23:16:46 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:15.200 [2024-07-24 23:16:47.304397] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:31:15.200 [2024-07-24 23:16:47.304441] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:15.200 [2024-07-24 23:16:47.304453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.200 [2024-07-24 23:16:47.304466] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:15.200 [2024-07-24 23:16:47.304475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.200 [2024-07-24 
23:16:47.304485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:15.200 [2024-07-24 23:16:47.304495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.200 [2024-07-24 23:16:47.304505] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:15.200 [2024-07-24 23:16:47.304514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.200 [2024-07-24 23:16:47.304523] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:31:15.201 [2024-07-24 23:16:47.304532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.201 [2024-07-24 23:16:47.304541] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae48c0 is same with the state(5) to be set 00:31:15.201 [2024-07-24 23:16:47.314418] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xae48c0 (9): Bad file descriptor 00:31:15.201 [2024-07-24 23:16:47.324459] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:15.201 23:16:47 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:15.201 23:16:47 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:15.201 23:16:47 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:15.201 23:16:47 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:15.201 23:16:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:15.201 23:16:47 -- common/autotest_common.sh@10 -- # set +x 00:31:15.201 23:16:47 -- 
host/discovery_remove_ifc.sh@29 -- # sort 00:31:16.136 [2024-07-24 23:16:48.343785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:31:17.069 [2024-07-24 23:16:49.369731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:31:17.070 [2024-07-24 23:16:49.369779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae48c0 with addr=10.0.0.2, port=4420 00:31:17.070 [2024-07-24 23:16:49.369799] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae48c0 is same with the state(5) to be set 00:31:17.070 [2024-07-24 23:16:49.369827] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:17.070 [2024-07-24 23:16:49.369838] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:17.070 [2024-07-24 23:16:49.369846] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:17.070 [2024-07-24 23:16:49.369856] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:31:17.070 [2024-07-24 23:16:49.370147] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xae48c0 (9): Bad file descriptor 00:31:17.070 [2024-07-24 23:16:49.370169] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:17.070 [2024-07-24 23:16:49.370188] bdev_nvme.c:6510:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:31:17.070 [2024-07-24 23:16:49.370211] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:17.070 [2024-07-24 23:16:49.370223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.070 [2024-07-24 23:16:49.370235] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:17.070 [2024-07-24 23:16:49.370245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.070 [2024-07-24 23:16:49.370254] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:17.070 [2024-07-24 23:16:49.370263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.070 [2024-07-24 23:16:49.370272] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:17.070 [2024-07-24 23:16:49.370281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.070 [2024-07-24 23:16:49.370291] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:31:17.070 [2024-07-24 23:16:49.370300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.070 [2024-07-24 23:16:49.370309] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: 
[nqn.2014-08.org.nvmexpress.discovery] in failed state. 00:31:17.070 [2024-07-24 23:16:49.370827] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xae4cd0 (9): Bad file descriptor 00:31:17.070 [2024-07-24 23:16:49.371840] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:31:17.070 [2024-07-24 23:16:49.371854] nvme_ctrlr.c:1136:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:31:17.070 23:16:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:17.070 23:16:49 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:17.070 23:16:49 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:18.003 23:16:50 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:18.003 23:16:50 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:18.003 23:16:50 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:18.003 23:16:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:18.003 23:16:50 -- common/autotest_common.sh@10 -- # set +x 00:31:18.003 23:16:50 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:18.003 23:16:50 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:18.003 23:16:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:18.261 23:16:50 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:31:18.261 23:16:50 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:18.261 23:16:50 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:18.261 23:16:50 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:31:18.261 23:16:50 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:18.261 23:16:50 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:18.261 23:16:50 -- host/discovery_remove_ifc.sh@29 
-- # jq -r '.[].name' 00:31:18.261 23:16:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:18.261 23:16:50 -- common/autotest_common.sh@10 -- # set +x 00:31:18.261 23:16:50 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:18.261 23:16:50 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:18.261 23:16:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:18.261 23:16:50 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:31:18.261 23:16:50 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:19.196 [2024-07-24 23:16:51.382260] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:19.196 [2024-07-24 23:16:51.382282] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:19.196 [2024-07-24 23:16:51.382296] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:19.196 [2024-07-24 23:16:51.512676] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:31:19.196 [2024-07-24 23:16:51.614238] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:31:19.196 [2024-07-24 23:16:51.614271] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:31:19.196 [2024-07-24 23:16:51.614288] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:31:19.196 [2024-07-24 23:16:51.614303] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:31:19.196 [2024-07-24 23:16:51.614311] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:19.196 23:16:51 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:19.196 23:16:51 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:19.196 23:16:51 -- 
host/discovery_remove_ifc.sh@29 -- # xargs 00:31:19.196 23:16:51 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:19.196 23:16:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:19.196 23:16:51 -- common/autotest_common.sh@10 -- # set +x 00:31:19.196 23:16:51 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:19.196 [2024-07-24 23:16:51.621819] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xaf2390 was disconnected and freed. delete nvme_qpair. 00:31:19.454 23:16:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:19.454 23:16:51 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:31:19.454 23:16:51 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:31:19.454 23:16:51 -- host/discovery_remove_ifc.sh@90 -- # killprocess 3391303 00:31:19.454 23:16:51 -- common/autotest_common.sh@926 -- # '[' -z 3391303 ']' 00:31:19.454 23:16:51 -- common/autotest_common.sh@930 -- # kill -0 3391303 00:31:19.454 23:16:51 -- common/autotest_common.sh@931 -- # uname 00:31:19.454 23:16:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:19.454 23:16:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3391303 00:31:19.454 23:16:51 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:31:19.454 23:16:51 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:31:19.454 23:16:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3391303' 00:31:19.454 killing process with pid 3391303 00:31:19.454 23:16:51 -- common/autotest_common.sh@945 -- # kill 3391303 00:31:19.454 23:16:51 -- common/autotest_common.sh@950 -- # wait 3391303 00:31:19.454 23:16:51 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:31:19.454 23:16:51 -- nvmf/common.sh@476 -- # nvmfcleanup 00:31:19.454 23:16:51 -- nvmf/common.sh@116 -- # sync 00:31:19.712 23:16:51 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:31:19.712 23:16:51 -- nvmf/common.sh@119 -- # set 
+e 00:31:19.712 23:16:51 -- nvmf/common.sh@120 -- # for i in {1..20} 00:31:19.712 23:16:51 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:31:19.712 rmmod nvme_tcp 00:31:19.712 rmmod nvme_fabrics 00:31:19.712 rmmod nvme_keyring 00:31:19.712 23:16:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:31:19.712 23:16:51 -- nvmf/common.sh@123 -- # set -e 00:31:19.712 23:16:51 -- nvmf/common.sh@124 -- # return 0 00:31:19.712 23:16:51 -- nvmf/common.sh@477 -- # '[' -n 3391034 ']' 00:31:19.712 23:16:51 -- nvmf/common.sh@478 -- # killprocess 3391034 00:31:19.712 23:16:51 -- common/autotest_common.sh@926 -- # '[' -z 3391034 ']' 00:31:19.712 23:16:51 -- common/autotest_common.sh@930 -- # kill -0 3391034 00:31:19.712 23:16:51 -- common/autotest_common.sh@931 -- # uname 00:31:19.712 23:16:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:19.712 23:16:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3391034 00:31:19.712 23:16:51 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:31:19.712 23:16:52 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:31:19.712 23:16:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3391034' 00:31:19.712 killing process with pid 3391034 00:31:19.712 23:16:52 -- common/autotest_common.sh@945 -- # kill 3391034 00:31:19.712 23:16:52 -- common/autotest_common.sh@950 -- # wait 3391034 00:31:19.970 23:16:52 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:31:19.970 23:16:52 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:31:19.970 23:16:52 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:31:19.970 23:16:52 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:19.970 23:16:52 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:31:19.970 23:16:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:19.970 23:16:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:19.970 23:16:52 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:21.873 23:16:54 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:31:21.873 00:31:21.873 real 0m22.793s 00:31:21.873 user 0m25.416s 00:31:21.873 sys 0m7.442s 00:31:21.873 23:16:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:21.873 23:16:54 -- common/autotest_common.sh@10 -- # set +x 00:31:21.873 ************************************ 00:31:21.873 END TEST nvmf_discovery_remove_ifc 00:31:21.873 ************************************ 00:31:21.873 23:16:54 -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:31:21.873 23:16:54 -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:31:21.873 23:16:54 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:31:21.873 23:16:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:21.873 23:16:54 -- common/autotest_common.sh@10 -- # set +x 00:31:22.131 ************************************ 00:31:22.131 START TEST nvmf_digest 00:31:22.131 ************************************ 00:31:22.131 23:16:54 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:31:22.131 * Looking for test storage... 
00:31:22.131 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:22.131 23:16:54 -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:22.131 23:16:54 -- nvmf/common.sh@7 -- # uname -s 00:31:22.131 23:16:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:22.131 23:16:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:22.131 23:16:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:22.131 23:16:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:22.131 23:16:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:22.131 23:16:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:22.131 23:16:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:22.131 23:16:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:22.131 23:16:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:22.131 23:16:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:22.131 23:16:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:31:22.131 23:16:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:31:22.131 23:16:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:22.131 23:16:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:22.131 23:16:54 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:22.131 23:16:54 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:22.131 23:16:54 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:22.131 23:16:54 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:22.131 23:16:54 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:22.131 23:16:54 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.131 23:16:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.131 23:16:54 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.131 23:16:54 -- paths/export.sh@5 -- # export PATH 00:31:22.131 23:16:54 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.131 23:16:54 -- nvmf/common.sh@46 -- # : 0 00:31:22.131 23:16:54 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:31:22.131 23:16:54 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:31:22.131 23:16:54 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:31:22.131 23:16:54 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:22.131 23:16:54 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:22.131 23:16:54 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:31:22.131 23:16:54 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:31:22.131 23:16:54 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:31:22.131 23:16:54 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:31:22.131 23:16:54 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:31:22.131 23:16:54 -- host/digest.sh@16 -- # runtime=2 00:31:22.131 23:16:54 -- host/digest.sh@130 -- # [[ tcp != \t\c\p ]] 00:31:22.131 23:16:54 -- host/digest.sh@132 -- # nvmftestinit 00:31:22.131 23:16:54 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:31:22.131 23:16:54 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:22.131 23:16:54 -- nvmf/common.sh@436 -- # prepare_net_devs 00:31:22.131 23:16:54 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:31:22.131 23:16:54 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:31:22.131 23:16:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:22.131 23:16:54 -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:31:22.131 23:16:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:22.131 23:16:54 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:31:22.131 23:16:54 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:31:22.131 23:16:54 -- nvmf/common.sh@284 -- # xtrace_disable 00:31:22.131 23:16:54 -- common/autotest_common.sh@10 -- # set +x 00:31:28.689 23:17:00 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:31:28.689 23:17:00 -- nvmf/common.sh@290 -- # pci_devs=() 00:31:28.689 23:17:00 -- nvmf/common.sh@290 -- # local -a pci_devs 00:31:28.689 23:17:00 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:31:28.689 23:17:00 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:31:28.689 23:17:00 -- nvmf/common.sh@292 -- # pci_drivers=() 00:31:28.689 23:17:00 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:31:28.689 23:17:00 -- nvmf/common.sh@294 -- # net_devs=() 00:31:28.689 23:17:00 -- nvmf/common.sh@294 -- # local -ga net_devs 00:31:28.689 23:17:00 -- nvmf/common.sh@295 -- # e810=() 00:31:28.689 23:17:00 -- nvmf/common.sh@295 -- # local -ga e810 00:31:28.689 23:17:00 -- nvmf/common.sh@296 -- # x722=() 00:31:28.689 23:17:00 -- nvmf/common.sh@296 -- # local -ga x722 00:31:28.689 23:17:00 -- nvmf/common.sh@297 -- # mlx=() 00:31:28.689 23:17:00 -- nvmf/common.sh@297 -- # local -ga mlx 00:31:28.689 23:17:00 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:28.689 23:17:00 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:28.689 23:17:00 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:28.689 23:17:00 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:28.689 23:17:00 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:28.689 23:17:00 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:28.689 23:17:00 -- nvmf/common.sh@311 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:28.689 23:17:00 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:28.689 23:17:00 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:28.689 23:17:00 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:28.689 23:17:00 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:28.689 23:17:00 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:31:28.689 23:17:00 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:31:28.689 23:17:00 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:31:28.689 23:17:00 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:31:28.689 23:17:00 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:31:28.689 23:17:00 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:31:28.689 23:17:00 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:31:28.689 23:17:00 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:28.689 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:28.689 23:17:00 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:31:28.689 23:17:00 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:31:28.689 23:17:00 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:28.689 23:17:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:28.689 23:17:00 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:31:28.689 23:17:00 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:31:28.689 23:17:00 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:28.689 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:28.689 23:17:00 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:31:28.689 23:17:00 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:31:28.689 23:17:00 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:28.689 23:17:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:28.689 23:17:00 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 
00:31:28.689 23:17:00 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:31:28.689 23:17:00 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:31:28.689 23:17:00 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:31:28.689 23:17:00 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:31:28.689 23:17:00 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:28.689 23:17:00 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:31:28.689 23:17:00 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:28.689 23:17:00 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:28.689 Found net devices under 0000:af:00.0: cvl_0_0 00:31:28.689 23:17:00 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:31:28.689 23:17:00 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:31:28.689 23:17:00 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:28.689 23:17:00 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:31:28.689 23:17:00 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:28.689 23:17:00 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:28.689 Found net devices under 0000:af:00.1: cvl_0_1 00:31:28.689 23:17:00 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:31:28.689 23:17:00 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:31:28.689 23:17:00 -- nvmf/common.sh@402 -- # is_hw=yes 00:31:28.689 23:17:00 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:31:28.689 23:17:00 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:31:28.689 23:17:00 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:31:28.689 23:17:00 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:28.689 23:17:00 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:28.689 23:17:00 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:28.689 23:17:00 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:31:28.689 23:17:00 -- nvmf/common.sh@235 -- 
# NVMF_TARGET_INTERFACE=cvl_0_0 00:31:28.689 23:17:00 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:28.689 23:17:00 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:31:28.689 23:17:00 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:28.689 23:17:00 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:28.689 23:17:00 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:31:28.689 23:17:00 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:31:28.689 23:17:00 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:31:28.689 23:17:00 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:28.689 23:17:00 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:28.689 23:17:00 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:28.689 23:17:00 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:31:28.689 23:17:00 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:28.689 23:17:01 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:28.689 23:17:01 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:28.689 23:17:01 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:31:28.689 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:28.689 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.294 ms 00:31:28.689 00:31:28.689 --- 10.0.0.2 ping statistics --- 00:31:28.689 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:28.689 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:31:28.689 23:17:01 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:28.689 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:28.689 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:31:28.689 00:31:28.689 --- 10.0.0.1 ping statistics --- 00:31:28.689 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:28.689 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:31:28.689 23:17:01 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:28.689 23:17:01 -- nvmf/common.sh@410 -- # return 0 00:31:28.689 23:17:01 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:31:28.689 23:17:01 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:28.689 23:17:01 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:31:28.689 23:17:01 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:31:28.689 23:17:01 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:28.689 23:17:01 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:31:28.689 23:17:01 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:31:28.947 23:17:01 -- host/digest.sh@134 -- # trap cleanup SIGINT SIGTERM EXIT 00:31:28.947 23:17:01 -- host/digest.sh@135 -- # run_test nvmf_digest_clean run_digest 00:31:28.947 23:17:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:28.947 23:17:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:28.947 23:17:01 -- common/autotest_common.sh@10 -- # set +x 00:31:28.947 ************************************ 00:31:28.947 START TEST nvmf_digest_clean 00:31:28.947 ************************************ 00:31:28.947 23:17:01 -- common/autotest_common.sh@1104 -- # run_digest 00:31:28.947 23:17:01 -- host/digest.sh@119 -- # nvmfappstart --wait-for-rpc 00:31:28.947 23:17:01 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:31:28.947 23:17:01 -- common/autotest_common.sh@712 -- # xtrace_disable 00:31:28.947 23:17:01 -- common/autotest_common.sh@10 -- # set +x 00:31:28.947 23:17:01 -- nvmf/common.sh@469 -- # nvmfpid=3396964 00:31:28.947 23:17:01 -- nvmf/common.sh@470 -- # waitforlisten 3396964 00:31:28.947 23:17:01 -- 
nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:31:28.947 23:17:01 -- common/autotest_common.sh@819 -- # '[' -z 3396964 ']' 00:31:28.947 23:17:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:28.947 23:17:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:28.947 23:17:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:28.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:28.948 23:17:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:28.948 23:17:01 -- common/autotest_common.sh@10 -- # set +x 00:31:28.948 [2024-07-24 23:17:01.196322] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:31:28.948 [2024-07-24 23:17:01.196377] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:28.948 EAL: No free 2048 kB hugepages reported on node 1 00:31:28.948 [2024-07-24 23:17:01.272510] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:28.948 [2024-07-24 23:17:01.309950] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:28.948 [2024-07-24 23:17:01.310055] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:28.948 [2024-07-24 23:17:01.310065] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:28.948 [2024-07-24 23:17:01.310074] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:28.948 [2024-07-24 23:17:01.310093] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:29.880 23:17:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:29.880 23:17:01 -- common/autotest_common.sh@852 -- # return 0 00:31:29.880 23:17:01 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:31:29.880 23:17:01 -- common/autotest_common.sh@718 -- # xtrace_disable 00:31:29.880 23:17:01 -- common/autotest_common.sh@10 -- # set +x 00:31:29.880 23:17:02 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:29.880 23:17:02 -- host/digest.sh@120 -- # common_target_config 00:31:29.880 23:17:02 -- host/digest.sh@43 -- # rpc_cmd 00:31:29.880 23:17:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:29.880 23:17:02 -- common/autotest_common.sh@10 -- # set +x 00:31:29.880 null0 00:31:29.880 [2024-07-24 23:17:02.115478] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:29.880 [2024-07-24 23:17:02.139670] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:29.880 23:17:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:29.880 23:17:02 -- host/digest.sh@122 -- # run_bperf randread 4096 128 00:31:29.880 23:17:02 -- host/digest.sh@77 -- # local rw bs qd 00:31:29.880 23:17:02 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:31:29.880 23:17:02 -- host/digest.sh@80 -- # rw=randread 00:31:29.880 23:17:02 -- host/digest.sh@80 -- # bs=4096 00:31:29.880 23:17:02 -- host/digest.sh@80 -- # qd=128 00:31:29.880 23:17:02 -- host/digest.sh@82 -- # bperfpid=3397198 00:31:29.880 23:17:02 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:31:29.880 23:17:02 -- host/digest.sh@83 -- # waitforlisten 3397198 /var/tmp/bperf.sock 00:31:29.880 23:17:02 -- 
common/autotest_common.sh@819 -- # '[' -z 3397198 ']' 00:31:29.880 23:17:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:29.880 23:17:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:29.880 23:17:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:29.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:29.881 23:17:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:29.881 23:17:02 -- common/autotest_common.sh@10 -- # set +x 00:31:29.881 [2024-07-24 23:17:02.186231] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:31:29.881 [2024-07-24 23:17:02.186277] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3397198 ] 00:31:29.881 EAL: No free 2048 kB hugepages reported on node 1 00:31:29.881 [2024-07-24 23:17:02.257846] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:29.881 [2024-07-24 23:17:02.294697] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:30.138 23:17:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:30.138 23:17:02 -- common/autotest_common.sh@852 -- # return 0 00:31:30.139 23:17:02 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:31:30.139 23:17:02 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:31:30.139 23:17:02 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:31:30.139 23:17:02 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:30.139 23:17:02 -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:30.396 nvme0n1 00:31:30.396 23:17:02 -- host/digest.sh@91 -- # bperf_py perform_tests 00:31:30.396 23:17:02 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:30.654 Running I/O for 2 seconds... 00:31:32.553 00:31:32.553 Latency(us) 00:31:32.553 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:32.553 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:31:32.553 nvme0n1 : 2.00 30810.72 120.35 0.00 0.00 4150.19 2175.80 16462.64 00:31:32.553 =================================================================================================================== 00:31:32.553 Total : 30810.72 120.35 0.00 0.00 4150.19 2175.80 16462.64 00:31:32.553 0 00:31:32.553 23:17:04 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:31:32.553 23:17:04 -- host/digest.sh@92 -- # get_accel_stats 00:31:32.553 23:17:04 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:31:32.553 23:17:04 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:31:32.553 | select(.opcode=="crc32c") 00:31:32.553 | "\(.module_name) \(.executed)"' 00:31:32.553 23:17:04 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:31:32.811 23:17:05 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:31:32.811 23:17:05 -- host/digest.sh@93 -- # exp_module=software 00:31:32.811 23:17:05 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:31:32.811 23:17:05 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:31:32.811 23:17:05 -- host/digest.sh@97 -- # killprocess 3397198 00:31:32.811 23:17:05 -- common/autotest_common.sh@926 -- # '[' -z 3397198 ']' 00:31:32.811 23:17:05 -- 
common/autotest_common.sh@930 -- # kill -0 3397198 00:31:32.811 23:17:05 -- common/autotest_common.sh@931 -- # uname 00:31:32.811 23:17:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:32.811 23:17:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3397198 00:31:32.811 23:17:05 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:31:32.811 23:17:05 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:31:32.811 23:17:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3397198' 00:31:32.811 killing process with pid 3397198 00:31:32.811 23:17:05 -- common/autotest_common.sh@945 -- # kill 3397198 00:31:32.811 Received shutdown signal, test time was about 2.000000 seconds 00:31:32.811 00:31:32.811 Latency(us) 00:31:32.811 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:32.811 =================================================================================================================== 00:31:32.811 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:32.811 23:17:05 -- common/autotest_common.sh@950 -- # wait 3397198 00:31:33.112 23:17:05 -- host/digest.sh@123 -- # run_bperf randread 131072 16 00:31:33.112 23:17:05 -- host/digest.sh@77 -- # local rw bs qd 00:31:33.112 23:17:05 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:31:33.112 23:17:05 -- host/digest.sh@80 -- # rw=randread 00:31:33.112 23:17:05 -- host/digest.sh@80 -- # bs=131072 00:31:33.112 23:17:05 -- host/digest.sh@80 -- # qd=16 00:31:33.112 23:17:05 -- host/digest.sh@82 -- # bperfpid=3397698 00:31:33.112 23:17:05 -- host/digest.sh@83 -- # waitforlisten 3397698 /var/tmp/bperf.sock 00:31:33.112 23:17:05 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:31:33.112 23:17:05 -- common/autotest_common.sh@819 -- # '[' -z 3397698 ']' 00:31:33.112 23:17:05 -- 
common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:33.112 23:17:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:33.112 23:17:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:33.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:33.112 23:17:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:33.112 23:17:05 -- common/autotest_common.sh@10 -- # set +x 00:31:33.112 [2024-07-24 23:17:05.346857] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:31:33.112 [2024-07-24 23:17:05.346910] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3397698 ] 00:31:33.112 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:33.112 Zero copy mechanism will not be used. 
00:31:33.112 EAL: No free 2048 kB hugepages reported on node 1 00:31:33.112 [2024-07-24 23:17:05.418011] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:33.112 [2024-07-24 23:17:05.455452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:34.079 23:17:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:34.079 23:17:06 -- common/autotest_common.sh@852 -- # return 0 00:31:34.079 23:17:06 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:31:34.079 23:17:06 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:31:34.079 23:17:06 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:31:34.079 23:17:06 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:34.079 23:17:06 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:34.337 nvme0n1 00:31:34.337 23:17:06 -- host/digest.sh@91 -- # bperf_py perform_tests 00:31:34.337 23:17:06 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:34.337 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:34.337 Zero copy mechanism will not be used. 00:31:34.337 Running I/O for 2 seconds... 
00:31:36.866 00:31:36.866 Latency(us) 00:31:36.866 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:36.866 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:31:36.866 nvme0n1 : 2.00 4182.95 522.87 0.00 0.00 3822.41 865.08 11219.76 00:31:36.866 =================================================================================================================== 00:31:36.866 Total : 4182.95 522.87 0.00 0.00 3822.41 865.08 11219.76 00:31:36.866 0 00:31:36.866 23:17:08 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:31:36.866 23:17:08 -- host/digest.sh@92 -- # get_accel_stats 00:31:36.866 23:17:08 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:31:36.866 23:17:08 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:31:36.866 23:17:08 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:31:36.866 | select(.opcode=="crc32c") 00:31:36.866 | "\(.module_name) \(.executed)"' 00:31:36.866 23:17:08 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:31:36.866 23:17:08 -- host/digest.sh@93 -- # exp_module=software 00:31:36.866 23:17:08 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:31:36.866 23:17:08 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:31:36.866 23:17:08 -- host/digest.sh@97 -- # killprocess 3397698 00:31:36.866 23:17:08 -- common/autotest_common.sh@926 -- # '[' -z 3397698 ']' 00:31:36.866 23:17:08 -- common/autotest_common.sh@930 -- # kill -0 3397698 00:31:36.866 23:17:08 -- common/autotest_common.sh@931 -- # uname 00:31:36.866 23:17:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:36.866 23:17:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3397698 00:31:36.866 23:17:08 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:31:36.866 23:17:08 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:31:36.866 23:17:08 -- 
common/autotest_common.sh@944 -- # echo 'killing process with pid 3397698' 00:31:36.866 killing process with pid 3397698 00:31:36.866 23:17:08 -- common/autotest_common.sh@945 -- # kill 3397698 00:31:36.866 Received shutdown signal, test time was about 2.000000 seconds 00:31:36.866 00:31:36.866 Latency(us) 00:31:36.866 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:36.867 =================================================================================================================== 00:31:36.867 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:36.867 23:17:08 -- common/autotest_common.sh@950 -- # wait 3397698 00:31:36.867 23:17:09 -- host/digest.sh@124 -- # run_bperf randwrite 4096 128 00:31:36.867 23:17:09 -- host/digest.sh@77 -- # local rw bs qd 00:31:36.867 23:17:09 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:31:36.867 23:17:09 -- host/digest.sh@80 -- # rw=randwrite 00:31:36.867 23:17:09 -- host/digest.sh@80 -- # bs=4096 00:31:36.867 23:17:09 -- host/digest.sh@80 -- # qd=128 00:31:36.867 23:17:09 -- host/digest.sh@82 -- # bperfpid=3398340 00:31:36.867 23:17:09 -- host/digest.sh@83 -- # waitforlisten 3398340 /var/tmp/bperf.sock 00:31:36.867 23:17:09 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:31:36.867 23:17:09 -- common/autotest_common.sh@819 -- # '[' -z 3398340 ']' 00:31:36.867 23:17:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:36.867 23:17:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:36.867 23:17:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:36.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:31:36.867 23:17:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:36.867 23:17:09 -- common/autotest_common.sh@10 -- # set +x 00:31:36.867 [2024-07-24 23:17:09.173492] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:31:36.867 [2024-07-24 23:17:09.173547] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3398340 ] 00:31:36.867 EAL: No free 2048 kB hugepages reported on node 1 00:31:36.867 [2024-07-24 23:17:09.244263] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:36.867 [2024-07-24 23:17:09.279056] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:37.801 23:17:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:37.801 23:17:09 -- common/autotest_common.sh@852 -- # return 0 00:31:37.801 23:17:09 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:31:37.801 23:17:09 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:31:37.801 23:17:09 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:31:37.801 23:17:10 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:37.801 23:17:10 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:38.061 nvme0n1 00:31:38.061 23:17:10 -- host/digest.sh@91 -- # bperf_py perform_tests 00:31:38.061 23:17:10 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:38.320 Running I/O for 2 seconds... 
00:31:40.224 00:31:40.224 Latency(us) 00:31:40.224 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:40.224 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:40.224 nvme0n1 : 2.00 29084.75 113.61 0.00 0.00 4394.07 3538.94 13421.77 00:31:40.224 =================================================================================================================== 00:31:40.224 Total : 29084.75 113.61 0.00 0.00 4394.07 3538.94 13421.77 00:31:40.224 0 00:31:40.224 23:17:12 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:31:40.224 23:17:12 -- host/digest.sh@92 -- # get_accel_stats 00:31:40.224 23:17:12 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:31:40.224 23:17:12 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:31:40.224 23:17:12 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:31:40.224 | select(.opcode=="crc32c") 00:31:40.224 | "\(.module_name) \(.executed)"' 00:31:40.482 23:17:12 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:31:40.482 23:17:12 -- host/digest.sh@93 -- # exp_module=software 00:31:40.482 23:17:12 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:31:40.482 23:17:12 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:31:40.482 23:17:12 -- host/digest.sh@97 -- # killprocess 3398340 00:31:40.482 23:17:12 -- common/autotest_common.sh@926 -- # '[' -z 3398340 ']' 00:31:40.482 23:17:12 -- common/autotest_common.sh@930 -- # kill -0 3398340 00:31:40.482 23:17:12 -- common/autotest_common.sh@931 -- # uname 00:31:40.482 23:17:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:40.482 23:17:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3398340 00:31:40.482 23:17:12 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:31:40.482 23:17:12 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:31:40.482 23:17:12 -- 
common/autotest_common.sh@944 -- # echo 'killing process with pid 3398340' 00:31:40.482 killing process with pid 3398340 00:31:40.482 23:17:12 -- common/autotest_common.sh@945 -- # kill 3398340 00:31:40.482 Received shutdown signal, test time was about 2.000000 seconds 00:31:40.482 00:31:40.482 Latency(us) 00:31:40.482 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:40.482 =================================================================================================================== 00:31:40.482 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:40.482 23:17:12 -- common/autotest_common.sh@950 -- # wait 3398340 00:31:40.741 23:17:12 -- host/digest.sh@125 -- # run_bperf randwrite 131072 16 00:31:40.741 23:17:12 -- host/digest.sh@77 -- # local rw bs qd 00:31:40.741 23:17:12 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:31:40.741 23:17:12 -- host/digest.sh@80 -- # rw=randwrite 00:31:40.741 23:17:12 -- host/digest.sh@80 -- # bs=131072 00:31:40.741 23:17:12 -- host/digest.sh@80 -- # qd=16 00:31:40.741 23:17:12 -- host/digest.sh@82 -- # bperfpid=3398895 00:31:40.741 23:17:12 -- host/digest.sh@83 -- # waitforlisten 3398895 /var/tmp/bperf.sock 00:31:40.741 23:17:12 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:31:40.741 23:17:12 -- common/autotest_common.sh@819 -- # '[' -z 3398895 ']' 00:31:40.741 23:17:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:40.741 23:17:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:40.741 23:17:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:40.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:31:40.741 23:17:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:40.741 23:17:12 -- common/autotest_common.sh@10 -- # set +x 00:31:40.741 [2024-07-24 23:17:13.002013] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:31:40.741 [2024-07-24 23:17:13.002068] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3398895 ] 00:31:40.741 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:40.741 Zero copy mechanism will not be used. 00:31:40.741 EAL: No free 2048 kB hugepages reported on node 1 00:31:40.741 [2024-07-24 23:17:13.075920] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:40.741 [2024-07-24 23:17:13.108630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:40.741 23:17:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:40.741 23:17:13 -- common/autotest_common.sh@852 -- # return 0 00:31:40.741 23:17:13 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:31:40.741 23:17:13 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:31:40.741 23:17:13 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:31:41.000 23:17:13 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:41.000 23:17:13 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:41.257 nvme0n1 00:31:41.257 23:17:13 -- host/digest.sh@91 -- # bperf_py perform_tests 00:31:41.257 23:17:13 -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:41.515 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:41.515 Zero copy mechanism will not be used. 00:31:41.515 Running I/O for 2 seconds... 00:31:43.416 00:31:43.416 Latency(us) 00:31:43.416 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:43.416 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:31:43.416 nvme0n1 : 2.00 4946.65 618.33 0.00 0.00 3228.47 2070.94 22439.53 00:31:43.416 =================================================================================================================== 00:31:43.416 Total : 4946.65 618.33 0.00 0.00 3228.47 2070.94 22439.53 00:31:43.416 0 00:31:43.416 23:17:15 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:31:43.416 23:17:15 -- host/digest.sh@92 -- # get_accel_stats 00:31:43.416 23:17:15 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:31:43.416 23:17:15 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:31:43.416 | select(.opcode=="crc32c") 00:31:43.416 | "\(.module_name) \(.executed)"' 00:31:43.416 23:17:15 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:31:43.674 23:17:15 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:31:43.674 23:17:15 -- host/digest.sh@93 -- # exp_module=software 00:31:43.674 23:17:15 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:31:43.674 23:17:15 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:31:43.674 23:17:15 -- host/digest.sh@97 -- # killprocess 3398895 00:31:43.674 23:17:15 -- common/autotest_common.sh@926 -- # '[' -z 3398895 ']' 00:31:43.674 23:17:15 -- common/autotest_common.sh@930 -- # kill -0 3398895 00:31:43.674 23:17:15 -- common/autotest_common.sh@931 -- # uname 00:31:43.674 23:17:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:43.674 
23:17:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3398895 00:31:43.674 23:17:15 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:31:43.674 23:17:15 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:31:43.674 23:17:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3398895' 00:31:43.674 killing process with pid 3398895 00:31:43.674 23:17:15 -- common/autotest_common.sh@945 -- # kill 3398895 00:31:43.674 Received shutdown signal, test time was about 2.000000 seconds 00:31:43.674 00:31:43.674 Latency(us) 00:31:43.674 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:43.674 =================================================================================================================== 00:31:43.674 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:43.674 23:17:15 -- common/autotest_common.sh@950 -- # wait 3398895 00:31:43.933 23:17:16 -- host/digest.sh@126 -- # killprocess 3396964 00:31:43.933 23:17:16 -- common/autotest_common.sh@926 -- # '[' -z 3396964 ']' 00:31:43.933 23:17:16 -- common/autotest_common.sh@930 -- # kill -0 3396964 00:31:43.933 23:17:16 -- common/autotest_common.sh@931 -- # uname 00:31:43.933 23:17:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:43.933 23:17:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3396964 00:31:43.933 23:17:16 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:31:43.933 23:17:16 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:31:43.933 23:17:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3396964' 00:31:43.933 killing process with pid 3396964 00:31:43.933 23:17:16 -- common/autotest_common.sh@945 -- # kill 3396964 00:31:43.933 23:17:16 -- common/autotest_common.sh@950 -- # wait 3396964 00:31:43.933 00:31:43.933 real 0m15.215s 00:31:43.933 user 0m27.992s 00:31:43.933 sys 0m5.039s 00:31:43.933 23:17:16 -- common/autotest_common.sh@1105 
-- # xtrace_disable 00:31:43.933 23:17:16 -- common/autotest_common.sh@10 -- # set +x 00:31:43.933 ************************************ 00:31:43.933 END TEST nvmf_digest_clean 00:31:43.933 ************************************ 00:31:44.191 23:17:16 -- host/digest.sh@136 -- # run_test nvmf_digest_error run_digest_error 00:31:44.191 23:17:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:44.191 23:17:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:44.191 23:17:16 -- common/autotest_common.sh@10 -- # set +x 00:31:44.191 ************************************ 00:31:44.191 START TEST nvmf_digest_error 00:31:44.191 ************************************ 00:31:44.191 23:17:16 -- common/autotest_common.sh@1104 -- # run_digest_error 00:31:44.191 23:17:16 -- host/digest.sh@101 -- # nvmfappstart --wait-for-rpc 00:31:44.191 23:17:16 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:31:44.191 23:17:16 -- common/autotest_common.sh@712 -- # xtrace_disable 00:31:44.191 23:17:16 -- common/autotest_common.sh@10 -- # set +x 00:31:44.191 23:17:16 -- nvmf/common.sh@469 -- # nvmfpid=3399534 00:31:44.191 23:17:16 -- nvmf/common.sh@470 -- # waitforlisten 3399534 00:31:44.191 23:17:16 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:31:44.191 23:17:16 -- common/autotest_common.sh@819 -- # '[' -z 3399534 ']' 00:31:44.191 23:17:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:44.191 23:17:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:44.191 23:17:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:44.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:31:44.191 23:17:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:44.191 23:17:16 -- common/autotest_common.sh@10 -- # set +x 00:31:44.191 [2024-07-24 23:17:16.462588] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:31:44.191 [2024-07-24 23:17:16.462639] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:44.191 EAL: No free 2048 kB hugepages reported on node 1 00:31:44.191 [2024-07-24 23:17:16.537028] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:44.191 [2024-07-24 23:17:16.574365] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:44.191 [2024-07-24 23:17:16.574468] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:44.191 [2024-07-24 23:17:16.574477] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:44.191 [2024-07-24 23:17:16.574486] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:44.191 [2024-07-24 23:17:16.574503] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:45.126 23:17:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:45.126 23:17:17 -- common/autotest_common.sh@852 -- # return 0 00:31:45.126 23:17:17 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:31:45.126 23:17:17 -- common/autotest_common.sh@718 -- # xtrace_disable 00:31:45.126 23:17:17 -- common/autotest_common.sh@10 -- # set +x 00:31:45.126 23:17:17 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:45.126 23:17:17 -- host/digest.sh@103 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:31:45.126 23:17:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:45.126 23:17:17 -- common/autotest_common.sh@10 -- # set +x 00:31:45.126 [2024-07-24 23:17:17.296612] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:31:45.126 23:17:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:45.126 23:17:17 -- host/digest.sh@104 -- # common_target_config 00:31:45.126 23:17:17 -- host/digest.sh@43 -- # rpc_cmd 00:31:45.126 23:17:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:45.126 23:17:17 -- common/autotest_common.sh@10 -- # set +x 00:31:45.126 null0 00:31:45.126 [2024-07-24 23:17:17.384709] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:45.126 [2024-07-24 23:17:17.408905] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:45.126 23:17:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:45.126 23:17:17 -- host/digest.sh@107 -- # run_bperf_err randread 4096 128 00:31:45.126 23:17:17 -- host/digest.sh@54 -- # local rw bs qd 00:31:45.126 23:17:17 -- host/digest.sh@56 -- # rw=randread 00:31:45.126 23:17:17 -- host/digest.sh@56 -- # bs=4096 00:31:45.126 23:17:17 -- host/digest.sh@56 -- # qd=128 00:31:45.126 23:17:17 -- 
host/digest.sh@58 -- # bperfpid=3399744 00:31:45.126 23:17:17 -- host/digest.sh@60 -- # waitforlisten 3399744 /var/tmp/bperf.sock 00:31:45.126 23:17:17 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:31:45.126 23:17:17 -- common/autotest_common.sh@819 -- # '[' -z 3399744 ']' 00:31:45.126 23:17:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:45.126 23:17:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:45.126 23:17:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:45.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:45.126 23:17:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:45.126 23:17:17 -- common/autotest_common.sh@10 -- # set +x 00:31:45.126 [2024-07-24 23:17:17.463183] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:31:45.126 [2024-07-24 23:17:17.463234] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3399744 ] 00:31:45.126 EAL: No free 2048 kB hugepages reported on node 1 00:31:45.126 [2024-07-24 23:17:17.533866] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:45.384 [2024-07-24 23:17:17.571647] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:45.950 23:17:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:45.950 23:17:18 -- common/autotest_common.sh@852 -- # return 0 00:31:45.950 23:17:18 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:31:45.950 23:17:18 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:31:46.207 23:17:18 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:31:46.207 23:17:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:46.207 23:17:18 -- common/autotest_common.sh@10 -- # set +x 00:31:46.207 23:17:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:46.207 23:17:18 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:46.207 23:17:18 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:46.207 nvme0n1 00:31:46.207 23:17:18 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:31:46.207 23:17:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:46.207 23:17:18 -- common/autotest_common.sh@10 -- # 
set +x 00:31:46.465 23:17:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:46.465 23:17:18 -- host/digest.sh@69 -- # bperf_py perform_tests 00:31:46.465 23:17:18 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:46.465 Running I/O for 2 seconds... 00:31:46.465 [2024-07-24 23:17:18.742602] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:46.465 [2024-07-24 23:17:18.742637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.465 [2024-07-24 23:17:18.742651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.465 [2024-07-24 23:17:18.753191] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:46.465 [2024-07-24 23:17:18.753220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:24111 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.465 [2024-07-24 23:17:18.753233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.465 [2024-07-24 23:17:18.760669] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:46.465 [2024-07-24 23:17:18.760695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:9693 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.465 [2024-07-24 23:17:18.760708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.465 [2024-07-24 23:17:18.770936] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:46.465 [2024-07-24 23:17:18.770960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:20834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.465 [2024-07-24 23:17:18.770971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.465 [2024-07-24 23:17:18.778270] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:46.465 [2024-07-24 23:17:18.778293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.465 [2024-07-24 23:17:18.778304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.465 [2024-07-24 23:17:18.787701] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:46.465 [2024-07-24 23:17:18.787729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:12858 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.465 [2024-07-24 23:17:18.787740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.465 [2024-07-24 23:17:18.794756] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:46.465 [2024-07-24 23:17:18.794778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:5905 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.465 [2024-07-24 23:17:18.794789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:31:46.465 [2024-07-24 23:17:18.803705] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:46.465 [2024-07-24 23:17:18.803732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:9607 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.465 [2024-07-24 23:17:18.803743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.465 [2024-07-24 23:17:18.811877] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:46.465 [2024-07-24 23:17:18.811899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:15378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.465 [2024-07-24 23:17:18.811913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.465 [2024-07-24 23:17:18.820591] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:46.465 [2024-07-24 23:17:18.820611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:16308 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.465 [2024-07-24 23:17:18.820622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.465 [2024-07-24 23:17:18.828587] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:46.465 [2024-07-24 23:17:18.828607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:1296 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.465 [2024-07-24 23:17:18.828618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.465 [2024-07-24 23:17:18.836689] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:46.465 [2024-07-24 23:17:18.836710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:19965 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.465 [2024-07-24 23:17:18.836727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.465 [2024-07-24 23:17:18.845328] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:46.465 [2024-07-24 23:17:18.845349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:18030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.465 [2024-07-24 23:17:18.845360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.465 [2024-07-24 23:17:18.853498] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:46.465 [2024-07-24 23:17:18.853518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:7836 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.465 [2024-07-24 23:17:18.853529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.465 [2024-07-24 23:17:18.861540] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:46.465 [2024-07-24 23:17:18.861560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:11081 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.465 [2024-07-24 23:17:18.861571] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.465 [2024-07-24 23:17:18.870163] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:46.465 [2024-07-24 23:17:18.870185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:13309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.465 [2024-07-24 23:17:18.870197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.465 [2024-07-24 23:17:18.878209] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:46.466 [2024-07-24 23:17:18.878231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:15908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.466 [2024-07-24 23:17:18.878242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.466 [2024-07-24 23:17:18.886242] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:46.466 [2024-07-24 23:17:18.886266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:15079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.466 [2024-07-24 23:17:18.886277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.466 [2024-07-24 23:17:18.894330] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:46.466 [2024-07-24 23:17:18.894352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:4249 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:31:46.466 [2024-07-24 23:17:18.894363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.724 [2024-07-24 23:17:18.903266] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:46.724 [2024-07-24 23:17:18.903288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:24496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.724 [2024-07-24 23:17:18.903299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.724 [2024-07-24 23:17:18.911207] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:46.724 [2024-07-24 23:17:18.911228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:7689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.724 [2024-07-24 23:17:18.911239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.724 [2024-07-24 23:17:18.919168] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:46.724 [2024-07-24 23:17:18.919188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:4722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.724 [2024-07-24 23:17:18.919199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.724 [2024-07-24 23:17:18.927861] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:46.724 [2024-07-24 23:17:18.927882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:44 nsid:1 lba:15693 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.724 [2024-07-24 23:17:18.927893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.724 [2024-07-24 23:17:18.935960] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:46.724 [2024-07-24 23:17:18.935981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:25137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.724 [2024-07-24 23:17:18.935992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.724 [2024-07-24 23:17:18.944086] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:46.724 [2024-07-24 23:17:18.944106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:18788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.724 [2024-07-24 23:17:18.944117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.724 [2024-07-24 23:17:18.951888] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:46.724 [2024-07-24 23:17:18.951909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:19217 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.724 [2024-07-24 23:17:18.951919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.724 [2024-07-24 23:17:18.960491] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:46.724 [2024-07-24 23:17:18.960511] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:20770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.724 [2024-07-24 23:17:18.960522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.724 [2024-07-24 23:17:18.968600] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:46.724 [2024-07-24 23:17:18.968621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:11144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.725 [2024-07-24 23:17:18.968631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.725 [2024-07-24 23:17:18.977202] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:46.725 [2024-07-24 23:17:18.977224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.725 [2024-07-24 23:17:18.977235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.725 [2024-07-24 23:17:18.985141] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:46.725 [2024-07-24 23:17:18.985162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:7303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.725 [2024-07-24 23:17:18.985172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.725 [2024-07-24 23:17:18.993530] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xc30d70) 00:31:46.725 [2024-07-24 23:17:18.993551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:18760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.725 [2024-07-24 23:17:18.993561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.725 [2024-07-24 23:17:19.001514] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:46.725 [2024-07-24 23:17:19.001536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:13488 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.725 [2024-07-24 23:17:19.001546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.725 [2024-07-24 23:17:19.010542] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:46.725 [2024-07-24 23:17:19.010562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:445 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.725 [2024-07-24 23:17:19.010573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.725 [2024-07-24 23:17:19.018544] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:46.725 [2024-07-24 23:17:19.018565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.725 [2024-07-24 23:17:19.018577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.725 [2024-07-24 23:17:19.026703] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:46.725 [2024-07-24 23:17:19.026729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:14937 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.725 [2024-07-24 23:17:19.026743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.725 [2024-07-24 23:17:19.035295] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:46.725 [2024-07-24 23:17:19.035315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:18832 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.725 [2024-07-24 23:17:19.035326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.725 [2024-07-24 23:17:19.043190] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:46.725 [2024-07-24 23:17:19.043211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:4471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.725 [2024-07-24 23:17:19.043221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.725 [2024-07-24 23:17:19.051403] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:46.725 [2024-07-24 23:17:19.051423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:25255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.725 [2024-07-24 23:17:19.051434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:31:46.725 [2024-07-24 23:17:19.059791] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:46.725 [2024-07-24 23:17:19.059811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:19046 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.725 [2024-07-24 23:17:19.059821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.725 [2024-07-24 23:17:19.067846] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:46.725 [2024-07-24 23:17:19.067867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:11012 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.725 [2024-07-24 23:17:19.067877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.725 [2024-07-24 23:17:19.075784] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:46.725 [2024-07-24 23:17:19.075805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:23068 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.725 [2024-07-24 23:17:19.075816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.725 [2024-07-24 23:17:19.084551] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:46.725 [2024-07-24 23:17:19.084571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:7813 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.725 [2024-07-24 23:17:19.084582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.725 [2024-07-24 23:17:19.092443] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:46.725 [2024-07-24 23:17:19.092463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.725 [2024-07-24 23:17:19.092473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.725 [2024-07-24 23:17:19.100573] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:46.725 [2024-07-24 23:17:19.100596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:9429 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.725 [2024-07-24 23:17:19.100607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.725 [2024-07-24 23:17:19.108967] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:46.725 [2024-07-24 23:17:19.108987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.725 [2024-07-24 23:17:19.108997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.725 [2024-07-24 23:17:19.117018] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:46.725 [2024-07-24 23:17:19.117038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:15774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.725 [2024-07-24 23:17:19.117048] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.725 [2024-07-24 23:17:19.125198] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:46.725 [2024-07-24 23:17:19.125219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:2770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.725 [2024-07-24 23:17:19.125229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.725 [2024-07-24 23:17:19.133721] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:46.725 [2024-07-24 23:17:19.133742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:24508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.725 [2024-07-24 23:17:19.133752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.725 [2024-07-24 23:17:19.141732] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:46.725 [2024-07-24 23:17:19.141752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:16494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.725 [2024-07-24 23:17:19.141762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.725 [2024-07-24 23:17:19.149843] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:46.725 [2024-07-24 23:17:19.149864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:21349 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:31:46.725 [2024-07-24 23:17:19.149875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.984 [2024-07-24 23:17:19.158148] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:46.984 [2024-07-24 23:17:19.158169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:11390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.985 [2024-07-24 23:17:19.158180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.985 [2024-07-24 23:17:19.166823] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:46.985 [2024-07-24 23:17:19.166843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:3938 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.985 [2024-07-24 23:17:19.166859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.985 [2024-07-24 23:17:19.174668] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:46.985 [2024-07-24 23:17:19.174689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:15445 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.985 [2024-07-24 23:17:19.174700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.985 [2024-07-24 23:17:19.182744] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:46.985 [2024-07-24 23:17:19.182764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:100 nsid:1 lba:6457 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.985 [2024-07-24 23:17:19.182775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.985 [2024-07-24 23:17:19.191393] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:46.985 [2024-07-24 23:17:19.191413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:24506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.985 [2024-07-24 23:17:19.191423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.985 [2024-07-24 23:17:19.199319] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:46.985 [2024-07-24 23:17:19.199339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:18529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.985 [2024-07-24 23:17:19.199349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.985 [2024-07-24 23:17:19.207420] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:46.985 [2024-07-24 23:17:19.207440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:23964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.985 [2024-07-24 23:17:19.207450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.985 [2024-07-24 23:17:19.216153] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:46.985 [2024-07-24 23:17:19.216173] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:10350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.985 [2024-07-24 23:17:19.216183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.985 [2024-07-24 23:17:19.224208] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:46.985 [2024-07-24 23:17:19.224229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20419 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.985 [2024-07-24 23:17:19.224239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.985 [2024-07-24 23:17:19.232068] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:46.985 [2024-07-24 23:17:19.232089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:16963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.985 [2024-07-24 23:17:19.232099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.985 [2024-07-24 23:17:19.240537] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:46.985 [2024-07-24 23:17:19.240561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:21883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.985 [2024-07-24 23:17:19.240572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.985 [2024-07-24 23:17:19.248848] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xc30d70) 00:31:46.985 [2024-07-24 23:17:19.248869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:21572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.985 [2024-07-24 23:17:19.248879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.985 [2024-07-24 23:17:19.257611] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:46.985 [2024-07-24 23:17:19.257632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.985 [2024-07-24 23:17:19.257642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.985 [2024-07-24 23:17:19.265712] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:46.985 [2024-07-24 23:17:19.265738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3457 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.985 [2024-07-24 23:17:19.265748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.985 [2024-07-24 23:17:19.274014] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:46.985 [2024-07-24 23:17:19.274035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:19862 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.985 [2024-07-24 23:17:19.274045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.985 [2024-07-24 23:17:19.282679] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:46.985 [2024-07-24 23:17:19.282698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:2029 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.985 [2024-07-24 23:17:19.282709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.985 [2024-07-24 23:17:19.290668] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:46.985 [2024-07-24 23:17:19.290688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:5222 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.985 [2024-07-24 23:17:19.290698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.985 [2024-07-24 23:17:19.298840] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:46.985 [2024-07-24 23:17:19.298860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:17798 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.985 [2024-07-24 23:17:19.298870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.985 [2024-07-24 23:17:19.307197] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:46.985 [2024-07-24 23:17:19.307217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:12801 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.985 [2024-07-24 23:17:19.307228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:31:46.985 [2024-07-24 23:17:19.315357] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:46.985 [2024-07-24 23:17:19.315377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:21642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.985 [2024-07-24 23:17:19.315387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.985 [2024-07-24 23:17:19.323621] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:46.985 [2024-07-24 23:17:19.323642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:5801 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.985 [2024-07-24 23:17:19.323652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.985 [2024-07-24 23:17:19.332170] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:46.985 [2024-07-24 23:17:19.332191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:11945 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.985 [2024-07-24 23:17:19.332202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.985 [2024-07-24 23:17:19.340179] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:46.985 [2024-07-24 23:17:19.340199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:23365 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.985 [2024-07-24 23:17:19.340210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.985 [2024-07-24 23:17:19.348153] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:46.985 [2024-07-24 23:17:19.348173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:11053 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.985 [2024-07-24 23:17:19.348183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.985 [2024-07-24 23:17:19.356329] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:46.985 [2024-07-24 23:17:19.356349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:22802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.985 [2024-07-24 23:17:19.356360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.985 [2024-07-24 23:17:19.364908] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:46.985 [2024-07-24 23:17:19.364928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:2099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.985 [2024-07-24 23:17:19.364938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.985 [2024-07-24 23:17:19.372759] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:46.985 [2024-07-24 23:17:19.372779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:23636 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.985 [2024-07-24 
23:17:19.372790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.985 [2024-07-24 23:17:19.380837] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:46.986 [2024-07-24 23:17:19.380857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:18136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.986 [2024-07-24 23:17:19.380871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.986 [2024-07-24 23:17:19.389327] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:46.986 [2024-07-24 23:17:19.389348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:24074 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.986 [2024-07-24 23:17:19.389358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.986 [2024-07-24 23:17:19.397252] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:46.986 [2024-07-24 23:17:19.397272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12897 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.986 [2024-07-24 23:17:19.397283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.986 [2024-07-24 23:17:19.405511] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:46.986 [2024-07-24 23:17:19.405531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:1173 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.986 [2024-07-24 23:17:19.405542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.245 [2024-07-24 23:17:19.414274] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:47.245 [2024-07-24 23:17:19.414295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:10875 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.245 [2024-07-24 23:17:19.414306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.245 [2024-07-24 23:17:19.422464] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:47.245 [2024-07-24 23:17:19.422484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9074 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.245 [2024-07-24 23:17:19.422494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.245 [2024-07-24 23:17:19.430734] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:47.245 [2024-07-24 23:17:19.430755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:19546 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.245 [2024-07-24 23:17:19.430765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.245 [2024-07-24 23:17:19.438707] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:47.245 [2024-07-24 23:17:19.438733] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.245 [2024-07-24 23:17:19.438744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.245 [2024-07-24 23:17:19.447246] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:47.245 [2024-07-24 23:17:19.447266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:5880 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.245 [2024-07-24 23:17:19.447277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.245 [2024-07-24 23:17:19.455198] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:47.245 [2024-07-24 23:17:19.455222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:20629 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.245 [2024-07-24 23:17:19.455233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.245 [2024-07-24 23:17:19.463378] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:47.245 [2024-07-24 23:17:19.463400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:20025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.245 [2024-07-24 23:17:19.463410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.245 [2024-07-24 23:17:19.472063] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:47.245 [2024-07-24 
23:17:19.472086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:628 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.245 [2024-07-24 23:17:19.472097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.245 [2024-07-24 23:17:19.480004] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:47.245 [2024-07-24 23:17:19.480025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:2337 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.245 [2024-07-24 23:17:19.480035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.245 [2024-07-24 23:17:19.488081] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:47.245 [2024-07-24 23:17:19.488103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:14386 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.245 [2024-07-24 23:17:19.488114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.245 [2024-07-24 23:17:19.496800] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:47.245 [2024-07-24 23:17:19.496822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:24326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.245 [2024-07-24 23:17:19.496832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.245 [2024-07-24 23:17:19.504870] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xc30d70) 00:31:47.245 [2024-07-24 23:17:19.504892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:5901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.245 [2024-07-24 23:17:19.504902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.245 [2024-07-24 23:17:19.512919] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:47.245 [2024-07-24 23:17:19.512942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:953 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.245 [2024-07-24 23:17:19.512954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.245 [2024-07-24 23:17:19.521767] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:47.245 [2024-07-24 23:17:19.521789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:15991 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.245 [2024-07-24 23:17:19.521799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.245 [2024-07-24 23:17:19.529875] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:47.245 [2024-07-24 23:17:19.529897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:4795 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.245 [2024-07-24 23:17:19.529907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.245 [2024-07-24 23:17:19.538088] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:47.245 [2024-07-24 23:17:19.538110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:22733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.245 [2024-07-24 23:17:19.538121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.245 [2024-07-24 23:17:19.546912] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:47.245 [2024-07-24 23:17:19.546933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:22894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.245 [2024-07-24 23:17:19.546944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.245 [2024-07-24 23:17:19.555152] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:47.245 [2024-07-24 23:17:19.555174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:17153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.245 [2024-07-24 23:17:19.555184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.245 [2024-07-24 23:17:19.563119] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:47.245 [2024-07-24 23:17:19.563141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:22593 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.245 [2024-07-24 23:17:19.563152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:31:47.245 [2024-07-24 23:17:19.571890] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:47.245 [2024-07-24 23:17:19.571911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:19275 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.245 [2024-07-24 23:17:19.571921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.245 [2024-07-24 23:17:19.579724] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:47.245 [2024-07-24 23:17:19.579745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:13028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.245 [2024-07-24 23:17:19.579755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.245 [2024-07-24 23:17:19.587926] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:47.245 [2024-07-24 23:17:19.587947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:17117 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.245 [2024-07-24 23:17:19.587957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.245 [2024-07-24 23:17:19.595929] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:47.245 [2024-07-24 23:17:19.595950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18291 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.245 [2024-07-24 23:17:19.595964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.245 [2024-07-24 23:17:19.604512] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:47.245 [2024-07-24 23:17:19.604533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:19084 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.245 [2024-07-24 23:17:19.604544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.245 [2024-07-24 23:17:19.612696] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:47.245 [2024-07-24 23:17:19.612722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:3244 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.245 [2024-07-24 23:17:19.612734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.246 [2024-07-24 23:17:19.620837] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:47.246 [2024-07-24 23:17:19.620858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:17032 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.246 [2024-07-24 23:17:19.620869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.246 [2024-07-24 23:17:19.628821] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:47.246 [2024-07-24 23:17:19.628842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:23322 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.246 [2024-07-24 23:17:19.628852] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.246 [2024-07-24 23:17:19.637234] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:47.246 [2024-07-24 23:17:19.637255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:19717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.246 [2024-07-24 23:17:19.637266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.246 [2024-07-24 23:17:19.645287] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:47.246 [2024-07-24 23:17:19.645307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:1351 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.246 [2024-07-24 23:17:19.645318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.246 [2024-07-24 23:17:19.654140] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:47.246 [2024-07-24 23:17:19.654161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:20465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.246 [2024-07-24 23:17:19.654171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.246 [2024-07-24 23:17:19.662143] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:47.246 [2024-07-24 23:17:19.662164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:750 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:31:47.246 [2024-07-24 23:17:19.662174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.246 [2024-07-24 23:17:19.670273] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:47.246 [2024-07-24 23:17:19.670298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:14749 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.246 [2024-07-24 23:17:19.670309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.504 [2024-07-24 23:17:19.678709] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:47.504 [2024-07-24 23:17:19.678736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:3879 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.504 [2024-07-24 23:17:19.678747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.504 [2024-07-24 23:17:19.687107] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:47.504 [2024-07-24 23:17:19.687127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:7284 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.504 [2024-07-24 23:17:19.687137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.504 [2024-07-24 23:17:19.695358] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:47.504 [2024-07-24 23:17:19.695379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:65 nsid:1 lba:6554 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.504 [2024-07-24 23:17:19.695390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.504 [2024-07-24 23:17:19.703207] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:47.504 [2024-07-24 23:17:19.703229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:8616 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.504 [2024-07-24 23:17:19.703240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.504 [2024-07-24 23:17:19.711553] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:47.504 [2024-07-24 23:17:19.711574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17070 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.504 [2024-07-24 23:17:19.711585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.504 [2024-07-24 23:17:19.720266] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:47.504 [2024-07-24 23:17:19.720287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:10976 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.504 [2024-07-24 23:17:19.720298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.504 [2024-07-24 23:17:19.730253] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:47.504 [2024-07-24 23:17:19.730274] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:5815 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.504 [2024-07-24 23:17:19.730285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.504 [2024-07-24 23:17:19.738658] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:47.504 [2024-07-24 23:17:19.738679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:1771 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.504 [2024-07-24 23:17:19.738690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.504 [2024-07-24 23:17:19.748609] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:47.504 [2024-07-24 23:17:19.748630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:6188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.504 [2024-07-24 23:17:19.748641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.504 [2024-07-24 23:17:19.756289] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:47.504 [2024-07-24 23:17:19.756311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:10610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.504 [2024-07-24 23:17:19.756322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.504 [2024-07-24 23:17:19.765483] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xc30d70) 00:31:47.504 [2024-07-24 23:17:19.765505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:9423 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.504 [2024-07-24 23:17:19.765516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.504 [2024-07-24 23:17:19.773750] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:47.504 [2024-07-24 23:17:19.773771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:21918 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.504 [2024-07-24 23:17:19.773782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.504 [2024-07-24 23:17:19.782435] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:47.505 [2024-07-24 23:17:19.782456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:2532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.505 [2024-07-24 23:17:19.782467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.505 [2024-07-24 23:17:19.790670] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:47.505 [2024-07-24 23:17:19.790691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.505 [2024-07-24 23:17:19.790702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.505 [2024-07-24 23:17:19.798638] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:47.505 [2024-07-24 23:17:19.798659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:3931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.505 [2024-07-24 23:17:19.798670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.505 [2024-07-24 23:17:19.806668] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:47.505 [2024-07-24 23:17:19.806689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:11877 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.505 [2024-07-24 23:17:19.806699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.505 [2024-07-24 23:17:19.815499] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:47.505 [2024-07-24 23:17:19.815523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:6662 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.505 [2024-07-24 23:17:19.815534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.505 [2024-07-24 23:17:19.823684] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:47.505 [2024-07-24 23:17:19.823706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:9515 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.505 [2024-07-24 23:17:19.823721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:31:47.505 [2024-07-24 23:17:19.831658] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:47.505 [2024-07-24 23:17:19.831679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:18628 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.505 [2024-07-24 23:17:19.831690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.505 [2024-07-24 23:17:19.840243] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:47.505 [2024-07-24 23:17:19.840263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:24660 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.505 [2024-07-24 23:17:19.840274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.505 [2024-07-24 23:17:19.848198] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:47.505 [2024-07-24 23:17:19.848218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10421 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.505 [2024-07-24 23:17:19.848229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.505 [2024-07-24 23:17:19.856334] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:47.505 [2024-07-24 23:17:19.856355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:20991 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.505 [2024-07-24 23:17:19.856366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.505 [2024-07-24 23:17:19.864403] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:47.505 [2024-07-24 23:17:19.864424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19557 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.505 [2024-07-24 23:17:19.864434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.505 [2024-07-24 23:17:19.872908] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:47.505 [2024-07-24 23:17:19.872929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:10995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.505 [2024-07-24 23:17:19.872939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.505 [2024-07-24 23:17:19.881137] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:47.505 [2024-07-24 23:17:19.881158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:514 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.505 [2024-07-24 23:17:19.881169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.505 [2024-07-24 23:17:19.889141] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:47.505 [2024-07-24 23:17:19.889162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:20046 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.505 [2024-07-24 23:17:19.889173] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.505 [2024-07-24 23:17:19.898254] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:47.505 [2024-07-24 23:17:19.898275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9351 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.505 [2024-07-24 23:17:19.898286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.505 [2024-07-24 23:17:19.906269] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:47.505 [2024-07-24 23:17:19.906290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:24630 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.505 [2024-07-24 23:17:19.906301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.505 [2024-07-24 23:17:19.914363] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:47.505 [2024-07-24 23:17:19.914384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:25582 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.505 [2024-07-24 23:17:19.914395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.505 [2024-07-24 23:17:19.922336] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:47.505 [2024-07-24 23:17:19.922357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:11143 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:31:47.505 [2024-07-24 23:17:19.922368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.505 [2024-07-24 23:17:19.930422] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:47.505 [2024-07-24 23:17:19.930444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:6242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.505 [2024-07-24 23:17:19.930454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.772 [2024-07-24 23:17:19.939362] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:47.772 [2024-07-24 23:17:19.939384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.772 [2024-07-24 23:17:19.939394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.772 [2024-07-24 23:17:19.947444] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:47.772 [2024-07-24 23:17:19.947465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13419 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.772 [2024-07-24 23:17:19.947475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.772 [2024-07-24 23:17:19.955564] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:47.772 [2024-07-24 23:17:19.955585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:116 nsid:1 lba:25222 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.772 [2024-07-24 23:17:19.955599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.772 [2024-07-24 23:17:19.964245] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:47.772 [2024-07-24 23:17:19.964266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.772 [2024-07-24 23:17:19.964276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.772 [2024-07-24 23:17:19.972253] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:47.772 [2024-07-24 23:17:19.972274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:22192 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.772 [2024-07-24 23:17:19.972284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.772 [2024-07-24 23:17:19.980211] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:47.772 [2024-07-24 23:17:19.980232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:9050 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.772 [2024-07-24 23:17:19.980243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.772 [2024-07-24 23:17:19.988944] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:47.772 [2024-07-24 23:17:19.988966] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.772 [2024-07-24 23:17:19.988976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.772 [2024-07-24 23:17:19.996879] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:47.772 [2024-07-24 23:17:19.996900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:18653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.772 [2024-07-24 23:17:19.996911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.772 [2024-07-24 23:17:20.005202] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:47.772 [2024-07-24 23:17:20.005223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:8706 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.772 [2024-07-24 23:17:20.005235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.772 [2024-07-24 23:17:20.013313] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:47.772 [2024-07-24 23:17:20.013335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:24717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.772 [2024-07-24 23:17:20.013346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.772 [2024-07-24 23:17:20.022127] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xc30d70) 00:31:47.772 [2024-07-24 23:17:20.022149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:17022 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.772 [2024-07-24 23:17:20.022160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.772 [2024-07-24 23:17:20.031879] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:47.772 [2024-07-24 23:17:20.031908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:16031 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.772 [2024-07-24 23:17:20.031920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.772 [2024-07-24 23:17:20.040822] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:47.772 [2024-07-24 23:17:20.040845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:24622 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.772 [2024-07-24 23:17:20.040856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.773 [2024-07-24 23:17:20.048860] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:47.773 [2024-07-24 23:17:20.048882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:4550 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.773 [2024-07-24 23:17:20.048894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.773 [2024-07-24 23:17:20.057257] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:47.773 [2024-07-24 23:17:20.057280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:6244 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.773 [2024-07-24 23:17:20.057292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.773 [2024-07-24 23:17:20.066223] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:47.773 [2024-07-24 23:17:20.066246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:8802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.773 [2024-07-24 23:17:20.066257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.773 [2024-07-24 23:17:20.074610] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:47.773 [2024-07-24 23:17:20.074633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:5466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.773 [2024-07-24 23:17:20.074644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.773 [2024-07-24 23:17:20.082848] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:47.773 [2024-07-24 23:17:20.082870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:15541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.773 [2024-07-24 23:17:20.082881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:31:47.773 [2024-07-24 23:17:20.093484] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:47.773 [2024-07-24 23:17:20.093508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:9701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.773 [2024-07-24 23:17:20.093519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.773 [2024-07-24 23:17:20.101559] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:47.773 [2024-07-24 23:17:20.101582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:3598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.773 [2024-07-24 23:17:20.101593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.773 [2024-07-24 23:17:20.111687] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:47.773 [2024-07-24 23:17:20.111709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:19484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.773 [2024-07-24 23:17:20.111725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.773 [2024-07-24 23:17:20.119274] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:47.773 [2024-07-24 23:17:20.119296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:20733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.773 [2024-07-24 23:17:20.119307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.773 [2024-07-24 23:17:20.128839] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:47.773 [2024-07-24 23:17:20.128862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:13072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.773 [2024-07-24 23:17:20.128872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.773 [2024-07-24 23:17:20.138014] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:47.773 [2024-07-24 23:17:20.138036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:23679 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.773 [2024-07-24 23:17:20.138047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.773 [2024-07-24 23:17:20.146262] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:47.773 [2024-07-24 23:17:20.146283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:888 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.773 [2024-07-24 23:17:20.146294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.773 [2024-07-24 23:17:20.158063] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:47.773 [2024-07-24 23:17:20.158085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:12847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.773 [2024-07-24 23:17:20.158095] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.773 [2024-07-24 23:17:20.168908] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:47.773 [2024-07-24 23:17:20.168929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:1892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.773 [2024-07-24 23:17:20.168940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.773 [2024-07-24 23:17:20.176880] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:47.773 [2024-07-24 23:17:20.176901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:10107 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.773 [2024-07-24 23:17:20.176912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.773 [2024-07-24 23:17:20.187544] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:47.773 [2024-07-24 23:17:20.187565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:5349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.773 [2024-07-24 23:17:20.187579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.773 [2024-07-24 23:17:20.195621] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:47.773 [2024-07-24 23:17:20.195643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:21964 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:31:47.773 [2024-07-24 23:17:20.195654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.072 [2024-07-24 23:17:20.203996] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:48.072 [2024-07-24 23:17:20.204017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:14284 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.072 [2024-07-24 23:17:20.204028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.072 [2024-07-24 23:17:20.212087] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:48.072 [2024-07-24 23:17:20.212108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:6589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.072 [2024-07-24 23:17:20.212119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.072 [2024-07-24 23:17:20.221066] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:48.072 [2024-07-24 23:17:20.221087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:23113 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.072 [2024-07-24 23:17:20.221098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.072 [2024-07-24 23:17:20.229709] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:48.072 [2024-07-24 23:17:20.229735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:41 nsid:1 lba:9542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.072 [2024-07-24 23:17:20.229746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.072 [2024-07-24 23:17:20.236963] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:48.072 [2024-07-24 23:17:20.236984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:20647 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.072 [2024-07-24 23:17:20.236995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.072 [2024-07-24 23:17:20.246855] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:48.072 [2024-07-24 23:17:20.246875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:8431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.072 [2024-07-24 23:17:20.246886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.072 [2024-07-24 23:17:20.254445] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:48.072 [2024-07-24 23:17:20.254466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:25591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.072 [2024-07-24 23:17:20.254476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.072 [2024-07-24 23:17:20.264173] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:48.072 [2024-07-24 23:17:20.264201] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23549 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.072 [2024-07-24 23:17:20.264211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.072 [2024-07-24 23:17:20.271615] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:48.072 [2024-07-24 23:17:20.271636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:24514 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.072 [2024-07-24 23:17:20.271647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.072 [2024-07-24 23:17:20.282089] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:48.072 [2024-07-24 23:17:20.282110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:7081 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.072 [2024-07-24 23:17:20.282122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.072 [2024-07-24 23:17:20.289565] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:48.072 [2024-07-24 23:17:20.289586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:11959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.072 [2024-07-24 23:17:20.289597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.072 [2024-07-24 23:17:20.300189] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xc30d70) 00:31:48.072 [2024-07-24 23:17:20.300210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:7494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.072 [2024-07-24 23:17:20.300221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.072 [2024-07-24 23:17:20.309213] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:48.072 [2024-07-24 23:17:20.309233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:23131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.072 [2024-07-24 23:17:20.309243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.072 [2024-07-24 23:17:20.318089] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:48.072 [2024-07-24 23:17:20.318109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:6007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.072 [2024-07-24 23:17:20.318120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.072 [2024-07-24 23:17:20.325970] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70) 00:31:48.072 [2024-07-24 23:17:20.325991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:17853 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.072 [2024-07-24 23:17:20.326002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.072 [2024-07-24 23:17:20.334885] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70)
00:31:48.072 [2024-07-24 23:17:20.334905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:20063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:48.072 [2024-07-24 23:17:20.334916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:48.072 [2024-07-24 23:17:20.343228] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70)
00:31:48.072 [2024-07-24 23:17:20.343248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:48.072 [2024-07-24 23:17:20.343259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:48.072 [2024-07-24 23:17:20.351405] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70)
00:31:48.072 [2024-07-24 23:17:20.351426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2768 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:48.072 [2024-07-24 23:17:20.351437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:48.072 [2024-07-24 23:17:20.359612] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70)
00:31:48.072 [2024-07-24 23:17:20.359633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:174 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:48.073 [2024-07-24 23:17:20.359644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:48.073 [2024-07-24 23:17:20.368615] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70)
00:31:48.073 [2024-07-24 23:17:20.368636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:17531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:48.073 [2024-07-24 23:17:20.368646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:48.073 [2024-07-24 23:17:20.376804] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70)
00:31:48.073 [2024-07-24 23:17:20.376824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:13357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:48.073 [2024-07-24 23:17:20.376835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:48.073 [2024-07-24 23:17:20.385025] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70)
00:31:48.073 [2024-07-24 23:17:20.385046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:24352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:48.073 [2024-07-24 23:17:20.385057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:48.073 [2024-07-24 23:17:20.393994] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70)
00:31:48.073 [2024-07-24 23:17:20.394015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:14754 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:48.073 [2024-07-24 23:17:20.394026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:48.073 [2024-07-24 23:17:20.402094] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70)
00:31:48.073 [2024-07-24 23:17:20.402114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3772 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:48.073 [2024-07-24 23:17:20.402125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:48.073 [2024-07-24 23:17:20.410256] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70)
00:31:48.073 [2024-07-24 23:17:20.410277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:48.073 [2024-07-24 23:17:20.410290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:48.073 [2024-07-24 23:17:20.419271] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70)
00:31:48.073 [2024-07-24 23:17:20.419291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:10942 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:48.073 [2024-07-24 23:17:20.419302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:48.073 [2024-07-24 23:17:20.427369] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70)
00:31:48.073 [2024-07-24 23:17:20.427389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:13085 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:48.073 [2024-07-24 23:17:20.427399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:48.073 [2024-07-24 23:17:20.435593] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70)
00:31:48.073 [2024-07-24 23:17:20.435614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:16434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:48.073 [2024-07-24 23:17:20.435625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:48.073 [2024-07-24 23:17:20.444542] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70)
00:31:48.073 [2024-07-24 23:17:20.444562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:199 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:48.073 [2024-07-24 23:17:20.444573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:48.073 [2024-07-24 23:17:20.452855] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70)
00:31:48.073 [2024-07-24 23:17:20.452876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:48.073 [2024-07-24 23:17:20.452887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:48.073 [2024-07-24 23:17:20.461301] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70)
00:31:48.073 [2024-07-24 23:17:20.461322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:2580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:48.073 [2024-07-24 23:17:20.461332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:48.073 [2024-07-24 23:17:20.469629] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70)
00:31:48.073 [2024-07-24 23:17:20.469650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:20960 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:48.073 [2024-07-24 23:17:20.469660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:48.073 [2024-07-24 23:17:20.478354] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70)
00:31:48.073 [2024-07-24 23:17:20.478375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:16319 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:48.073 [2024-07-24 23:17:20.478385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:48.073 [2024-07-24 23:17:20.486510] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70)
00:31:48.073 [2024-07-24 23:17:20.486530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:48.073 [2024-07-24 23:17:20.486541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:48.073 [2024-07-24 23:17:20.494872] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70)
00:31:48.073 [2024-07-24 23:17:20.494901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:23533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:48.073 [2024-07-24 23:17:20.494912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:48.332 [2024-07-24 23:17:20.503623] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70)
00:31:48.332 [2024-07-24 23:17:20.503644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:17591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:48.332 [2024-07-24 23:17:20.503654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:48.332 [2024-07-24 23:17:20.511894] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70)
00:31:48.332 [2024-07-24 23:17:20.511915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:25047 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:48.332 [2024-07-24 23:17:20.511926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:48.332 [2024-07-24 23:17:20.520103] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70)
00:31:48.332 [2024-07-24 23:17:20.520125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:19715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:48.332 [2024-07-24 23:17:20.520135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:48.332 [2024-07-24 23:17:20.528893] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70)
00:31:48.332 [2024-07-24 23:17:20.528913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:11390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:48.332 [2024-07-24 23:17:20.528924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:48.332 [2024-07-24 23:17:20.537131] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70)
00:31:48.332 [2024-07-24 23:17:20.537152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:12184 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:48.332 [2024-07-24 23:17:20.537162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:48.332 [2024-07-24 23:17:20.545444] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70)
00:31:48.332 [2024-07-24 23:17:20.545465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:48.332 [2024-07-24 23:17:20.545476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:48.332 [2024-07-24 23:17:20.554048] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70)
00:31:48.332 [2024-07-24 23:17:20.554069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23068 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:48.332 [2024-07-24 23:17:20.554083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:48.332 [2024-07-24 23:17:20.562089] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70)
00:31:48.332 [2024-07-24 23:17:20.562109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:15371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:48.332 [2024-07-24 23:17:20.562119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:48.332 [2024-07-24 23:17:20.570185] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70)
00:31:48.332 [2024-07-24 23:17:20.570205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:20822 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:48.332 [2024-07-24 23:17:20.570215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:48.332 [2024-07-24 23:17:20.578562] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70)
00:31:48.332 [2024-07-24 23:17:20.578583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:6 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:48.332 [2024-07-24 23:17:20.578593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:48.332 [2024-07-24 23:17:20.586643] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70)
00:31:48.332 [2024-07-24 23:17:20.586665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:21078 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:48.332 [2024-07-24 23:17:20.586676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:48.332 [2024-07-24 23:17:20.594594] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70)
00:31:48.332 [2024-07-24 23:17:20.594615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:48.332 [2024-07-24 23:17:20.594625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:48.332 [2024-07-24 23:17:20.603383] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70)
00:31:48.332 [2024-07-24 23:17:20.603404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:16048 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:48.332 [2024-07-24 23:17:20.603414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:48.332 [2024-07-24 23:17:20.611361] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70)
00:31:48.332 [2024-07-24 23:17:20.611381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:5899 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:48.332 [2024-07-24 23:17:20.611391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:48.332 [2024-07-24 23:17:20.619535] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70)
00:31:48.332 [2024-07-24 23:17:20.619555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21413 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:48.332 [2024-07-24 23:17:20.619566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:48.332 [2024-07-24 23:17:20.627954] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70)
00:31:48.332 [2024-07-24 23:17:20.627978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:19299 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:48.332 [2024-07-24 23:17:20.627988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:48.332 [2024-07-24 23:17:20.636064] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70)
00:31:48.332 [2024-07-24 23:17:20.636085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:13401 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:48.332 [2024-07-24 23:17:20.636095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:48.332 [2024-07-24 23:17:20.644216] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70)
00:31:48.332 [2024-07-24 23:17:20.644237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:772 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:48.333 [2024-07-24 23:17:20.644247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:48.333 [2024-07-24 23:17:20.652755] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70)
00:31:48.333 [2024-07-24 23:17:20.652774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:15095 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:48.333 [2024-07-24 23:17:20.652785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:48.333 [2024-07-24 23:17:20.660968] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70)
00:31:48.333 [2024-07-24 23:17:20.660988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:20456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:48.333 [2024-07-24 23:17:20.660998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:48.333 [2024-07-24 23:17:20.668921] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70)
00:31:48.333 [2024-07-24 23:17:20.668941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:17650 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:48.333 [2024-07-24 23:17:20.668951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:48.333 [2024-07-24 23:17:20.677319] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70)
00:31:48.333 [2024-07-24 23:17:20.677339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:2816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:48.333 [2024-07-24 23:17:20.677349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:48.333 [2024-07-24 23:17:20.685471] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70)
00:31:48.333 [2024-07-24 23:17:20.685492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:23077 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:48.333 [2024-07-24 23:17:20.685502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:48.333 [2024-07-24 23:17:20.693455] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70)
00:31:48.333 [2024-07-24 23:17:20.693475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:225 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:48.333 [2024-07-24 23:17:20.693486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:48.333 [2024-07-24 23:17:20.701695] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70)
00:31:48.333 [2024-07-24 23:17:20.701720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:22286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:48.333 [2024-07-24 23:17:20.701731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:48.333 [2024-07-24 23:17:20.710560] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70)
00:31:48.333 [2024-07-24 23:17:20.710580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:3450 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:48.333 [2024-07-24 23:17:20.710591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:48.333 [2024-07-24 23:17:20.718580] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70)
00:31:48.333 [2024-07-24 23:17:20.718601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:14839 len:1 SGL TRANSPORT DATA BLOCK
TRANSPORT 0x0
00:31:48.333 [2024-07-24 23:17:20.718612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:48.333 [2024-07-24 23:17:20.726508] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc30d70)
00:31:48.333 [2024-07-24 23:17:20.726528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24258 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:48.333 [2024-07-24 23:17:20.726539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:48.333
00:31:48.333 Latency(us)
00:31:48.333 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:48.333 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:31:48.333 nvme0n1 : 2.00 30306.75 118.39 0.00 0.00 4219.08 2031.62 12215.91
00:31:48.333 ===================================================================================================================
00:31:48.333 Total : 30306.75 118.39 0.00 0.00 4219.08 2031.62 12215.91
00:31:48.333 0
00:31:48.333 23:17:20 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:31:48.333 23:17:20 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:31:48.333 23:17:20 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:31:48.333 23:17:20 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:31:48.333 | .driver_specific
00:31:48.333 | .nvme_error
00:31:48.333 | .status_code
00:31:48.333 | .command_transient_transport_error'
00:31:48.591 23:17:20 -- host/digest.sh@71 -- # (( 237 > 0 ))
00:31:48.591 23:17:20 -- host/digest.sh@73 -- # killprocess 3399744
00:31:48.591 23:17:20 -- common/autotest_common.sh@926 -- # '[' -z 3399744 ']'
00:31:48.591 23:17:20 -- common/autotest_common.sh@930 -- # kill -0 3399744
00:31:48.591 23:17:20 -- common/autotest_common.sh@931 -- # uname
00:31:48.591 23:17:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:31:48.591 23:17:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3399744
00:31:48.591 23:17:20 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:31:48.591 23:17:20 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:31:48.591 23:17:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3399744'
00:31:48.591 killing process with pid 3399744
00:31:48.591 23:17:20 -- common/autotest_common.sh@945 -- # kill 3399744
00:31:48.591 Received shutdown signal, test time was about 2.000000 seconds
00:31:48.591
00:31:48.591 Latency(us)
00:31:48.591 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:48.591 ===================================================================================================================
00:31:48.591 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:31:48.591 23:17:20 -- common/autotest_common.sh@950 -- # wait 3399744
00:31:48.848 23:17:21 -- host/digest.sh@108 -- # run_bperf_err randread 131072 16
00:31:48.848 23:17:21 -- host/digest.sh@54 -- # local rw bs qd
00:31:48.848 23:17:21 -- host/digest.sh@56 -- # rw=randread
00:31:48.848 23:17:21 -- host/digest.sh@56 -- # bs=131072
00:31:48.848 23:17:21 -- host/digest.sh@56 -- # qd=16
00:31:48.848 23:17:21 -- host/digest.sh@58 -- # bperfpid=3400322
00:31:48.848 23:17:21 -- host/digest.sh@60 -- # waitforlisten 3400322 /var/tmp/bperf.sock
00:31:48.848 23:17:21 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:31:48.848 23:17:21 -- common/autotest_common.sh@819 -- # '[' -z 3400322 ']'
00:31:48.848 23:17:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock
00:31:48.848 23:17:21 -- common/autotest_common.sh@824 -- # local max_retries=100
00:31:48.848 23:17:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:31:48.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:31:48.848 23:17:21 -- common/autotest_common.sh@828 -- # xtrace_disable
00:31:48.848 23:17:21 -- common/autotest_common.sh@10 -- # set +x
00:31:48.848 [2024-07-24 23:17:21.201764] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:31:48.848 [2024-07-24 23:17:21.201821] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3400322 ]
00:31:48.848 I/O size of 131072 is greater than zero copy threshold (65536).
00:31:48.848 Zero copy mechanism will not be used.
00:31:48.848 EAL: No free 2048 kB hugepages reported on node 1
00:31:48.848 [2024-07-24 23:17:21.272003] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:49.106 [2024-07-24 23:17:21.308538] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:31:49.670 23:17:22 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:31:49.670 23:17:22 -- common/autotest_common.sh@852 -- # return 0
00:31:49.670 23:17:22 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:31:49.670 23:17:22 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:31:49.928 23:17:22 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:31:49.928 23:17:22 -- common/autotest_common.sh@551 -- # xtrace_disable
00:31:49.928 23:17:22 -- common/autotest_common.sh@10 -- # set +x
00:31:49.928 23:17:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:31:49.928 23:17:22 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:31:49.928 23:17:22 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:31:50.186 nvme0n1
00:31:50.187 23:17:22 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:31:50.187 23:17:22 -- common/autotest_common.sh@551 -- # xtrace_disable
00:31:50.187 23:17:22 -- common/autotest_common.sh@10 -- # set +x
00:31:50.187 23:17:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:31:50.187 23:17:22 -- host/digest.sh@69 -- # bperf_py perform_tests
00:31:50.187 23:17:22 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:31:50.187 I/O size of 131072 is greater than zero copy threshold (65536).
00:31:50.187 Zero copy mechanism will not be used.
00:31:50.187 Running I/O for 2 seconds...
00:31:50.187 [2024-07-24 23:17:22.584608] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0)
00:31:50.187 [2024-07-24 23:17:22.584642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:50.187 [2024-07-24 23:17:22.584659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:31:50.187 [2024-07-24 23:17:22.596597] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0)
00:31:50.187 [2024-07-24 23:17:22.596622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:50.187 [2024-07-24 23:17:22.596633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:31:50.187 [2024-07-24 23:17:22.606692] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0)
00:31:50.187 [2024-07-24 23:17:22.606720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:50.187 [2024-07-24 23:17:22.606731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:31:50.187 [2024-07-24 23:17:22.616026] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0)
00:31:50.187 [2024-07-24 23:17:22.616047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:50.187 [2024-07-24 23:17:22.616058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:50.445 [2024-07-24 23:17:22.625405] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0)
00:31:50.445 [2024-07-24 23:17:22.625429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:50.445 [2024-07-24 23:17:22.625440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:31:50.445 [2024-07-24 23:17:22.634153] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0)
00:31:50.445 [2024-07-24 23:17:22.634177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:50.445 [2024-07-24 23:17:22.634188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:31:50.445 [2024-07-24 23:17:22.643408] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0)
00:31:50.445 [2024-07-24 23:17:22.643431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:50.445 [2024-07-24 23:17:22.643442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:31:50.445 [2024-07-24 23:17:22.653976] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0)
00:31:50.445 [2024-07-24 23:17:22.653998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:50.446 [2024-07-24 23:17:22.654009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:50.446 [2024-07-24 23:17:22.663334] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0)
00:31:50.446 [2024-07-24 23:17:22.663357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:50.446 [2024-07-24 23:17:22.663368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:31:50.446 [2024-07-24 23:17:22.672508] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0)
00:31:50.446 [2024-07-24 23:17:22.672534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:50.446 [2024-07-24 23:17:22.672545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:31:50.446 [2024-07-24 23:17:22.682544] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0)
00:31:50.446 [2024-07-24 23:17:22.682567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:50.446 [2024-07-24 23:17:22.682577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:31:50.446 [2024-07-24 23:17:22.691164] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0)
00:31:50.446 [2024-07-24 23:17:22.691186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA
BLOCK TRANSPORT 0x0 00:31:50.446 [2024-07-24 23:17:22.691197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:50.446 [2024-07-24 23:17:22.702308] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:50.446 [2024-07-24 23:17:22.702331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.446 [2024-07-24 23:17:22.702342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:50.446 [2024-07-24 23:17:22.716185] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:50.446 [2024-07-24 23:17:22.716208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.446 [2024-07-24 23:17:22.716218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:50.446 [2024-07-24 23:17:22.726368] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:50.446 [2024-07-24 23:17:22.726390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.446 [2024-07-24 23:17:22.726401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:50.446 [2024-07-24 23:17:22.736241] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:50.446 [2024-07-24 23:17:22.736264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.446 [2024-07-24 23:17:22.736276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:50.446 [2024-07-24 23:17:22.744578] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:50.446 [2024-07-24 23:17:22.744600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.446 [2024-07-24 23:17:22.744610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:50.446 [2024-07-24 23:17:22.752531] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:50.446 [2024-07-24 23:17:22.752554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.446 [2024-07-24 23:17:22.752568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:50.446 [2024-07-24 23:17:22.759803] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:50.446 [2024-07-24 23:17:22.759836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.446 [2024-07-24 23:17:22.759847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:50.446 [2024-07-24 23:17:22.766424] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:50.446 [2024-07-24 
23:17:22.766447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.446 [2024-07-24 23:17:22.766457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:50.446 [2024-07-24 23:17:22.772837] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:50.446 [2024-07-24 23:17:22.772858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.446 [2024-07-24 23:17:22.772868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:50.446 [2024-07-24 23:17:22.779243] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:50.446 [2024-07-24 23:17:22.779264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.446 [2024-07-24 23:17:22.779274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:50.446 [2024-07-24 23:17:22.785763] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:50.446 [2024-07-24 23:17:22.785784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.446 [2024-07-24 23:17:22.785794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:50.446 [2024-07-24 23:17:22.792056] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x20308c0) 00:31:50.446 [2024-07-24 23:17:22.792077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.446 [2024-07-24 23:17:22.792088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:50.446 [2024-07-24 23:17:22.798811] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:50.446 [2024-07-24 23:17:22.798832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.446 [2024-07-24 23:17:22.798842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:50.446 [2024-07-24 23:17:22.806414] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:50.446 [2024-07-24 23:17:22.806436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.446 [2024-07-24 23:17:22.806447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:50.446 [2024-07-24 23:17:22.814255] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:50.446 [2024-07-24 23:17:22.814281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.446 [2024-07-24 23:17:22.814291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:50.446 [2024-07-24 23:17:22.821450] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:50.446 [2024-07-24 23:17:22.821472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.446 [2024-07-24 23:17:22.821483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:50.446 [2024-07-24 23:17:22.827996] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:50.446 [2024-07-24 23:17:22.828017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.446 [2024-07-24 23:17:22.828028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:50.446 [2024-07-24 23:17:22.834258] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:50.446 [2024-07-24 23:17:22.834280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.446 [2024-07-24 23:17:22.834290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:50.446 [2024-07-24 23:17:22.840674] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:50.446 [2024-07-24 23:17:22.840696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.446 [2024-07-24 23:17:22.840706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0061 p:0 m:0 dnr:0 00:31:50.446 [2024-07-24 23:17:22.846727] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:50.446 [2024-07-24 23:17:22.846749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.446 [2024-07-24 23:17:22.846760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:50.446 [2024-07-24 23:17:22.852418] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:50.446 [2024-07-24 23:17:22.852441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.446 [2024-07-24 23:17:22.852451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:50.446 [2024-07-24 23:17:22.858461] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:50.446 [2024-07-24 23:17:22.858484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.446 [2024-07-24 23:17:22.858495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:50.446 [2024-07-24 23:17:22.865108] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:50.447 [2024-07-24 23:17:22.865131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.447 [2024-07-24 23:17:22.865141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:50.447 [2024-07-24 23:17:22.871653] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:50.447 [2024-07-24 23:17:22.871676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.447 [2024-07-24 23:17:22.871686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:50.705 [2024-07-24 23:17:22.877649] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:50.705 [2024-07-24 23:17:22.877672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.705 [2024-07-24 23:17:22.877682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:50.705 [2024-07-24 23:17:22.885064] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:50.705 [2024-07-24 23:17:22.885087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.705 [2024-07-24 23:17:22.885098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:50.705 [2024-07-24 23:17:22.892754] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:50.705 [2024-07-24 23:17:22.892777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.705 [2024-07-24 
23:17:22.892787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:50.705 [2024-07-24 23:17:22.899774] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:50.705 [2024-07-24 23:17:22.899796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.705 [2024-07-24 23:17:22.899807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:50.705 [2024-07-24 23:17:22.907282] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:50.705 [2024-07-24 23:17:22.907306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.705 [2024-07-24 23:17:22.907317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:50.705 [2024-07-24 23:17:22.917739] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:50.705 [2024-07-24 23:17:22.917762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.705 [2024-07-24 23:17:22.917773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:50.705 [2024-07-24 23:17:22.925968] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:50.705 [2024-07-24 23:17:22.925992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15968 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.705 [2024-07-24 23:17:22.926002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:50.705 [2024-07-24 23:17:22.935322] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:50.705 [2024-07-24 23:17:22.935345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.705 [2024-07-24 23:17:22.935362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:50.705 [2024-07-24 23:17:22.944357] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:50.705 [2024-07-24 23:17:22.944380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.705 [2024-07-24 23:17:22.944391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:50.705 [2024-07-24 23:17:22.952982] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:50.705 [2024-07-24 23:17:22.953006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.706 [2024-07-24 23:17:22.953017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:50.706 [2024-07-24 23:17:22.961107] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:50.706 [2024-07-24 23:17:22.961132] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.706 [2024-07-24 23:17:22.961143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:50.706 [2024-07-24 23:17:22.970042] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:50.706 [2024-07-24 23:17:22.970066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.706 [2024-07-24 23:17:22.970077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:50.706 [2024-07-24 23:17:22.978243] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:50.706 [2024-07-24 23:17:22.978267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.706 [2024-07-24 23:17:22.978278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:50.706 [2024-07-24 23:17:22.987386] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:50.706 [2024-07-24 23:17:22.987411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.706 [2024-07-24 23:17:22.987422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:50.706 [2024-07-24 23:17:22.997852] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 
00:31:50.706 [2024-07-24 23:17:22.997876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.706 [2024-07-24 23:17:22.997887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:50.706 [2024-07-24 23:17:23.007738] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:50.706 [2024-07-24 23:17:23.007761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.706 [2024-07-24 23:17:23.007772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:50.706 [2024-07-24 23:17:23.017574] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:50.706 [2024-07-24 23:17:23.017601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.706 [2024-07-24 23:17:23.017612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:50.706 [2024-07-24 23:17:23.027690] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:50.706 [2024-07-24 23:17:23.027719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.706 [2024-07-24 23:17:23.027730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:50.706 [2024-07-24 23:17:23.038753] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:50.706 [2024-07-24 23:17:23.038776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.706 [2024-07-24 23:17:23.038787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:50.706 [2024-07-24 23:17:23.049424] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:50.706 [2024-07-24 23:17:23.049448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.706 [2024-07-24 23:17:23.049459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:50.706 [2024-07-24 23:17:23.060142] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:50.706 [2024-07-24 23:17:23.060167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.706 [2024-07-24 23:17:23.060177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:50.706 [2024-07-24 23:17:23.069397] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:50.706 [2024-07-24 23:17:23.069421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.706 [2024-07-24 23:17:23.069432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:31:50.706 [2024-07-24 23:17:23.079470] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:50.706 [2024-07-24 23:17:23.079493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.706 [2024-07-24 23:17:23.079504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:50.706 [2024-07-24 23:17:23.089469] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:50.706 [2024-07-24 23:17:23.089493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.706 [2024-07-24 23:17:23.089504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:50.706 [2024-07-24 23:17:23.099501] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:50.706 [2024-07-24 23:17:23.099525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.706 [2024-07-24 23:17:23.099535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:50.706 [2024-07-24 23:17:23.108870] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:50.706 [2024-07-24 23:17:23.108893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.706 [2024-07-24 23:17:23.108904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:50.706 [2024-07-24 23:17:23.118624] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:50.706 [2024-07-24 23:17:23.118648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.706 [2024-07-24 23:17:23.118660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:50.706 [2024-07-24 23:17:23.128170] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:50.706 [2024-07-24 23:17:23.128194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.706 [2024-07-24 23:17:23.128204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:50.965 [2024-07-24 23:17:23.137827] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:50.965 [2024-07-24 23:17:23.137851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.965 [2024-07-24 23:17:23.137862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:50.965 [2024-07-24 23:17:23.146771] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:50.965 [2024-07-24 23:17:23.146794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.965 [2024-07-24 23:17:23.146805] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:50.965 [2024-07-24 23:17:23.156370] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:50.965 [2024-07-24 23:17:23.156394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.965 [2024-07-24 23:17:23.156404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:50.965 [2024-07-24 23:17:23.166658] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:50.965 [2024-07-24 23:17:23.166681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.965 [2024-07-24 23:17:23.166691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:50.965 [2024-07-24 23:17:23.176366] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:50.965 [2024-07-24 23:17:23.176389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.965 [2024-07-24 23:17:23.176400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:50.965 [2024-07-24 23:17:23.185997] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:50.965 [2024-07-24 23:17:23.186020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0
00:31:50.965 [2024-07-24 23:17:23.186035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:31:50.965 [2024-07-24 23:17:23.194753] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0)
00:31:50.965 [2024-07-24 23:17:23.194777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:50.965 [2024-07-24 23:17:23.194788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:31:50.965 [2024-07-24 23:17:23.202868] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0)
00:31:50.965 [2024-07-24 23:17:23.202892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:50.965 [2024-07-24 23:17:23.202903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:50.965 [2024-07-24 23:17:23.211088] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0)
00:31:50.966 [2024-07-24 23:17:23.211111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:50.966 [2024-07-24 23:17:23.211122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:31:50.966 [2024-07-24 23:17:23.220281] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0)
00:31:50.966 [2024-07-24 23:17:23.220306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:50.966 [2024-07-24 23:17:23.220317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:31:50.966 [2024-07-24 23:17:23.230383] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0)
00:31:50.966 [2024-07-24 23:17:23.230407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:50.966 [2024-07-24 23:17:23.230418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:31:50.966 [2024-07-24 23:17:23.240387] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0)
00:31:50.966 [2024-07-24 23:17:23.240412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:50.966 [2024-07-24 23:17:23.240423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:50.966 [2024-07-24 23:17:23.251857] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0)
00:31:50.966 [2024-07-24 23:17:23.251881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:50.966 [2024-07-24 23:17:23.251892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:31:50.966 [2024-07-24 23:17:23.263987] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0)
00:31:50.966 [2024-07-24 23:17:23.264011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:50.966 [2024-07-24 23:17:23.264021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:31:50.966 [2024-07-24 23:17:23.273610] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0)
00:31:50.966 [2024-07-24 23:17:23.273633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:50.966 [2024-07-24 23:17:23.273644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:31:50.966 [2024-07-24 23:17:23.282363] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0)
00:31:50.966 [2024-07-24 23:17:23.282387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:50.966 [2024-07-24 23:17:23.282397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:50.966 [2024-07-24 23:17:23.290849] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0)
00:31:50.966 [2024-07-24 23:17:23.290872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:50.966 [2024-07-24 23:17:23.290883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:31:50.966 [2024-07-24 23:17:23.298761] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0)
00:31:50.966 [2024-07-24 23:17:23.298783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:50.966 [2024-07-24 23:17:23.298794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:31:50.966 [2024-07-24 23:17:23.306056] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0)
00:31:50.966 [2024-07-24 23:17:23.306078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:50.966 [2024-07-24 23:17:23.306089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:31:50.966 [2024-07-24 23:17:23.312758] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0)
00:31:50.966 [2024-07-24 23:17:23.312780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:50.966 [2024-07-24 23:17:23.312790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:50.966 [2024-07-24 23:17:23.318920] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0)
00:31:50.966 [2024-07-24 23:17:23.318942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:50.966 [2024-07-24 23:17:23.318953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:31:50.966 [2024-07-24 23:17:23.324894] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0)
00:31:50.966 [2024-07-24 23:17:23.324916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:50.966 [2024-07-24 23:17:23.324927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:31:50.966 [2024-07-24 23:17:23.330012] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0)
00:31:50.966 [2024-07-24 23:17:23.330034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:50.966 [2024-07-24 23:17:23.330048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:31:50.966 [2024-07-24 23:17:23.335422] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0)
00:31:50.966 [2024-07-24 23:17:23.335445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:50.966 [2024-07-24 23:17:23.335455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:50.966 [2024-07-24 23:17:23.341467] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0)
00:31:50.966 [2024-07-24 23:17:23.341488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:50.966 [2024-07-24 23:17:23.341498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:31:50.966 [2024-07-24 23:17:23.347660] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0)
00:31:50.966 [2024-07-24 23:17:23.347682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:50.966 [2024-07-24 23:17:23.347692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:31:50.966 [2024-07-24 23:17:23.353895] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0)
00:31:50.966 [2024-07-24 23:17:23.353917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:50.966 [2024-07-24 23:17:23.353927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:31:50.966 [2024-07-24 23:17:23.359946] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0)
00:31:50.966 [2024-07-24 23:17:23.359969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:50.966 [2024-07-24 23:17:23.359979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:50.966 [2024-07-24 23:17:23.366138] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0)
00:31:50.966 [2024-07-24 23:17:23.366160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:50.966 [2024-07-24 23:17:23.366170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:31:50.966 [2024-07-24 23:17:23.372226] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0)
00:31:50.966 [2024-07-24 23:17:23.372248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:50.966 [2024-07-24 23:17:23.372259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:31:50.966 [2024-07-24 23:17:23.378015] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0)
00:31:50.966 [2024-07-24 23:17:23.378037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:50.966 [2024-07-24 23:17:23.378048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:31:50.966 [2024-07-24 23:17:23.384045] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0)
00:31:50.966 [2024-07-24 23:17:23.384071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:50.966 [2024-07-24 23:17:23.384081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:50.966 [2024-07-24 23:17:23.389626] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0)
00:31:50.966 [2024-07-24 23:17:23.389648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:50.966 [2024-07-24 23:17:23.389658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:31:51.225 [2024-07-24 23:17:23.395033] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0)
00:31:51.225 [2024-07-24 23:17:23.395055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:51.225 [2024-07-24 23:17:23.395065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:31:51.225 [2024-07-24 23:17:23.400504] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0)
00:31:51.225 [2024-07-24 23:17:23.400526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:51.225 [2024-07-24 23:17:23.400536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:31:51.225 [2024-07-24 23:17:23.406952] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0)
00:31:51.225 [2024-07-24 23:17:23.406975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:51.225 [2024-07-24 23:17:23.406985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:51.225 [2024-07-24 23:17:23.413106] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0)
00:31:51.225 [2024-07-24 23:17:23.413128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:51.225 [2024-07-24 23:17:23.413138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:31:51.225 [2024-07-24 23:17:23.419398] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0)
00:31:51.225 [2024-07-24 23:17:23.419420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:51.225 [2024-07-24 23:17:23.419430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:31:51.225 [2024-07-24 23:17:23.425507] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0)
00:31:51.225 [2024-07-24 23:17:23.425529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:51.225 [2024-07-24 23:17:23.425540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:31:51.225 [2024-07-24 23:17:23.431581] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0)
00:31:51.225 [2024-07-24 23:17:23.431604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:51.225 [2024-07-24 23:17:23.431614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:51.225 [2024-07-24 23:17:23.437649] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0)
00:31:51.225 [2024-07-24 23:17:23.437671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:51.225 [2024-07-24 23:17:23.437681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:31:51.225 [2024-07-24 23:17:23.443687] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0)
00:31:51.225 [2024-07-24 23:17:23.443709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:51.225 [2024-07-24 23:17:23.443724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:31:51.225 [2024-07-24 23:17:23.449703] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0)
00:31:51.225 [2024-07-24 23:17:23.449729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:51.225 [2024-07-24 23:17:23.449739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:31:51.225 [2024-07-24 23:17:23.455839] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0)
00:31:51.225 [2024-07-24 23:17:23.455862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:51.225 [2024-07-24 23:17:23.455872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:51.225 [2024-07-24 23:17:23.461879] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0)
00:31:51.225 [2024-07-24 23:17:23.461901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:51.225 [2024-07-24 23:17:23.461911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:31:51.225 [2024-07-24 23:17:23.467943] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0)
00:31:51.225 [2024-07-24 23:17:23.467966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:51.225 [2024-07-24 23:17:23.467976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:31:51.225 [2024-07-24 23:17:23.473925] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0)
00:31:51.225 [2024-07-24 23:17:23.473947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:51.225 [2024-07-24 23:17:23.473957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:31:51.225 [2024-07-24 23:17:23.480025] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0)
00:31:51.225 [2024-07-24 23:17:23.480047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:51.225 [2024-07-24 23:17:23.480057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:51.225 [2024-07-24 23:17:23.486182] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0)
00:31:51.225 [2024-07-24 23:17:23.486204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:51.225 [2024-07-24 23:17:23.486217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:31:51.225 [2024-07-24 23:17:23.492363] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0)
00:31:51.225 [2024-07-24 23:17:23.492383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:51.225 [2024-07-24 23:17:23.492393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:31:51.225 [2024-07-24 23:17:23.498391] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0)
00:31:51.225 [2024-07-24 23:17:23.498413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:51.225 [2024-07-24 23:17:23.498423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:31:51.225 [2024-07-24 23:17:23.504546] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0)
00:31:51.225 [2024-07-24 23:17:23.504567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:51.225 [2024-07-24 23:17:23.504578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:51.225 [2024-07-24 23:17:23.510637] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0)
00:31:51.225 [2024-07-24 23:17:23.510659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:51.225 [2024-07-24 23:17:23.510669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:31:51.225 [2024-07-24 23:17:23.516774] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0)
00:31:51.225 [2024-07-24 23:17:23.516796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:51.226 [2024-07-24 23:17:23.516806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:31:51.226 [2024-07-24 23:17:23.522804] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0)
00:31:51.226 [2024-07-24 23:17:23.522825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:51.226 [2024-07-24 23:17:23.522836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:31:51.226 [2024-07-24 23:17:23.528997] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0)
00:31:51.226 [2024-07-24 23:17:23.529020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:51.226 [2024-07-24 23:17:23.529030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:51.226 [2024-07-24 23:17:23.535143] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0)
00:31:51.226 [2024-07-24 23:17:23.535165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:51.226 [2024-07-24 23:17:23.535175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:31:51.226 [2024-07-24 23:17:23.541362] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0)
00:31:51.226 [2024-07-24 23:17:23.541387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:51.226 [2024-07-24 23:17:23.541397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:31:51.226 [2024-07-24 23:17:23.547406] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0)
00:31:51.226 [2024-07-24 23:17:23.547429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:51.226 [2024-07-24 23:17:23.547439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:31:51.226 [2024-07-24 23:17:23.553470] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0)
00:31:51.226 [2024-07-24 23:17:23.553492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:51.226 [2024-07-24 23:17:23.553502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:51.226 [2024-07-24 23:17:23.559541] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0)
00:31:51.226 [2024-07-24 23:17:23.559563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:51.226 [2024-07-24 23:17:23.559573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:31:51.226 [2024-07-24 23:17:23.565649] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0)
00:31:51.226 [2024-07-24 23:17:23.565671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:51.226 [2024-07-24 23:17:23.565681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:31:51.226 [2024-07-24 23:17:23.571684] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0)
00:31:51.226 [2024-07-24 23:17:23.571706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:51.226 [2024-07-24 23:17:23.571723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:31:51.226 [2024-07-24 23:17:23.577797] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0)
00:31:51.226 [2024-07-24 23:17:23.577819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:51.226 [2024-07-24 23:17:23.577829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:51.226 [2024-07-24 23:17:23.583765] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0)
00:31:51.226 [2024-07-24 23:17:23.583787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:51.226 [2024-07-24 23:17:23.583797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:31:51.226 [2024-07-24 23:17:23.589824] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0)
00:31:51.226 [2024-07-24 23:17:23.589846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:51.226 [2024-07-24 23:17:23.589856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:31:51.226 [2024-07-24 23:17:23.596013] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0)
00:31:51.226 [2024-07-24 23:17:23.596036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:51.226 [2024-07-24 23:17:23.596046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:31:51.226 [2024-07-24 23:17:23.602196] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0)
00:31:51.226 [2024-07-24 23:17:23.602218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:51.226 [2024-07-24 23:17:23.602228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:51.226 [2024-07-24 23:17:23.608466] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0)
00:31:51.226 [2024-07-24 23:17:23.608488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:51.226 [2024-07-24 23:17:23.608498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:31:51.226 [2024-07-24 23:17:23.614736] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0)
00:31:51.226 [2024-07-24 23:17:23.614759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:51.226 [2024-07-24 23:17:23.614770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:31:51.226 [2024-07-24 23:17:23.620962] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0)
00:31:51.226 [2024-07-24 23:17:23.620984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:51.226 [2024-07-24 23:17:23.620994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:31:51.226 [2024-07-24 23:17:23.627182] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0)
00:31:51.226 [2024-07-24 23:17:23.627205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:51.226 [2024-07-24 23:17:23.627214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:51.226 [2024-07-24 23:17:23.633434] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0)
00:31:51.226 [2024-07-24 23:17:23.633455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:51.226 [2024-07-24 23:17:23.633465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:31:51.226 [2024-07-24 23:17:23.639583] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0)
00:31:51.226 [2024-07-24 23:17:23.639604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:51.226 [2024-07-24 23:17:23.639614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:31:51.226 [2024-07-24 23:17:23.645689] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0)
00:31:51.226 [2024-07-24 23:17:23.645711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:51.226 [2024-07-24 23:17:23.645731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:31:51.226 [2024-07-24 23:17:23.651730] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0)
00:31:51.226 [2024-07-24 23:17:23.651753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:51.226 [2024-07-24 23:17:23.651764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:51.485 [2024-07-24 23:17:23.657795] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0)
00:31:51.485 [2024-07-24 23:17:23.657819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:51.485 [2024-07-24 23:17:23.657829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:31:51.485 [2024-07-24 23:17:23.663865] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0)
00:31:51.485 [2024-07-24 23:17:23.663887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:51.485 [2024-07-24 23:17:23.663898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:31:51.485 [2024-07-24 23:17:23.669913] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0)
00:31:51.485 [2024-07-24 23:17:23.669935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:51.485 [2024-07-24 23:17:23.669945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:31:51.485 [2024-07-24 23:17:23.675983] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0)
00:31:51.485 [2024-07-24 23:17:23.676005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:51.485 [2024-07-24 23:17:23.676015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:51.485 [2024-07-24 23:17:23.682121] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0)
00:31:51.485 [2024-07-24 23:17:23.682144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:51.485 [2024-07-24 23:17:23.682154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:31:51.485 [2024-07-24 23:17:23.688205] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0)
00:31:51.485 [2024-07-24 23:17:23.688227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:51.485 [2024-07-24 23:17:23.688237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:31:51.485 [2024-07-24 23:17:23.694306] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0)
00:31:51.485 [2024-07-24 23:17:23.694328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:51.485 [2024-07-24 23:17:23.694338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:31:51.485 [2024-07-24 23:17:23.700370]
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:51.485 [2024-07-24 23:17:23.700393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.485 [2024-07-24 23:17:23.700403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:51.485 [2024-07-24 23:17:23.706688] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:51.485 [2024-07-24 23:17:23.706709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.485 [2024-07-24 23:17:23.706725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:51.485 [2024-07-24 23:17:23.712744] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:51.485 [2024-07-24 23:17:23.712765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.485 [2024-07-24 23:17:23.712775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:51.485 [2024-07-24 23:17:23.718833] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:51.485 [2024-07-24 23:17:23.718855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.485 [2024-07-24 23:17:23.718865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:31:51.485 [2024-07-24 23:17:23.724913] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:51.485 [2024-07-24 23:17:23.724935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.485 [2024-07-24 23:17:23.724945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:51.485 [2024-07-24 23:17:23.731045] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:51.485 [2024-07-24 23:17:23.731066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.485 [2024-07-24 23:17:23.731076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:51.485 [2024-07-24 23:17:23.737177] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:51.485 [2024-07-24 23:17:23.737199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.485 [2024-07-24 23:17:23.737209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:51.485 [2024-07-24 23:17:23.743403] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:51.485 [2024-07-24 23:17:23.743424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.485 [2024-07-24 23:17:23.743437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:51.485 [2024-07-24 23:17:23.749536] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:51.485 [2024-07-24 23:17:23.749558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.485 [2024-07-24 23:17:23.749571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:51.485 [2024-07-24 23:17:23.755677] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:51.485 [2024-07-24 23:17:23.755699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.485 [2024-07-24 23:17:23.755709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:51.485 [2024-07-24 23:17:23.761707] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:51.485 [2024-07-24 23:17:23.761737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.485 [2024-07-24 23:17:23.761747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:51.485 [2024-07-24 23:17:23.767824] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:51.485 [2024-07-24 23:17:23.767846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.485 [2024-07-24 23:17:23.767857] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:51.485 [2024-07-24 23:17:23.773959] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:51.485 [2024-07-24 23:17:23.773982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.485 [2024-07-24 23:17:23.773992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:51.485 [2024-07-24 23:17:23.780108] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:51.485 [2024-07-24 23:17:23.780130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.485 [2024-07-24 23:17:23.780140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:51.485 [2024-07-24 23:17:23.786232] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:51.485 [2024-07-24 23:17:23.786254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.485 [2024-07-24 23:17:23.786264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:51.485 [2024-07-24 23:17:23.792280] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:51.486 [2024-07-24 23:17:23.792301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:31:51.486 [2024-07-24 23:17:23.792312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:51.486 [2024-07-24 23:17:23.798314] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:51.486 [2024-07-24 23:17:23.798336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.486 [2024-07-24 23:17:23.798346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:51.486 [2024-07-24 23:17:23.804340] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:51.486 [2024-07-24 23:17:23.804365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.486 [2024-07-24 23:17:23.804375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:51.486 [2024-07-24 23:17:23.810470] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:51.486 [2024-07-24 23:17:23.810492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.486 [2024-07-24 23:17:23.810503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:51.486 [2024-07-24 23:17:23.816607] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:51.486 [2024-07-24 23:17:23.816629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:12 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.486 [2024-07-24 23:17:23.816639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:51.486 [2024-07-24 23:17:23.822768] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:51.486 [2024-07-24 23:17:23.822790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.486 [2024-07-24 23:17:23.822800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:51.486 [2024-07-24 23:17:23.828826] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:51.486 [2024-07-24 23:17:23.828848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.486 [2024-07-24 23:17:23.828858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:51.486 [2024-07-24 23:17:23.835011] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:51.486 [2024-07-24 23:17:23.835033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.486 [2024-07-24 23:17:23.835043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:51.486 [2024-07-24 23:17:23.841032] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:51.486 [2024-07-24 23:17:23.841054] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.486 [2024-07-24 23:17:23.841064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:51.486 [2024-07-24 23:17:23.847128] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:51.486 [2024-07-24 23:17:23.847150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.486 [2024-07-24 23:17:23.847160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:51.486 [2024-07-24 23:17:23.853168] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:51.486 [2024-07-24 23:17:23.853189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.486 [2024-07-24 23:17:23.853200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:51.486 [2024-07-24 23:17:23.859186] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:51.486 [2024-07-24 23:17:23.859208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.486 [2024-07-24 23:17:23.859218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:51.486 [2024-07-24 23:17:23.865298] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x20308c0) 00:31:51.486 [2024-07-24 23:17:23.865321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.486 [2024-07-24 23:17:23.865331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:51.486 [2024-07-24 23:17:23.871390] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:51.486 [2024-07-24 23:17:23.871412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.486 [2024-07-24 23:17:23.871422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:51.486 [2024-07-24 23:17:23.877463] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:51.486 [2024-07-24 23:17:23.877485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.486 [2024-07-24 23:17:23.877496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:51.486 [2024-07-24 23:17:23.883677] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:51.486 [2024-07-24 23:17:23.883699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.486 [2024-07-24 23:17:23.883709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:51.486 [2024-07-24 23:17:23.889836] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:51.486 [2024-07-24 23:17:23.889859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.486 [2024-07-24 23:17:23.889869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:51.486 [2024-07-24 23:17:23.896499] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:51.486 [2024-07-24 23:17:23.896521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.486 [2024-07-24 23:17:23.896531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:51.486 [2024-07-24 23:17:23.903199] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:51.486 [2024-07-24 23:17:23.903221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.486 [2024-07-24 23:17:23.903231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:51.486 [2024-07-24 23:17:23.909418] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:51.486 [2024-07-24 23:17:23.909440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.486 [2024-07-24 23:17:23.909454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 
sqhd:0041 p:0 m:0 dnr:0 00:31:51.744 [2024-07-24 23:17:23.915601] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:51.744 [2024-07-24 23:17:23.915624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.744 [2024-07-24 23:17:23.915635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:51.744 [2024-07-24 23:17:23.921781] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:51.744 [2024-07-24 23:17:23.921803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.744 [2024-07-24 23:17:23.921814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:51.744 [2024-07-24 23:17:23.927825] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:51.745 [2024-07-24 23:17:23.927847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.745 [2024-07-24 23:17:23.927858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:51.745 [2024-07-24 23:17:23.933908] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:51.745 [2024-07-24 23:17:23.933931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.745 [2024-07-24 23:17:23.933941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:51.745 [2024-07-24 23:17:23.939901] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:51.745 [2024-07-24 23:17:23.939924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.745 [2024-07-24 23:17:23.939933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:51.745 [2024-07-24 23:17:23.945930] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:51.745 [2024-07-24 23:17:23.945952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.745 [2024-07-24 23:17:23.945962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:51.745 [2024-07-24 23:17:23.952067] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:51.745 [2024-07-24 23:17:23.952089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.745 [2024-07-24 23:17:23.952099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:51.745 [2024-07-24 23:17:23.958156] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:51.745 [2024-07-24 23:17:23.958178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.745 [2024-07-24 
23:17:23.958188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:51.745 [2024-07-24 23:17:23.964222] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:51.745 [2024-07-24 23:17:23.964244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.745 [2024-07-24 23:17:23.964254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:51.745 [2024-07-24 23:17:23.970300] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:51.745 [2024-07-24 23:17:23.970322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.745 [2024-07-24 23:17:23.970332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:51.745 [2024-07-24 23:17:23.976342] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:51.745 [2024-07-24 23:17:23.976364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.745 [2024-07-24 23:17:23.976375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:51.745 [2024-07-24 23:17:23.982394] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:51.745 [2024-07-24 23:17:23.982416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.745 [2024-07-24 23:17:23.982426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:51.745 [2024-07-24 23:17:23.988475] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:51.745 [2024-07-24 23:17:23.988497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.745 [2024-07-24 23:17:23.988508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:51.745 [2024-07-24 23:17:23.994606] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:51.745 [2024-07-24 23:17:23.994628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.745 [2024-07-24 23:17:23.994639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:51.745 [2024-07-24 23:17:24.000780] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:51.745 [2024-07-24 23:17:24.000802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.745 [2024-07-24 23:17:24.000813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:51.745 [2024-07-24 23:17:24.006931] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:51.745 [2024-07-24 23:17:24.006954] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.745 [2024-07-24 23:17:24.006964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:51.745 [2024-07-24 23:17:24.013085] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:51.745 [2024-07-24 23:17:24.013107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.745 [2024-07-24 23:17:24.013120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:51.745 [2024-07-24 23:17:24.019216] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:51.745 [2024-07-24 23:17:24.019238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.745 [2024-07-24 23:17:24.019248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:51.745 [2024-07-24 23:17:24.025367] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:51.745 [2024-07-24 23:17:24.025389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.745 [2024-07-24 23:17:24.025399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:51.745 [2024-07-24 23:17:24.031520] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x20308c0) 00:31:51.745 [2024-07-24 23:17:24.031542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.745 [2024-07-24 23:17:24.031553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:51.745 [2024-07-24 23:17:24.037721] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:51.745 [2024-07-24 23:17:24.037743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.745 [2024-07-24 23:17:24.037753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:51.745 [2024-07-24 23:17:24.044026] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:51.745 [2024-07-24 23:17:24.044049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.745 [2024-07-24 23:17:24.044059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:51.745 [2024-07-24 23:17:24.050179] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:51.745 [2024-07-24 23:17:24.050201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.745 [2024-07-24 23:17:24.050212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:51.745 [2024-07-24 23:17:24.056432] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:51.745 [2024-07-24 23:17:24.056454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.745 [2024-07-24 23:17:24.056465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:51.745 [2024-07-24 23:17:24.062594] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:51.746 [2024-07-24 23:17:24.062616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.746 [2024-07-24 23:17:24.062627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:51.746 [2024-07-24 23:17:24.068826] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:51.746 [2024-07-24 23:17:24.068851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.746 [2024-07-24 23:17:24.068861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:51.746 [2024-07-24 23:17:24.075019] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:51.746 [2024-07-24 23:17:24.075041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.746 [2024-07-24 23:17:24.075051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 
sqhd:0021 p:0 m:0 dnr:0 00:31:51.746 [2024-07-24 23:17:24.081189] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:51.746 [2024-07-24 23:17:24.081212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.746 [2024-07-24 23:17:24.081222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:51.746 [2024-07-24 23:17:24.087297] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:51.746 [2024-07-24 23:17:24.087319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.746 [2024-07-24 23:17:24.087329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:51.746 [2024-07-24 23:17:24.093463] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:51.746 [2024-07-24 23:17:24.093485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.746 [2024-07-24 23:17:24.093495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:51.746 [2024-07-24 23:17:24.099642] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:51.746 [2024-07-24 23:17:24.099664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.746 [2024-07-24 23:17:24.099675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:51.746 [2024-07-24 23:17:24.105778] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:51.746 [2024-07-24 23:17:24.105801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.746 [2024-07-24 23:17:24.105811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:51.746 [2024-07-24 23:17:24.112024] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:51.746 [2024-07-24 23:17:24.112045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.746 [2024-07-24 23:17:24.112055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:51.746 [2024-07-24 23:17:24.118264] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:51.746 [2024-07-24 23:17:24.118287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.746 [2024-07-24 23:17:24.118297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:51.746 [2024-07-24 23:17:24.124465] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:51.746 [2024-07-24 23:17:24.124485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.746 [2024-07-24 
23:17:24.124496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:51.746 [2024-07-24 23:17:24.130601] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:51.746 [2024-07-24 23:17:24.130624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.746 [2024-07-24 23:17:24.130634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:51.746 [2024-07-24 23:17:24.136808] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:51.746 [2024-07-24 23:17:24.136830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.746 [2024-07-24 23:17:24.136840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:51.746 [2024-07-24 23:17:24.143027] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:51.746 [2024-07-24 23:17:24.143049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.746 [2024-07-24 23:17:24.143059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:51.746 [2024-07-24 23:17:24.149210] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:51.746 [2024-07-24 23:17:24.149232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12928 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.746 [2024-07-24 23:17:24.149243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:51.746 [2024-07-24 23:17:24.155352] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:51.746 [2024-07-24 23:17:24.155374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.746 [2024-07-24 23:17:24.155384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:51.746 [2024-07-24 23:17:24.161490] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:51.746 [2024-07-24 23:17:24.161512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.746 [2024-07-24 23:17:24.161522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:51.746 [2024-07-24 23:17:24.167675] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:51.746 [2024-07-24 23:17:24.167697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.746 [2024-07-24 23:17:24.167707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:51.746 [2024-07-24 23:17:24.173797] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:51.746 [2024-07-24 23:17:24.173820] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.005 [2024-07-24 23:17:24.173833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:52.005 [2024-07-24 23:17:24.180038] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:52.005 [2024-07-24 23:17:24.180060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.005 [2024-07-24 23:17:24.180070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:52.005 [2024-07-24 23:17:24.186294] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:52.005 [2024-07-24 23:17:24.186316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.005 [2024-07-24 23:17:24.186326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:52.005 [2024-07-24 23:17:24.192426] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:52.005 [2024-07-24 23:17:24.192448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.005 [2024-07-24 23:17:24.192459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:52.005 [2024-07-24 23:17:24.198538] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 
00:31:52.005 [2024-07-24 23:17:24.198559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.005 [2024-07-24 23:17:24.198570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:52.005 [2024-07-24 23:17:24.205980] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:52.005 [2024-07-24 23:17:24.206003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.005 [2024-07-24 23:17:24.206013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:52.005 [2024-07-24 23:17:24.214699] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:52.005 [2024-07-24 23:17:24.214728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.005 [2024-07-24 23:17:24.214739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:52.005 [2024-07-24 23:17:24.222654] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:52.005 [2024-07-24 23:17:24.222676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.005 [2024-07-24 23:17:24.222687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:52.005 [2024-07-24 23:17:24.230368] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:52.005 [2024-07-24 23:17:24.230391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.005 [2024-07-24 23:17:24.230401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:52.005 [2024-07-24 23:17:24.238935] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:52.005 [2024-07-24 23:17:24.238961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.005 [2024-07-24 23:17:24.238972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:52.005 [2024-07-24 23:17:24.247356] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:52.005 [2024-07-24 23:17:24.247379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.005 [2024-07-24 23:17:24.247390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:52.005 [2024-07-24 23:17:24.256595] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:52.005 [2024-07-24 23:17:24.256619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.005 [2024-07-24 23:17:24.256630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:31:52.005 [2024-07-24 23:17:24.264866] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:52.005 [2024-07-24 23:17:24.264889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.005 [2024-07-24 23:17:24.264900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:52.005 [2024-07-24 23:17:24.273058] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:52.005 [2024-07-24 23:17:24.273082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.005 [2024-07-24 23:17:24.273093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:52.005 [2024-07-24 23:17:24.281739] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:52.005 [2024-07-24 23:17:24.281762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.005 [2024-07-24 23:17:24.281773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:52.005 [2024-07-24 23:17:24.288871] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:52.005 [2024-07-24 23:17:24.288896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.005 [2024-07-24 23:17:24.288906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:52.005 [2024-07-24 23:17:24.295590] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:52.005 [2024-07-24 23:17:24.295612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.005 [2024-07-24 23:17:24.295623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:52.005 [2024-07-24 23:17:24.301866] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:52.005 [2024-07-24 23:17:24.301888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.005 [2024-07-24 23:17:24.301898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:52.005 [2024-07-24 23:17:24.307195] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:52.005 [2024-07-24 23:17:24.307218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.005 [2024-07-24 23:17:24.307230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:52.005 [2024-07-24 23:17:24.312773] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:52.005 [2024-07-24 23:17:24.312797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.005 [2024-07-24 
23:17:24.312808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:52.005 [2024-07-24 23:17:24.317182] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:52.005 [2024-07-24 23:17:24.317204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.005 [2024-07-24 23:17:24.317215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:52.005 [2024-07-24 23:17:24.321131] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:52.005 [2024-07-24 23:17:24.321154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.005 [2024-07-24 23:17:24.321164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:52.005 [2024-07-24 23:17:24.327630] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:52.005 [2024-07-24 23:17:24.327655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.005 [2024-07-24 23:17:24.327666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:52.005 [2024-07-24 23:17:24.334666] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:52.005 [2024-07-24 23:17:24.334690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15488 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.005 [2024-07-24 23:17:24.334701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:52.005 [2024-07-24 23:17:24.342279] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:52.005 [2024-07-24 23:17:24.342302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.005 [2024-07-24 23:17:24.342312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:52.005 [2024-07-24 23:17:24.349291] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:52.006 [2024-07-24 23:17:24.349314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.006 [2024-07-24 23:17:24.349324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:52.006 [2024-07-24 23:17:24.355723] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:52.006 [2024-07-24 23:17:24.355745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.006 [2024-07-24 23:17:24.355759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:52.006 [2024-07-24 23:17:24.362090] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:52.006 [2024-07-24 23:17:24.362113] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.006 [2024-07-24 23:17:24.362124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:52.006 [2024-07-24 23:17:24.368434] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:52.006 [2024-07-24 23:17:24.368456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.006 [2024-07-24 23:17:24.368467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:52.006 [2024-07-24 23:17:24.374399] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:52.006 [2024-07-24 23:17:24.374421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.006 [2024-07-24 23:17:24.374432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:52.006 [2024-07-24 23:17:24.379438] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:52.006 [2024-07-24 23:17:24.379460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.006 [2024-07-24 23:17:24.379471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:52.006 [2024-07-24 23:17:24.384412] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:52.006 [2024-07-24 
23:17:24.384434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.006 [2024-07-24 23:17:24.384446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:52.006 [2024-07-24 23:17:24.389757] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:52.006 [2024-07-24 23:17:24.389779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.006 [2024-07-24 23:17:24.389790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:52.006 [2024-07-24 23:17:24.394857] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:52.006 [2024-07-24 23:17:24.394878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.006 [2024-07-24 23:17:24.394889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:52.006 [2024-07-24 23:17:24.400211] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:52.006 [2024-07-24 23:17:24.400234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.006 [2024-07-24 23:17:24.400244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:52.006 [2024-07-24 23:17:24.406232] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x20308c0) 00:31:52.006 [2024-07-24 23:17:24.406255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.006 [2024-07-24 23:17:24.406266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:52.006 [2024-07-24 23:17:24.412463] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:52.006 [2024-07-24 23:17:24.412486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.006 [2024-07-24 23:17:24.412497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:52.006 [2024-07-24 23:17:24.418644] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:52.006 [2024-07-24 23:17:24.418666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.006 [2024-07-24 23:17:24.418676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:52.006 [2024-07-24 23:17:24.424955] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:52.006 [2024-07-24 23:17:24.424977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.006 [2024-07-24 23:17:24.424988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:52.006 [2024-07-24 23:17:24.431057] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:52.006 [2024-07-24 23:17:24.431080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.006 [2024-07-24 23:17:24.431090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:52.264 [2024-07-24 23:17:24.437261] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:52.264 [2024-07-24 23:17:24.437299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.264 [2024-07-24 23:17:24.437309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:52.264 [2024-07-24 23:17:24.443472] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:52.264 [2024-07-24 23:17:24.443493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.264 [2024-07-24 23:17:24.443503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:52.264 [2024-07-24 23:17:24.449721] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:52.264 [2024-07-24 23:17:24.449743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.264 [2024-07-24 23:17:24.449753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:31:52.264 [2024-07-24 23:17:24.456044] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:52.264 [2024-07-24 23:17:24.456066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.264 [2024-07-24 23:17:24.456080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:52.264 [2024-07-24 23:17:24.462349] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:52.264 [2024-07-24 23:17:24.462371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.264 [2024-07-24 23:17:24.462381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:52.264 [2024-07-24 23:17:24.468644] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:52.264 [2024-07-24 23:17:24.468666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.264 [2024-07-24 23:17:24.468677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:52.264 [2024-07-24 23:17:24.474243] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:52.264 [2024-07-24 23:17:24.474265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.264 [2024-07-24 23:17:24.474276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:52.264 [2024-07-24 23:17:24.480735] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:52.264 [2024-07-24 23:17:24.480758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.264 [2024-07-24 23:17:24.480768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:52.264 [2024-07-24 23:17:24.486098] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:52.264 [2024-07-24 23:17:24.486119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.264 [2024-07-24 23:17:24.486130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:52.264 [2024-07-24 23:17:24.491205] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:52.264 [2024-07-24 23:17:24.491228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.264 [2024-07-24 23:17:24.491238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:52.264 [2024-07-24 23:17:24.497480] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:52.264 [2024-07-24 23:17:24.497502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.264 [2024-07-24 23:17:24.497513] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:52.264 [2024-07-24 23:17:24.504798] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:52.264 [2024-07-24 23:17:24.504821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.264 [2024-07-24 23:17:24.504831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:52.264 [2024-07-24 23:17:24.513522] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:52.264 [2024-07-24 23:17:24.513549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.264 [2024-07-24 23:17:24.513560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:52.264 [2024-07-24 23:17:24.522011] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:52.264 [2024-07-24 23:17:24.522034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.264 [2024-07-24 23:17:24.522045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:52.264 [2024-07-24 23:17:24.530502] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:52.264 [2024-07-24 23:17:24.530526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:31:52.264 [2024-07-24 23:17:24.530537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:52.264 [2024-07-24 23:17:24.539242] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:52.264 [2024-07-24 23:17:24.539266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.264 [2024-07-24 23:17:24.539276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:52.264 [2024-07-24 23:17:24.548101] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:52.264 [2024-07-24 23:17:24.548125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.264 [2024-07-24 23:17:24.548137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:52.264 [2024-07-24 23:17:24.557259] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:52.264 [2024-07-24 23:17:24.557283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.264 [2024-07-24 23:17:24.557293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:52.264 [2024-07-24 23:17:24.566663] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20308c0) 00:31:52.264 [2024-07-24 23:17:24.566687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:11 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.264 [2024-07-24 23:17:24.566698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:52.264 00:31:52.264 Latency(us) 00:31:52.264 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:52.264 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:31:52.264 nvme0n1 : 2.00 4389.75 548.72 0.00 0.00 3642.69 491.52 14260.63 00:31:52.264 =================================================================================================================== 00:31:52.264 Total : 4389.75 548.72 0.00 0.00 3642.69 491.52 14260.63 00:31:52.264 0 00:31:52.264 23:17:24 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:31:52.265 23:17:24 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:31:52.265 23:17:24 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:31:52.265 23:17:24 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:31:52.265 | .driver_specific 00:31:52.265 | .nvme_error 00:31:52.265 | .status_code 00:31:52.265 | .command_transient_transport_error' 00:31:52.523 23:17:24 -- host/digest.sh@71 -- # (( 283 > 0 )) 00:31:52.523 23:17:24 -- host/digest.sh@73 -- # killprocess 3400322 00:31:52.523 23:17:24 -- common/autotest_common.sh@926 -- # '[' -z 3400322 ']' 00:31:52.523 23:17:24 -- common/autotest_common.sh@930 -- # kill -0 3400322 00:31:52.523 23:17:24 -- common/autotest_common.sh@931 -- # uname 00:31:52.523 23:17:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:52.523 23:17:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3400322 00:31:52.523 23:17:24 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:31:52.523 23:17:24 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 
00:31:52.523 23:17:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3400322' 00:31:52.523 killing process with pid 3400322 00:31:52.523 23:17:24 -- common/autotest_common.sh@945 -- # kill 3400322 00:31:52.523 Received shutdown signal, test time was about 2.000000 seconds 00:31:52.523 00:31:52.523 Latency(us) 00:31:52.523 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:52.523 =================================================================================================================== 00:31:52.523 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:52.523 23:17:24 -- common/autotest_common.sh@950 -- # wait 3400322 00:31:52.781 23:17:24 -- host/digest.sh@113 -- # run_bperf_err randwrite 4096 128 00:31:52.781 23:17:24 -- host/digest.sh@54 -- # local rw bs qd 00:31:52.781 23:17:24 -- host/digest.sh@56 -- # rw=randwrite 00:31:52.781 23:17:24 -- host/digest.sh@56 -- # bs=4096 00:31:52.781 23:17:24 -- host/digest.sh@56 -- # qd=128 00:31:52.781 23:17:24 -- host/digest.sh@58 -- # bperfpid=3401103 00:31:52.781 23:17:24 -- host/digest.sh@60 -- # waitforlisten 3401103 /var/tmp/bperf.sock 00:31:52.781 23:17:24 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:31:52.781 23:17:24 -- common/autotest_common.sh@819 -- # '[' -z 3401103 ']' 00:31:52.781 23:17:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:52.781 23:17:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:52.781 23:17:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:52.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:31:52.781 23:17:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:52.781 23:17:24 -- common/autotest_common.sh@10 -- # set +x 00:31:52.781 [2024-07-24 23:17:25.035667] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:31:52.781 [2024-07-24 23:17:25.035726] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3401103 ] 00:31:52.781 EAL: No free 2048 kB hugepages reported on node 1 00:31:52.781 [2024-07-24 23:17:25.105429] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:52.781 [2024-07-24 23:17:25.137908] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:53.715 23:17:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:53.715 23:17:25 -- common/autotest_common.sh@852 -- # return 0 00:31:53.715 23:17:25 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:31:53.715 23:17:25 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:31:53.715 23:17:25 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:31:53.715 23:17:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:53.715 23:17:25 -- common/autotest_common.sh@10 -- # set +x 00:31:53.715 23:17:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:53.715 23:17:26 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:53.715 23:17:26 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 
00:31:53.973 nvme0n1 00:31:53.973 23:17:26 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:31:53.973 23:17:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:53.973 23:17:26 -- common/autotest_common.sh@10 -- # set +x 00:31:53.973 23:17:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:53.973 23:17:26 -- host/digest.sh@69 -- # bperf_py perform_tests 00:31:53.973 23:17:26 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:53.973 Running I/O for 2 seconds... 00:31:53.973 [2024-07-24 23:17:26.376485] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fc560 00:31:53.973 [2024-07-24 23:17:26.377196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.973 [2024-07-24 23:17:26.377228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:53.973 [2024-07-24 23:17:26.385696] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:53.973 [2024-07-24 23:17:26.385934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.973 [2024-07-24 23:17:26.385959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:53.973 [2024-07-24 23:17:26.394783] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:53.973 [2024-07-24 23:17:26.395015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11393 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.973 [2024-07-24 
23:17:26.395035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.232 [2024-07-24 23:17:26.403880] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:54.232 [2024-07-24 23:17:26.404131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:20330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.232 [2024-07-24 23:17:26.404151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.232 [2024-07-24 23:17:26.412878] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:54.232 [2024-07-24 23:17:26.413128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:5766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.232 [2024-07-24 23:17:26.413147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.232 [2024-07-24 23:17:26.421814] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:54.232 [2024-07-24 23:17:26.422065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:16976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.232 [2024-07-24 23:17:26.422086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.232 [2024-07-24 23:17:26.430729] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:54.232 [2024-07-24 23:17:26.430996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4917 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:31:54.232 [2024-07-24 23:17:26.431016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.232 [2024-07-24 23:17:26.439602] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:54.232 [2024-07-24 23:17:26.439874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:20171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.232 [2024-07-24 23:17:26.439895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.232 [2024-07-24 23:17:26.448502] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:54.232 [2024-07-24 23:17:26.448746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.232 [2024-07-24 23:17:26.448766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.232 [2024-07-24 23:17:26.457324] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:54.232 [2024-07-24 23:17:26.457565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:22039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.232 [2024-07-24 23:17:26.457584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.232 [2024-07-24 23:17:26.466201] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:54.232 [2024-07-24 23:17:26.466432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:70 nsid:1 lba:11164 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.232 [2024-07-24 23:17:26.466452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.232 [2024-07-24 23:17:26.475227] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:54.232 [2024-07-24 23:17:26.475478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:20644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.232 [2024-07-24 23:17:26.475498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.232 [2024-07-24 23:17:26.484077] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:54.232 [2024-07-24 23:17:26.484323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:10446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.232 [2024-07-24 23:17:26.484343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.232 [2024-07-24 23:17:26.492944] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:54.232 [2024-07-24 23:17:26.493178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:12163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.232 [2024-07-24 23:17:26.493198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.232 [2024-07-24 23:17:26.501767] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:54.232 [2024-07-24 23:17:26.502035] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.232 [2024-07-24 23:17:26.502055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.232 [2024-07-24 23:17:26.510577] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:54.233 [2024-07-24 23:17:26.510850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.233 [2024-07-24 23:17:26.510873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.233 [2024-07-24 23:17:26.519431] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:54.233 [2024-07-24 23:17:26.519677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:3603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.233 [2024-07-24 23:17:26.519696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.233 [2024-07-24 23:17:26.528271] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:54.233 [2024-07-24 23:17:26.528520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:9039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.233 [2024-07-24 23:17:26.528542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.233 [2024-07-24 23:17:26.537081] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:54.233 
[2024-07-24 23:17:26.537320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:21585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.233 [2024-07-24 23:17:26.537339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.233 [2024-07-24 23:17:26.545931] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:54.233 [2024-07-24 23:17:26.546198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.233 [2024-07-24 23:17:26.546218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.233 [2024-07-24 23:17:26.554743] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:54.233 [2024-07-24 23:17:26.555017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:5960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.233 [2024-07-24 23:17:26.555036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.233 [2024-07-24 23:17:26.563629] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:54.233 [2024-07-24 23:17:26.563876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:15974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.233 [2024-07-24 23:17:26.563895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.233 [2024-07-24 23:17:26.572452] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:54.233 [2024-07-24 23:17:26.572699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:19263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.233 [2024-07-24 23:17:26.572722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.233 [2024-07-24 23:17:26.581310] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:54.233 [2024-07-24 23:17:26.581549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:15319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.233 [2024-07-24 23:17:26.581568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.233 [2024-07-24 23:17:26.590209] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:54.233 [2024-07-24 23:17:26.590467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:25261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.233 [2024-07-24 23:17:26.590489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.233 [2024-07-24 23:17:26.599060] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:54.233 [2024-07-24 23:17:26.599298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:21476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.233 [2024-07-24 23:17:26.599318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.233 [2024-07-24 23:17:26.607868] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:54.233 [2024-07-24 23:17:26.608131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:24862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.233 [2024-07-24 23:17:26.608150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.233 [2024-07-24 23:17:26.616769] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:54.233 [2024-07-24 23:17:26.617030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.233 [2024-07-24 23:17:26.617050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.233 [2024-07-24 23:17:26.625653] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:54.233 [2024-07-24 23:17:26.625918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.233 [2024-07-24 23:17:26.625937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.233 [2024-07-24 23:17:26.634590] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:54.233 [2024-07-24 23:17:26.634840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:12053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.233 [2024-07-24 23:17:26.634859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:31:54.233 [2024-07-24 23:17:26.643690] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:54.233 [2024-07-24 23:17:26.643948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:13686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.233 [2024-07-24 23:17:26.643968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.233 [2024-07-24 23:17:26.652649] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:54.233 [2024-07-24 23:17:26.652931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:19103 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.233 [2024-07-24 23:17:26.652950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.492 [2024-07-24 23:17:26.661746] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:54.492 [2024-07-24 23:17:26.661997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.492 [2024-07-24 23:17:26.662017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.492 [2024-07-24 23:17:26.670710] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:54.492 [2024-07-24 23:17:26.670964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:21875 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.492 [2024-07-24 23:17:26.670984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:12 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.492 [2024-07-24 23:17:26.679567] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:54.492 [2024-07-24 23:17:26.679813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:7038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.492 [2024-07-24 23:17:26.679833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.492 [2024-07-24 23:17:26.688438] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:54.492 [2024-07-24 23:17:26.688692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:19590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.492 [2024-07-24 23:17:26.688711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.492 [2024-07-24 23:17:26.697319] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:54.492 [2024-07-24 23:17:26.697575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:12482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.492 [2024-07-24 23:17:26.697594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.492 [2024-07-24 23:17:26.706471] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:54.492 [2024-07-24 23:17:26.706710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:20651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.492 [2024-07-24 23:17:26.706734] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.492 [2024-07-24 23:17:26.715297] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:54.492 [2024-07-24 23:17:26.715536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:14367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.492 [2024-07-24 23:17:26.715555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.492 [2024-07-24 23:17:26.724171] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:54.492 [2024-07-24 23:17:26.724412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:23310 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.492 [2024-07-24 23:17:26.724432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.492 [2024-07-24 23:17:26.733098] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:54.492 [2024-07-24 23:17:26.733359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.492 [2024-07-24 23:17:26.733378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.492 [2024-07-24 23:17:26.741980] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:54.492 [2024-07-24 23:17:26.742226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.492 [2024-07-24 23:17:26.742246] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.492 [2024-07-24 23:17:26.750815] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:54.492 [2024-07-24 23:17:26.751073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:2029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.492 [2024-07-24 23:17:26.751093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.492 [2024-07-24 23:17:26.759678] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:54.492 [2024-07-24 23:17:26.759925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:14443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.492 [2024-07-24 23:17:26.759945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.492 [2024-07-24 23:17:26.768533] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:54.492 [2024-07-24 23:17:26.768790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:22331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.492 [2024-07-24 23:17:26.768809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.492 [2024-07-24 23:17:26.777428] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:54.492 [2024-07-24 23:17:26.777687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8518 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:31:54.492 [2024-07-24 23:17:26.777706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.492 [2024-07-24 23:17:26.786257] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:54.492 [2024-07-24 23:17:26.786498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:11572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.492 [2024-07-24 23:17:26.786517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.492 [2024-07-24 23:17:26.795065] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:54.492 [2024-07-24 23:17:26.795312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.492 [2024-07-24 23:17:26.795331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.492 [2024-07-24 23:17:26.803896] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:54.492 [2024-07-24 23:17:26.804144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:19826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.492 [2024-07-24 23:17:26.804165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.492 [2024-07-24 23:17:26.812755] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:54.492 [2024-07-24 23:17:26.813015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 
lba:7407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.492 [2024-07-24 23:17:26.813034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.492 [2024-07-24 23:17:26.821577] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:54.492 [2024-07-24 23:17:26.821861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:24040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.492 [2024-07-24 23:17:26.821885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.492 [2024-07-24 23:17:26.830452] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:54.492 [2024-07-24 23:17:26.830707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:11872 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.492 [2024-07-24 23:17:26.830730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.492 [2024-07-24 23:17:26.839263] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:54.492 [2024-07-24 23:17:26.839520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:2262 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.492 [2024-07-24 23:17:26.839539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.492 [2024-07-24 23:17:26.848140] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:54.492 [2024-07-24 23:17:26.848379] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.493 [2024-07-24 23:17:26.848398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.493 [2024-07-24 23:17:26.856990] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:54.493 [2024-07-24 23:17:26.857249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.493 [2024-07-24 23:17:26.857268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.493 [2024-07-24 23:17:26.865826] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:54.493 [2024-07-24 23:17:26.866067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:17232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.493 [2024-07-24 23:17:26.866086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.493 [2024-07-24 23:17:26.874721] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:54.493 [2024-07-24 23:17:26.874969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:16677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.493 [2024-07-24 23:17:26.874988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.493 [2024-07-24 23:17:26.883506] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 
00:31:54.493 [2024-07-24 23:17:26.883757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:21779 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.493 [2024-07-24 23:17:26.883776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.493 [2024-07-24 23:17:26.892347] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:54.493 [2024-07-24 23:17:26.892587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.493 [2024-07-24 23:17:26.892606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.493 [2024-07-24 23:17:26.901439] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:54.493 [2024-07-24 23:17:26.901704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:13250 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.493 [2024-07-24 23:17:26.901727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.493 [2024-07-24 23:17:26.910434] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:54.493 [2024-07-24 23:17:26.910693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:22734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.493 [2024-07-24 23:17:26.910713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.493 [2024-07-24 23:17:26.919532] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:54.493 [2024-07-24 23:17:26.919799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:25333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.493 [2024-07-24 23:17:26.919819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.751 [2024-07-24 23:17:26.928532] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:54.751 [2024-07-24 23:17:26.928799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:20886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.751 [2024-07-24 23:17:26.928818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.751 [2024-07-24 23:17:26.937400] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:54.751 [2024-07-24 23:17:26.937663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:23002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.751 [2024-07-24 23:17:26.937682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.751 [2024-07-24 23:17:26.946336] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:54.751 [2024-07-24 23:17:26.946577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:13042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.751 [2024-07-24 23:17:26.946596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.751 [2024-07-24 23:17:26.955153] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:54.751 [2024-07-24 23:17:26.955409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:19106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.751 [2024-07-24 23:17:26.955428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.751 [2024-07-24 23:17:26.963972] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:54.751 [2024-07-24 23:17:26.964231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.751 [2024-07-24 23:17:26.964250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.751 [2024-07-24 23:17:26.972873] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:54.751 [2024-07-24 23:17:26.973141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.751 [2024-07-24 23:17:26.973161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.751 [2024-07-24 23:17:26.981720] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:54.752 [2024-07-24 23:17:26.981993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:18785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.752 [2024-07-24 23:17:26.982013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:31:54.752 [2024-07-24 23:17:26.990591] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:54.752 [2024-07-24 23:17:26.990833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:2883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.752 [2024-07-24 23:17:26.990852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.752 [2024-07-24 23:17:26.999422] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:54.752 [2024-07-24 23:17:26.999662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:11227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.752 [2024-07-24 23:17:26.999682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.752 [2024-07-24 23:17:27.008273] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:54.752 [2024-07-24 23:17:27.008528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.752 [2024-07-24 23:17:27.008547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.752 [2024-07-24 23:17:27.017206] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:54.752 [2024-07-24 23:17:27.017482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:8857 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.752 [2024-07-24 23:17:27.017501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:12 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.752 [2024-07-24 23:17:27.026041] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:54.752 [2024-07-24 23:17:27.026287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:23093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.752 [2024-07-24 23:17:27.026307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.752 [2024-07-24 23:17:27.034841] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:54.752 [2024-07-24 23:17:27.035082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.752 [2024-07-24 23:17:27.035102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.752 [2024-07-24 23:17:27.043664] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:54.752 [2024-07-24 23:17:27.043927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:10556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.752 [2024-07-24 23:17:27.043947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.752 [2024-07-24 23:17:27.052577] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:54.752 [2024-07-24 23:17:27.052836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:3018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.752 [2024-07-24 23:17:27.052858] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.752 [2024-07-24 23:17:27.061502] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:54.752 [2024-07-24 23:17:27.061762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:9475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.752 [2024-07-24 23:17:27.061782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.752 [2024-07-24 23:17:27.070372] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:54.752 [2024-07-24 23:17:27.070612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:10877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.752 [2024-07-24 23:17:27.070631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.752 [2024-07-24 23:17:27.079190] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:54.752 [2024-07-24 23:17:27.079451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.752 [2024-07-24 23:17:27.079470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.752 [2024-07-24 23:17:27.088068] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:54.752 [2024-07-24 23:17:27.088305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.752 [2024-07-24 23:17:27.088324] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.752 [2024-07-24 23:17:27.096911] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:54.752 [2024-07-24 23:17:27.097169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.752 [2024-07-24 23:17:27.097188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.752 [2024-07-24 23:17:27.105769] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:54.752 [2024-07-24 23:17:27.106053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:14322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.752 [2024-07-24 23:17:27.106072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.752 [2024-07-24 23:17:27.114658] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:54.752 [2024-07-24 23:17:27.114920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:16140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.752 [2024-07-24 23:17:27.114940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.752 [2024-07-24 23:17:27.123470] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:54.752 [2024-07-24 23:17:27.123722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22598 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:31:54.752 [2024-07-24 23:17:27.123741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.752 [2024-07-24 23:17:27.132351] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:54.752 [2024-07-24 23:17:27.132612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:1365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.752 [2024-07-24 23:17:27.132631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.752 [2024-07-24 23:17:27.141226] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:54.752 [2024-07-24 23:17:27.141482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:14969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.752 [2024-07-24 23:17:27.141500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.752 [2024-07-24 23:17:27.150054] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:54.752 [2024-07-24 23:17:27.150313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:13939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.752 [2024-07-24 23:17:27.150331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.752 [2024-07-24 23:17:27.159196] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:54.752 [2024-07-24 23:17:27.159459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 
lba:2399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.752 [2024-07-24 23:17:27.159478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.752 [2024-07-24 23:17:27.168122] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:54.752 [2024-07-24 23:17:27.168379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:12509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.752 [2024-07-24 23:17:27.168398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.752 [2024-07-24 23:17:27.177123] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:54.752 [2024-07-24 23:17:27.177379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.752 [2024-07-24 23:17:27.177399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.011 [2024-07-24 23:17:27.186242] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.011 [2024-07-24 23:17:27.186483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:14265 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.011 [2024-07-24 23:17:27.186502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.011 [2024-07-24 23:17:27.195100] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.011 [2024-07-24 23:17:27.195341] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.011 [2024-07-24 23:17:27.195360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.011 [2024-07-24 23:17:27.203976] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.011 [2024-07-24 23:17:27.204252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.011 [2024-07-24 23:17:27.204272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.011 [2024-07-24 23:17:27.212869] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.011 [2024-07-24 23:17:27.213114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:11209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.011 [2024-07-24 23:17:27.213133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.011 [2024-07-24 23:17:27.221663] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.011 [2024-07-24 23:17:27.221910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:23169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.011 [2024-07-24 23:17:27.221929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.011 [2024-07-24 23:17:27.230511] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.012 
[2024-07-24 23:17:27.230758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:2807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.012 [2024-07-24 23:17:27.230777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.012 [2024-07-24 23:17:27.239362] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.012 [2024-07-24 23:17:27.239604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:42 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.012 [2024-07-24 23:17:27.239623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.012 [2024-07-24 23:17:27.248324] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.012 [2024-07-24 23:17:27.248585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:25426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.012 [2024-07-24 23:17:27.248605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.012 [2024-07-24 23:17:27.257254] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.012 [2024-07-24 23:17:27.257510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:11166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.012 [2024-07-24 23:17:27.257529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.012 [2024-07-24 23:17:27.266048] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.012 [2024-07-24 23:17:27.266305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:17252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.012 [2024-07-24 23:17:27.266323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.012 [2024-07-24 23:17:27.274925] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.012 [2024-07-24 23:17:27.275166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:20756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.012 [2024-07-24 23:17:27.275184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.012 [2024-07-24 23:17:27.283831] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.012 [2024-07-24 23:17:27.284120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:21867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.012 [2024-07-24 23:17:27.284143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.012 [2024-07-24 23:17:27.292668] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.012 [2024-07-24 23:17:27.292932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:23318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.012 [2024-07-24 23:17:27.292952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.012 [2024-07-24 23:17:27.301562] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.012 [2024-07-24 23:17:27.301805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:15190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.012 [2024-07-24 23:17:27.301824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.012 [2024-07-24 23:17:27.310384] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.012 [2024-07-24 23:17:27.310625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.012 [2024-07-24 23:17:27.310644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.012 [2024-07-24 23:17:27.319282] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.012 [2024-07-24 23:17:27.319540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.012 [2024-07-24 23:17:27.319559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.012 [2024-07-24 23:17:27.328195] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.012 [2024-07-24 23:17:27.328453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:9752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.012 [2024-07-24 23:17:27.328473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:31:55.012 [2024-07-24 23:17:27.337052] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.012 [2024-07-24 23:17:27.337291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:7550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.012 [2024-07-24 23:17:27.337311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.012 [2024-07-24 23:17:27.345897] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.012 [2024-07-24 23:17:27.346153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:21948 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.012 [2024-07-24 23:17:27.346172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.012 [2024-07-24 23:17:27.354730] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.012 [2024-07-24 23:17:27.354979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.012 [2024-07-24 23:17:27.354999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.012 [2024-07-24 23:17:27.363584] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.012 [2024-07-24 23:17:27.363847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:11532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.012 [2024-07-24 23:17:27.363866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:12 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.012 [2024-07-24 23:17:27.372503] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.012 [2024-07-24 23:17:27.372768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:20793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.012 [2024-07-24 23:17:27.372787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.012 [2024-07-24 23:17:27.381349] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.012 [2024-07-24 23:17:27.381590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:6592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.012 [2024-07-24 23:17:27.381609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.012 [2024-07-24 23:17:27.390274] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.012 [2024-07-24 23:17:27.390529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:5487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.012 [2024-07-24 23:17:27.390548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.012 [2024-07-24 23:17:27.399246] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.012 [2024-07-24 23:17:27.399487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:2436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.012 [2024-07-24 23:17:27.399507] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.012 [2024-07-24 23:17:27.408122] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.012 [2024-07-24 23:17:27.408379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:9369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.012 [2024-07-24 23:17:27.408398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.012 [2024-07-24 23:17:27.417248] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.012 [2024-07-24 23:17:27.417514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:17251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.012 [2024-07-24 23:17:27.417533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.012 [2024-07-24 23:17:27.426239] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.012 [2024-07-24 23:17:27.426526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.012 [2024-07-24 23:17:27.426546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.012 [2024-07-24 23:17:27.435201] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.012 [2024-07-24 23:17:27.435447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.012 [2024-07-24 23:17:27.435466] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.271 [2024-07-24 23:17:27.444309] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.271 [2024-07-24 23:17:27.444554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:20062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.271 [2024-07-24 23:17:27.444573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.271 [2024-07-24 23:17:27.453197] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.271 [2024-07-24 23:17:27.453464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:15938 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.271 [2024-07-24 23:17:27.453483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.271 [2024-07-24 23:17:27.462006] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.271 [2024-07-24 23:17:27.462279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:4378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.271 [2024-07-24 23:17:27.462298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.271 [2024-07-24 23:17:27.470913] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.271 [2024-07-24 23:17:27.471175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11067 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:31:55.271 [2024-07-24 23:17:27.471194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.271 [2024-07-24 23:17:27.479889] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.271 [2024-07-24 23:17:27.480168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:21492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.271 [2024-07-24 23:17:27.480187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.271 [2024-07-24 23:17:27.488774] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.271 [2024-07-24 23:17:27.489029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:23079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.271 [2024-07-24 23:17:27.489048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.271 [2024-07-24 23:17:27.497612] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.271 [2024-07-24 23:17:27.497860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:11023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.271 [2024-07-24 23:17:27.497879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.271 [2024-07-24 23:17:27.506423] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.271 [2024-07-24 23:17:27.506663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 
nsid:1 lba:13203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.271 [2024-07-24 23:17:27.506682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.271 [2024-07-24 23:17:27.515356] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.271 [2024-07-24 23:17:27.515614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:22397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.271 [2024-07-24 23:17:27.515636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.271 [2024-07-24 23:17:27.524234] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.271 [2024-07-24 23:17:27.524481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:19726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.271 [2024-07-24 23:17:27.524500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.271 [2024-07-24 23:17:27.533040] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.271 [2024-07-24 23:17:27.533288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:11817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.271 [2024-07-24 23:17:27.533307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.271 [2024-07-24 23:17:27.541929] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.271 [2024-07-24 23:17:27.542179] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.271 [2024-07-24 23:17:27.542198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.271 [2024-07-24 23:17:27.550806] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.271 [2024-07-24 23:17:27.551055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13768 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.271 [2024-07-24 23:17:27.551074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.271 [2024-07-24 23:17:27.559694] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.271 [2024-07-24 23:17:27.559964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:8481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.271 [2024-07-24 23:17:27.559983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.271 [2024-07-24 23:17:27.568592] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.271 [2024-07-24 23:17:27.568842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:12238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.271 [2024-07-24 23:17:27.568862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.271 [2024-07-24 23:17:27.577427] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.271 
[2024-07-24 23:17:27.577681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:11481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.272 [2024-07-24 23:17:27.577700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.272 [2024-07-24 23:17:27.586329] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.272 [2024-07-24 23:17:27.586572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23637 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.272 [2024-07-24 23:17:27.586592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.272 [2024-07-24 23:17:27.595446] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.272 [2024-07-24 23:17:27.595692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:23231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.272 [2024-07-24 23:17:27.595711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.272 [2024-07-24 23:17:27.604403] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.272 [2024-07-24 23:17:27.604668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:18710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.272 [2024-07-24 23:17:27.604688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.272 [2024-07-24 23:17:27.613284] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.272 [2024-07-24 23:17:27.613535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:23041 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.272 [2024-07-24 23:17:27.613554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.272 [2024-07-24 23:17:27.622078] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.272 [2024-07-24 23:17:27.622332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:22355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.272 [2024-07-24 23:17:27.622351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.272 [2024-07-24 23:17:27.630909] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.272 [2024-07-24 23:17:27.631160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:24176 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.272 [2024-07-24 23:17:27.631179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.272 [2024-07-24 23:17:27.639783] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.272 [2024-07-24 23:17:27.640050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:4297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.272 [2024-07-24 23:17:27.640069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.272 [2024-07-24 23:17:27.648645] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.272 [2024-07-24 23:17:27.648900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:23795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.272 [2024-07-24 23:17:27.648920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.272 [2024-07-24 23:17:27.657535] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.272 [2024-07-24 23:17:27.657802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.272 [2024-07-24 23:17:27.657833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.272 [2024-07-24 23:17:27.666370] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.272 [2024-07-24 23:17:27.666618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.272 [2024-07-24 23:17:27.666636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.272 [2024-07-24 23:17:27.675464] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.272 [2024-07-24 23:17:27.675718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:8543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.272 [2024-07-24 23:17:27.675738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:31:55.272 [2024-07-24 23:17:27.684443] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.272 [2024-07-24 23:17:27.684711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:21702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.272 [2024-07-24 23:17:27.684734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.272 [2024-07-24 23:17:27.693411] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.272 [2024-07-24 23:17:27.693666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:13520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.272 [2024-07-24 23:17:27.693686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.530 [2024-07-24 23:17:27.702533] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.530 [2024-07-24 23:17:27.702802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.530 [2024-07-24 23:17:27.702833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.530 [2024-07-24 23:17:27.711730] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.530 [2024-07-24 23:17:27.711982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:9982 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.530 [2024-07-24 23:17:27.712002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:12 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.530 [2024-07-24 23:17:27.720594] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.530 [2024-07-24 23:17:27.720861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:24508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.530 [2024-07-24 23:17:27.720880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.530 [2024-07-24 23:17:27.729499] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.530 [2024-07-24 23:17:27.729771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:12062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.530 [2024-07-24 23:17:27.729791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.530 [2024-07-24 23:17:27.738340] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.530 [2024-07-24 23:17:27.738586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:24642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.530 [2024-07-24 23:17:27.738605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.530 [2024-07-24 23:17:27.747144] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.530 [2024-07-24 23:17:27.747405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:2431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.530 [2024-07-24 23:17:27.747427] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.530 [2024-07-24 23:17:27.756071] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.530 [2024-07-24 23:17:27.756333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:1353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.530 [2024-07-24 23:17:27.756352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.530 [2024-07-24 23:17:27.764943] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.530 [2024-07-24 23:17:27.765194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:10619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.531 [2024-07-24 23:17:27.765213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.531 [2024-07-24 23:17:27.773823] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.531 [2024-07-24 23:17:27.774079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.531 [2024-07-24 23:17:27.774098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.531 [2024-07-24 23:17:27.782704] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.531 [2024-07-24 23:17:27.782976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.531 [2024-07-24 23:17:27.782996] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.531 [2024-07-24 23:17:27.791544] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.531 [2024-07-24 23:17:27.791821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:4111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.531 [2024-07-24 23:17:27.791841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.531 [2024-07-24 23:17:27.800390] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.531 [2024-07-24 23:17:27.800637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:19335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.531 [2024-07-24 23:17:27.800657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.531 [2024-07-24 23:17:27.809188] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.531 [2024-07-24 23:17:27.809435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:22604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.531 [2024-07-24 23:17:27.809454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.531 [2024-07-24 23:17:27.818029] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.531 [2024-07-24 23:17:27.818274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15090 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:31:55.531 [2024-07-24 23:17:27.818293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.531 [2024-07-24 23:17:27.826905] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.531 [2024-07-24 23:17:27.827172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:16355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.531 [2024-07-24 23:17:27.827192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.531 [2024-07-24 23:17:27.835783] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.531 [2024-07-24 23:17:27.836033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:20368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.531 [2024-07-24 23:17:27.836052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.531 [2024-07-24 23:17:27.844599] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.531 [2024-07-24 23:17:27.844840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:12905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.531 [2024-07-24 23:17:27.844859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.531 [2024-07-24 23:17:27.853430] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.531 [2024-07-24 23:17:27.853678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 
nsid:1 lba:22019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.531 [2024-07-24 23:17:27.853697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.531 [2024-07-24 23:17:27.862259] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.531 [2024-07-24 23:17:27.862506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:22473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.531 [2024-07-24 23:17:27.862525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.531 [2024-07-24 23:17:27.871128] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.531 [2024-07-24 23:17:27.871393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:13463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.531 [2024-07-24 23:17:27.871414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.531 [2024-07-24 23:17:27.880014] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.531 [2024-07-24 23:17:27.880266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:6060 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.531 [2024-07-24 23:17:27.880286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.531 [2024-07-24 23:17:27.888791] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.531 [2024-07-24 23:17:27.889039] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.531 [2024-07-24 23:17:27.889058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.531 [2024-07-24 23:17:27.897815] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.531 [2024-07-24 23:17:27.898063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25587 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.531 [2024-07-24 23:17:27.898083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.531 [2024-07-24 23:17:27.906588] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.531 [2024-07-24 23:17:27.906841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:23628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.531 [2024-07-24 23:17:27.906860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.531 [2024-07-24 23:17:27.915414] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.531 [2024-07-24 23:17:27.915662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:6687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.531 [2024-07-24 23:17:27.915681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.531 [2024-07-24 23:17:27.924310] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.531 
[2024-07-24 23:17:27.924574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:3019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.531 [2024-07-24 23:17:27.924594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.531 [2024-07-24 23:17:27.933309] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.531 [2024-07-24 23:17:27.933560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.531 [2024-07-24 23:17:27.933580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.531 [2024-07-24 23:17:27.942278] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.531 [2024-07-24 23:17:27.942531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:1293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.531 [2024-07-24 23:17:27.942551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.531 [2024-07-24 23:17:27.951277] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.531 [2024-07-24 23:17:27.951543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:25085 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.531 [2024-07-24 23:17:27.951563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.789 [2024-07-24 23:17:27.960332] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.789 [2024-07-24 23:17:27.960588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:11970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.789 [2024-07-24 23:17:27.960607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.789 [2024-07-24 23:17:27.969314] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.789 [2024-07-24 23:17:27.969561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:15315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.789 [2024-07-24 23:17:27.969580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.789 [2024-07-24 23:17:27.978186] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.789 [2024-07-24 23:17:27.978453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:2025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.789 [2024-07-24 23:17:27.978476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.789 [2024-07-24 23:17:27.987057] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.789 [2024-07-24 23:17:27.987305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:15284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.789 [2024-07-24 23:17:27.987324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.789 [2024-07-24 23:17:27.995915] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.789 [2024-07-24 23:17:27.996166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:23742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.790 [2024-07-24 23:17:27.996185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.790 [2024-07-24 23:17:28.004686] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.790 [2024-07-24 23:17:28.004942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.790 [2024-07-24 23:17:28.004962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.790 [2024-07-24 23:17:28.013583] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.790 [2024-07-24 23:17:28.013831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.790 [2024-07-24 23:17:28.013850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.790 [2024-07-24 23:17:28.022490] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.790 [2024-07-24 23:17:28.022757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:6621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.790 [2024-07-24 23:17:28.022776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:31:55.790 [2024-07-24 23:17:28.031330] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.790 [2024-07-24 23:17:28.031599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:25367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.790 [2024-07-24 23:17:28.031618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.790 [2024-07-24 23:17:28.040193] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.790 [2024-07-24 23:17:28.040442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:5344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.790 [2024-07-24 23:17:28.040462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.790 [2024-07-24 23:17:28.048995] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.790 [2024-07-24 23:17:28.049243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.790 [2024-07-24 23:17:28.049262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.790 [2024-07-24 23:17:28.057820] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.790 [2024-07-24 23:17:28.058069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:837 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.790 [2024-07-24 23:17:28.058089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:12 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.790 [2024-07-24 23:17:28.066687] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.790 [2024-07-24 23:17:28.066959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:6356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.790 [2024-07-24 23:17:28.066979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.790 [2024-07-24 23:17:28.075531] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.790 [2024-07-24 23:17:28.075799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:7926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.790 [2024-07-24 23:17:28.075819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.790 [2024-07-24 23:17:28.084372] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.790 [2024-07-24 23:17:28.084622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:18963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.790 [2024-07-24 23:17:28.084641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.790 [2024-07-24 23:17:28.093250] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.790 [2024-07-24 23:17:28.093516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:21734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.790 [2024-07-24 23:17:28.093537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.790 [2024-07-24 23:17:28.102079] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.790 [2024-07-24 23:17:28.102327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:15535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.790 [2024-07-24 23:17:28.102347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.790 [2024-07-24 23:17:28.110924] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.790 [2024-07-24 23:17:28.111192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:8054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.790 [2024-07-24 23:17:28.111211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.790 [2024-07-24 23:17:28.119782] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.790 [2024-07-24 23:17:28.120031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11195 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.790 [2024-07-24 23:17:28.120050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.790 [2024-07-24 23:17:28.128572] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.790 [2024-07-24 23:17:28.128824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.790 [2024-07-24 23:17:28.128843] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.790 [2024-07-24 23:17:28.137395] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.790 [2024-07-24 23:17:28.137643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:15116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.790 [2024-07-24 23:17:28.137662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.790 [2024-07-24 23:17:28.146235] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.790 [2024-07-24 23:17:28.146486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:14051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.790 [2024-07-24 23:17:28.146505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.790 [2024-07-24 23:17:28.155131] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.790 [2024-07-24 23:17:28.155394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:3869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.790 [2024-07-24 23:17:28.155413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.790 [2024-07-24 23:17:28.164010] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.790 [2024-07-24 23:17:28.164251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6069 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:31:55.790 [2024-07-24 23:17:28.164270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.790 [2024-07-24 23:17:28.172844] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.790 [2024-07-24 23:17:28.173081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:12293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.790 [2024-07-24 23:17:28.173100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.790 [2024-07-24 23:17:28.181703] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.790 [2024-07-24 23:17:28.181952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:10993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.790 [2024-07-24 23:17:28.181971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.790 [2024-07-24 23:17:28.190783] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.790 [2024-07-24 23:17:28.191028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:14183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.790 [2024-07-24 23:17:28.191048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.790 [2024-07-24 23:17:28.199799] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.790 [2024-07-24 23:17:28.200079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 
nsid:1 lba:9161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.790 [2024-07-24 23:17:28.200098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.790 [2024-07-24 23:17:28.208828] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.790 [2024-07-24 23:17:28.209078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:14689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.790 [2024-07-24 23:17:28.209100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.790 [2024-07-24 23:17:28.217801] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:55.790 [2024-07-24 23:17:28.218054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:18562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.790 [2024-07-24 23:17:28.218073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.048 [2024-07-24 23:17:28.226798] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:56.048 [2024-07-24 23:17:28.227064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:13435 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.048 [2024-07-24 23:17:28.227083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.048 [2024-07-24 23:17:28.235670] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:56.048 [2024-07-24 23:17:28.235939] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.048 [2024-07-24 23:17:28.235959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.048 [2024-07-24 23:17:28.244553] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:56.048 [2024-07-24 23:17:28.244804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.048 [2024-07-24 23:17:28.244824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.048 [2024-07-24 23:17:28.253415] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:56.048 [2024-07-24 23:17:28.253682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:12715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.048 [2024-07-24 23:17:28.253701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.048 [2024-07-24 23:17:28.262287] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:56.048 [2024-07-24 23:17:28.262533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:3302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.048 [2024-07-24 23:17:28.262552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.048 [2024-07-24 23:17:28.271149] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:56.048 
[2024-07-24 23:17:28.271412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:16372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.048 [2024-07-24 23:17:28.271431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.048 [2024-07-24 23:17:28.280031] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:56.048 [2024-07-24 23:17:28.280277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.048 [2024-07-24 23:17:28.280296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.048 [2024-07-24 23:17:28.288817] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:56.048 [2024-07-24 23:17:28.289072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:16304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.048 [2024-07-24 23:17:28.289091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.048 [2024-07-24 23:17:28.297667] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:56.048 [2024-07-24 23:17:28.297921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:13289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.048 [2024-07-24 23:17:28.297940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.048 [2024-07-24 23:17:28.306535] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:56.048 [2024-07-24 23:17:28.306802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:19987 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.048 [2024-07-24 23:17:28.306822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.048 [2024-07-24 23:17:28.315389] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:56.048 [2024-07-24 23:17:28.315646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:18657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.048 [2024-07-24 23:17:28.315666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.048 [2024-07-24 23:17:28.324267] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:56.048 [2024-07-24 23:17:28.324504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:23811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.048 [2024-07-24 23:17:28.324523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.048 [2024-07-24 23:17:28.333069] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:56.048 [2024-07-24 23:17:28.333314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.048 [2024-07-24 23:17:28.333332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.048 [2024-07-24 23:17:28.341839] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:56.048 [2024-07-24 23:17:28.342092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:3212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.048 [2024-07-24 23:17:28.342111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.048 [2024-07-24 23:17:28.350722] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:56.048 [2024-07-24 23:17:28.350987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.048 [2024-07-24 23:17:28.351007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.048 [2024-07-24 23:17:28.359564] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:56.048 [2024-07-24 23:17:28.359845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.048 [2024-07-24 23:17:28.359864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.048 [2024-07-24 23:17:28.368391] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d9c280) with pdu=0x2000190fb8b8 00:31:56.048 [2024-07-24 23:17:28.368616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:13859 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.048 [2024-07-24 23:17:28.368635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:31:56.048 00:31:56.048 Latency(us) 00:31:56.048 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:56.048 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:56.048 nvme0n1 : 2.00 28685.68 112.05 0.00 0.00 4455.19 2031.62 9489.61 00:31:56.048 =================================================================================================================== 00:31:56.048 Total : 28685.68 112.05 0.00 0.00 4455.19 2031.62 9489.61 00:31:56.048 0 00:31:56.048 23:17:28 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:31:56.048 23:17:28 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:31:56.048 23:17:28 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:31:56.048 23:17:28 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:31:56.048 | .driver_specific 00:31:56.049 | .nvme_error 00:31:56.049 | .status_code 00:31:56.049 | .command_transient_transport_error' 00:31:56.306 23:17:28 -- host/digest.sh@71 -- # (( 225 > 0 )) 00:31:56.306 23:17:28 -- host/digest.sh@73 -- # killprocess 3401103 00:31:56.306 23:17:28 -- common/autotest_common.sh@926 -- # '[' -z 3401103 ']' 00:31:56.306 23:17:28 -- common/autotest_common.sh@930 -- # kill -0 3401103 00:31:56.306 23:17:28 -- common/autotest_common.sh@931 -- # uname 00:31:56.306 23:17:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:56.306 23:17:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3401103 00:31:56.306 23:17:28 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:31:56.306 23:17:28 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:31:56.306 23:17:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3401103' 00:31:56.306 killing process with pid 3401103 00:31:56.306 23:17:28 -- common/autotest_common.sh@945 -- # kill 3401103 00:31:56.307 Received shutdown signal, test 
time was about 2.000000 seconds 00:31:56.307 00:31:56.307 Latency(us) 00:31:56.307 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:56.307 =================================================================================================================== 00:31:56.307 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:56.307 23:17:28 -- common/autotest_common.sh@950 -- # wait 3401103 00:31:56.565 23:17:28 -- host/digest.sh@114 -- # run_bperf_err randwrite 131072 16 00:31:56.565 23:17:28 -- host/digest.sh@54 -- # local rw bs qd 00:31:56.565 23:17:28 -- host/digest.sh@56 -- # rw=randwrite 00:31:56.565 23:17:28 -- host/digest.sh@56 -- # bs=131072 00:31:56.565 23:17:28 -- host/digest.sh@56 -- # qd=16 00:31:56.565 23:17:28 -- host/digest.sh@58 -- # bperfpid=3401662 00:31:56.565 23:17:28 -- host/digest.sh@60 -- # waitforlisten 3401662 /var/tmp/bperf.sock 00:31:56.565 23:17:28 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:31:56.565 23:17:28 -- common/autotest_common.sh@819 -- # '[' -z 3401662 ']' 00:31:56.565 23:17:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:56.565 23:17:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:56.565 23:17:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:56.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:56.565 23:17:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:56.565 23:17:28 -- common/autotest_common.sh@10 -- # set +x 00:31:56.565 [2024-07-24 23:17:28.839395] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:31:56.565 [2024-07-24 23:17:28.839447] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3401662 ] 00:31:56.565 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:56.565 Zero copy mechanism will not be used. 00:31:56.565 EAL: No free 2048 kB hugepages reported on node 1 00:31:56.565 [2024-07-24 23:17:28.909784] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:56.565 [2024-07-24 23:17:28.946662] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:57.498 23:17:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:57.498 23:17:29 -- common/autotest_common.sh@852 -- # return 0 00:31:57.498 23:17:29 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:31:57.498 23:17:29 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:31:57.498 23:17:29 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:31:57.498 23:17:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:57.498 23:17:29 -- common/autotest_common.sh@10 -- # set +x 00:31:57.498 23:17:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:57.498 23:17:29 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:57.498 23:17:29 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:57.756 nvme0n1 00:31:58.014 23:17:30 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 
00:31:58.014 23:17:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:58.014 23:17:30 -- common/autotest_common.sh@10 -- # set +x 00:31:58.014 23:17:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:58.014 23:17:30 -- host/digest.sh@69 -- # bperf_py perform_tests 00:31:58.014 23:17:30 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:58.014 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:58.014 Zero copy mechanism will not be used. 00:31:58.014 Running I/O for 2 seconds... 00:31:58.014 [2024-07-24 23:17:30.308280] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.014 [2024-07-24 23:17:30.308657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.014 [2024-07-24 23:17:30.308684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:58.014 [2024-07-24 23:17:30.320500] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.014 [2024-07-24 23:17:30.320748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.014 [2024-07-24 23:17:30.320771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:58.014 [2024-07-24 23:17:30.328053] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.014 [2024-07-24 23:17:30.328168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.014 
[2024-07-24 23:17:30.328190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:58.014 [2024-07-24 23:17:30.334780] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.014 [2024-07-24 23:17:30.334888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.014 [2024-07-24 23:17:30.334912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:58.014 [2024-07-24 23:17:30.340704] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.014 [2024-07-24 23:17:30.340838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.014 [2024-07-24 23:17:30.340858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:58.014 [2024-07-24 23:17:30.346724] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.014 [2024-07-24 23:17:30.346799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.014 [2024-07-24 23:17:30.346830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:58.014 [2024-07-24 23:17:30.352253] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.014 [2024-07-24 23:17:30.352409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.014 [2024-07-24 23:17:30.352428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:58.014 [2024-07-24 23:17:30.357941] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.014 [2024-07-24 23:17:30.358230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.014 [2024-07-24 23:17:30.358250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:58.014 [2024-07-24 23:17:30.363062] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.014 [2024-07-24 23:17:30.363344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.014 [2024-07-24 23:17:30.363364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:58.014 [2024-07-24 23:17:30.368699] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.014 [2024-07-24 23:17:30.368832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.014 [2024-07-24 23:17:30.368852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:58.014 [2024-07-24 23:17:30.374129] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.014 [2024-07-24 23:17:30.374275] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.014 [2024-07-24 23:17:30.374294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:58.014 [2024-07-24 23:17:30.380048] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.014 [2024-07-24 23:17:30.380189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.014 [2024-07-24 23:17:30.380207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:58.014 [2024-07-24 23:17:30.385617] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.014 [2024-07-24 23:17:30.385800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.014 [2024-07-24 23:17:30.385819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:58.014 [2024-07-24 23:17:30.390992] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.014 [2024-07-24 23:17:30.391140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.014 [2024-07-24 23:17:30.391159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:58.014 [2024-07-24 23:17:30.396838] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 
00:31:58.014 [2024-07-24 23:17:30.397102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.014 [2024-07-24 23:17:30.397122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:58.014 [2024-07-24 23:17:30.402224] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.015 [2024-07-24 23:17:30.402564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.015 [2024-07-24 23:17:30.402583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:58.015 [2024-07-24 23:17:30.408027] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.015 [2024-07-24 23:17:30.408321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.015 [2024-07-24 23:17:30.408340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:58.015 [2024-07-24 23:17:30.414033] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.015 [2024-07-24 23:17:30.414216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.015 [2024-07-24 23:17:30.414235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:58.015 [2024-07-24 23:17:30.420761] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.015 [2024-07-24 23:17:30.420960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.015 [2024-07-24 23:17:30.420981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:58.015 [2024-07-24 23:17:30.427535] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.015 [2024-07-24 23:17:30.427678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.015 [2024-07-24 23:17:30.427697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:58.015 [2024-07-24 23:17:30.434370] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.015 [2024-07-24 23:17:30.434558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.015 [2024-07-24 23:17:30.434576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:58.015 [2024-07-24 23:17:30.442387] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.015 [2024-07-24 23:17:30.442549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.015 [2024-07-24 23:17:30.442568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:58.273 [2024-07-24 23:17:30.450461] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.273 [2024-07-24 23:17:30.450614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.273 [2024-07-24 23:17:30.450633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:58.273 [2024-07-24 23:17:30.458223] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.273 [2024-07-24 23:17:30.458516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.273 [2024-07-24 23:17:30.458535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:58.273 [2024-07-24 23:17:30.466061] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.273 [2024-07-24 23:17:30.466334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.273 [2024-07-24 23:17:30.466353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:58.273 [2024-07-24 23:17:30.474061] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.273 [2024-07-24 23:17:30.474287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.273 [2024-07-24 23:17:30.474307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:31:58.273 [2024-07-24 23:17:30.481982] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.273 [2024-07-24 23:17:30.482231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.273 [2024-07-24 23:17:30.482252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:58.273 [2024-07-24 23:17:30.490456] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.273 [2024-07-24 23:17:30.490686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.273 [2024-07-24 23:17:30.490706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:58.273 [2024-07-24 23:17:30.498295] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.273 [2024-07-24 23:17:30.498439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.273 [2024-07-24 23:17:30.498459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:58.273 [2024-07-24 23:17:30.506564] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.273 [2024-07-24 23:17:30.506830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.273 [2024-07-24 23:17:30.506853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:58.273 [2024-07-24 23:17:30.514565] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.273 [2024-07-24 23:17:30.514762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.273 [2024-07-24 23:17:30.514781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:58.273 [2024-07-24 23:17:30.522995] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.273 [2024-07-24 23:17:30.523320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.273 [2024-07-24 23:17:30.523340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:58.273 [2024-07-24 23:17:30.529792] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.274 [2024-07-24 23:17:30.530000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.274 [2024-07-24 23:17:30.530020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:58.274 [2024-07-24 23:17:30.537462] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.274 [2024-07-24 23:17:30.537632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.274 [2024-07-24 23:17:30.537650] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:58.274 [2024-07-24 23:17:30.545561] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.274 [2024-07-24 23:17:30.545729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.274 [2024-07-24 23:17:30.545764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:58.274 [2024-07-24 23:17:30.552767] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.274 [2024-07-24 23:17:30.552937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.274 [2024-07-24 23:17:30.552955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:58.274 [2024-07-24 23:17:30.560059] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.274 [2024-07-24 23:17:30.560253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.274 [2024-07-24 23:17:30.560272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:58.274 [2024-07-24 23:17:30.567895] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.274 [2024-07-24 23:17:30.568188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:58.274 [2024-07-24 23:17:30.568208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:58.274 [2024-07-24 23:17:30.575462] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.274 [2024-07-24 23:17:30.575675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.274 [2024-07-24 23:17:30.575696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:58.274 [2024-07-24 23:17:30.582479] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.274 [2024-07-24 23:17:30.582810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.274 [2024-07-24 23:17:30.582829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:58.274 [2024-07-24 23:17:30.590166] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.274 [2024-07-24 23:17:30.590412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.274 [2024-07-24 23:17:30.590432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:58.274 [2024-07-24 23:17:30.597609] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.274 [2024-07-24 23:17:30.597852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.274 [2024-07-24 23:17:30.597872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:58.274 [2024-07-24 23:17:30.605384] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.274 [2024-07-24 23:17:30.605566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.274 [2024-07-24 23:17:30.605584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:58.274 [2024-07-24 23:17:30.613281] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.274 [2024-07-24 23:17:30.613454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.274 [2024-07-24 23:17:30.613472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:58.274 [2024-07-24 23:17:30.621123] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.274 [2024-07-24 23:17:30.621322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.274 [2024-07-24 23:17:30.621341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:58.274 [2024-07-24 23:17:30.628783] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.274 [2024-07-24 23:17:30.628993] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.274 [2024-07-24 23:17:30.629013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:58.274 [2024-07-24 23:17:30.636675] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.274 [2024-07-24 23:17:30.636834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.274 [2024-07-24 23:17:30.636853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:58.274 [2024-07-24 23:17:30.644577] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.274 [2024-07-24 23:17:30.644928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.274 [2024-07-24 23:17:30.644949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:58.274 [2024-07-24 23:17:30.652522] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.274 [2024-07-24 23:17:30.652660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.274 [2024-07-24 23:17:30.652678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:58.274 [2024-07-24 23:17:30.660895] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 
00:31:58.274 [2024-07-24 23:17:30.661115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.274 [2024-07-24 23:17:30.661135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:58.274 [2024-07-24 23:17:30.669064] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.274 [2024-07-24 23:17:30.669224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.274 [2024-07-24 23:17:30.669243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:58.274 [2024-07-24 23:17:30.677324] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.274 [2024-07-24 23:17:30.677470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.274 [2024-07-24 23:17:30.677489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:58.274 [2024-07-24 23:17:30.685478] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.274 [2024-07-24 23:17:30.685587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.274 [2024-07-24 23:17:30.685605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:58.274 [2024-07-24 23:17:30.693363] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.274 [2024-07-24 23:17:30.693556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.274 [2024-07-24 23:17:30.693574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:58.274 [2024-07-24 23:17:30.701585] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.274 [2024-07-24 23:17:30.701753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.274 [2024-07-24 23:17:30.701772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:58.533 [2024-07-24 23:17:30.710043] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.533 [2024-07-24 23:17:30.710358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.533 [2024-07-24 23:17:30.710391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:58.533 [2024-07-24 23:17:30.717932] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.533 [2024-07-24 23:17:30.718201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.533 [2024-07-24 23:17:30.718220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:58.533 [2024-07-24 23:17:30.726205] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.533 [2024-07-24 23:17:30.726402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.533 [2024-07-24 23:17:30.726420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:58.533 [2024-07-24 23:17:30.733137] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.533 [2024-07-24 23:17:30.733241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.533 [2024-07-24 23:17:30.733259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:58.533 [2024-07-24 23:17:30.738341] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.533 [2024-07-24 23:17:30.738457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.533 [2024-07-24 23:17:30.738476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:58.533 [2024-07-24 23:17:30.743766] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.533 [2024-07-24 23:17:30.743903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.533 [2024-07-24 23:17:30.743922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:31:58.533 [2024-07-24 23:17:30.749154] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.533 [2024-07-24 23:17:30.749333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.533 [2024-07-24 23:17:30.749352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:58.533 [2024-07-24 23:17:30.754667] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.533 [2024-07-24 23:17:30.754800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.533 [2024-07-24 23:17:30.754819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:58.533 [2024-07-24 23:17:30.760528] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.533 [2024-07-24 23:17:30.760745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.533 [2024-07-24 23:17:30.760764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:58.533 [2024-07-24 23:17:30.765655] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.533 [2024-07-24 23:17:30.765812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.533 [2024-07-24 23:17:30.765831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:58.533 [2024-07-24 23:17:30.771148] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.533 [2024-07-24 23:17:30.771356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.533 [2024-07-24 23:17:30.771375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:58.533 [2024-07-24 23:17:30.776728] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.533 [2024-07-24 23:17:30.776839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.533 [2024-07-24 23:17:30.776858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:58.533 [2024-07-24 23:17:30.781962] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.533 [2024-07-24 23:17:30.782104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.533 [2024-07-24 23:17:30.782123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:58.533 [2024-07-24 23:17:30.787328] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.533 [2024-07-24 23:17:30.787434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.533 [2024-07-24 23:17:30.787453] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:58.533 [2024-07-24 23:17:30.793571] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.533 [2024-07-24 23:17:30.793755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.533 [2024-07-24 23:17:30.793774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:58.533 [2024-07-24 23:17:30.799402] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.534 [2024-07-24 23:17:30.799525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.534 [2024-07-24 23:17:30.799545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:58.534 [2024-07-24 23:17:30.804709] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.534 [2024-07-24 23:17:30.805045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.534 [2024-07-24 23:17:30.805066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:58.534 [2024-07-24 23:17:30.809946] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.534 [2024-07-24 23:17:30.810127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:58.534 [2024-07-24 23:17:30.810146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:58.534 [2024-07-24 23:17:30.815175] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.534 [2024-07-24 23:17:30.815293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.534 [2024-07-24 23:17:30.815312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:58.534 [2024-07-24 23:17:30.820371] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.534 [2024-07-24 23:17:30.820524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.534 [2024-07-24 23:17:30.820544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:58.534 [2024-07-24 23:17:30.826111] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.534 [2024-07-24 23:17:30.826190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.534 [2024-07-24 23:17:30.826210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:58.534 [2024-07-24 23:17:30.830911] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.534 [2024-07-24 23:17:30.831009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6560 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.534 [2024-07-24 23:17:30.831028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:58.534 [2024-07-24 23:17:30.836936] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.534 [2024-07-24 23:17:30.837214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.534 [2024-07-24 23:17:30.837234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:58.534 [2024-07-24 23:17:30.842686] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.534 [2024-07-24 23:17:30.842854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.534 [2024-07-24 23:17:30.842872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:58.534 [2024-07-24 23:17:30.848462] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.534 [2024-07-24 23:17:30.848649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.534 [2024-07-24 23:17:30.848669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:58.534 [2024-07-24 23:17:30.854605] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.534 [2024-07-24 23:17:30.854700] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.534 [2024-07-24 23:17:30.854725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:58.534 [2024-07-24 23:17:30.860943] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.534 [2024-07-24 23:17:30.861069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.534 [2024-07-24 23:17:30.861091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:58.534 [2024-07-24 23:17:30.866383] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.534 [2024-07-24 23:17:30.866523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.534 [2024-07-24 23:17:30.866543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:58.534 [2024-07-24 23:17:30.871740] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.534 [2024-07-24 23:17:30.871840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.534 [2024-07-24 23:17:30.871860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:58.534 [2024-07-24 23:17:30.876658] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.534 [2024-07-24 
23:17:30.876791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.534 [2024-07-24 23:17:30.876810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:58.534 [2024-07-24 23:17:30.882160] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.534 [2024-07-24 23:17:30.882359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.534 [2024-07-24 23:17:30.882378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:58.534 [2024-07-24 23:17:30.887235] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.534 [2024-07-24 23:17:30.887340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.534 [2024-07-24 23:17:30.887359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:58.534 [2024-07-24 23:17:30.892572] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.534 [2024-07-24 23:17:30.892688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.534 [2024-07-24 23:17:30.892707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:58.534 [2024-07-24 23:17:30.898079] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) 
with pdu=0x2000190fef90 00:31:58.534 [2024-07-24 23:17:30.898256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.534 [2024-07-24 23:17:30.898276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:58.534 [2024-07-24 23:17:30.903386] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.534 [2024-07-24 23:17:30.903596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.534 [2024-07-24 23:17:30.903616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:58.534 [2024-07-24 23:17:30.909574] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.534 [2024-07-24 23:17:30.909727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.534 [2024-07-24 23:17:30.909746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:58.534 [2024-07-24 23:17:30.915574] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.534 [2024-07-24 23:17:30.915725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.534 [2024-07-24 23:17:30.915744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:58.534 [2024-07-24 23:17:30.920800] tcp.c:2034:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.534 [2024-07-24 23:17:30.920907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.534 [2024-07-24 23:17:30.920925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:58.534 [2024-07-24 23:17:30.926828] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.534 [2024-07-24 23:17:30.926981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.534 [2024-07-24 23:17:30.927001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:58.534 [2024-07-24 23:17:30.932412] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.534 [2024-07-24 23:17:30.932583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.534 [2024-07-24 23:17:30.932602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:58.534 [2024-07-24 23:17:30.938642] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.534 [2024-07-24 23:17:30.938840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.535 [2024-07-24 23:17:30.938859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:58.535 [2024-07-24 
23:17:30.944465] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.535 [2024-07-24 23:17:30.944631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.535 [2024-07-24 23:17:30.944650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:58.535 [2024-07-24 23:17:30.950016] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.535 [2024-07-24 23:17:30.950135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.535 [2024-07-24 23:17:30.950154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:58.535 [2024-07-24 23:17:30.955246] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.535 [2024-07-24 23:17:30.955391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.535 [2024-07-24 23:17:30.955410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:58.535 [2024-07-24 23:17:30.960062] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.535 [2024-07-24 23:17:30.960189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.535 [2024-07-24 23:17:30.960208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:31:58.793 [2024-07-24 23:17:30.965287] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.793 [2024-07-24 23:17:30.965442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.793 [2024-07-24 23:17:30.965461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:58.793 [2024-07-24 23:17:30.971132] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.793 [2024-07-24 23:17:30.971434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.793 [2024-07-24 23:17:30.971454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:58.793 [2024-07-24 23:17:30.977001] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.793 [2024-07-24 23:17:30.977198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.793 [2024-07-24 23:17:30.977217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:58.793 [2024-07-24 23:17:30.983223] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.793 [2024-07-24 23:17:30.983490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.793 [2024-07-24 23:17:30.983511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:58.793 [2024-07-24 23:17:30.988866] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.793 [2024-07-24 23:17:30.989055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.793 [2024-07-24 23:17:30.989074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:58.793 [2024-07-24 23:17:30.994523] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.793 [2024-07-24 23:17:30.994722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.793 [2024-07-24 23:17:30.994740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:58.793 [2024-07-24 23:17:31.000171] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.793 [2024-07-24 23:17:31.000305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.793 [2024-07-24 23:17:31.000323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:58.793 [2024-07-24 23:17:31.005082] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.793 [2024-07-24 23:17:31.005250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.794 [2024-07-24 23:17:31.005272] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:58.794 [2024-07-24 23:17:31.010569] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.794 [2024-07-24 23:17:31.010681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.794 [2024-07-24 23:17:31.010701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:58.794 [2024-07-24 23:17:31.016354] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.794 [2024-07-24 23:17:31.016641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.794 [2024-07-24 23:17:31.016661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:58.794 [2024-07-24 23:17:31.022028] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.794 [2024-07-24 23:17:31.022276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.794 [2024-07-24 23:17:31.022296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:58.794 [2024-07-24 23:17:31.027908] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.794 [2024-07-24 23:17:31.028115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:58.794 [2024-07-24 23:17:31.028135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:58.794 [2024-07-24 23:17:31.032945] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.794 [2024-07-24 23:17:31.033032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.794 [2024-07-24 23:17:31.033051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:58.794 [2024-07-24 23:17:31.038825] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.794 [2024-07-24 23:17:31.038960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.794 [2024-07-24 23:17:31.038981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:58.794 [2024-07-24 23:17:31.044294] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.794 [2024-07-24 23:17:31.044431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.794 [2024-07-24 23:17:31.044449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:58.794 [2024-07-24 23:17:31.050317] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.794 [2024-07-24 23:17:31.050423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1824 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.794 [2024-07-24 23:17:31.050442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:58.794 [2024-07-24 23:17:31.056681] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.794 [2024-07-24 23:17:31.056772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.794 [2024-07-24 23:17:31.056791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:58.794 [2024-07-24 23:17:31.062640] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.794 [2024-07-24 23:17:31.062794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.794 [2024-07-24 23:17:31.062814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:58.794 [2024-07-24 23:17:31.069227] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.794 [2024-07-24 23:17:31.069411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.794 [2024-07-24 23:17:31.069430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:58.794 [2024-07-24 23:17:31.076161] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.794 [2024-07-24 23:17:31.076337] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.794 [2024-07-24 23:17:31.076356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:58.794 [2024-07-24 23:17:31.083941] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.794 [2024-07-24 23:17:31.084107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.794 [2024-07-24 23:17:31.084125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:58.794 [2024-07-24 23:17:31.091765] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.794 [2024-07-24 23:17:31.091999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.794 [2024-07-24 23:17:31.092019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:58.794 [2024-07-24 23:17:31.099170] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.794 [2024-07-24 23:17:31.099313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.794 [2024-07-24 23:17:31.099332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:58.794 [2024-07-24 23:17:31.107252] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.794 [2024-07-24 23:17:31.107445] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.794 [2024-07-24 23:17:31.107463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:58.794 [2024-07-24 23:17:31.115063] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.794 [2024-07-24 23:17:31.115266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.794 [2024-07-24 23:17:31.115293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:58.794 [2024-07-24 23:17:31.123081] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.794 [2024-07-24 23:17:31.123244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.794 [2024-07-24 23:17:31.123263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:58.794 [2024-07-24 23:17:31.131159] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.794 [2024-07-24 23:17:31.131417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.794 [2024-07-24 23:17:31.131437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:58.794 [2024-07-24 23:17:31.139036] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with 
pdu=0x2000190fef90 00:31:58.794 [2024-07-24 23:17:31.139258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.794 [2024-07-24 23:17:31.139278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:58.794 [2024-07-24 23:17:31.146255] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.794 [2024-07-24 23:17:31.146451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.794 [2024-07-24 23:17:31.146471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:58.794 [2024-07-24 23:17:31.154105] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.794 [2024-07-24 23:17:31.154312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.794 [2024-07-24 23:17:31.154331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:58.794 [2024-07-24 23:17:31.162068] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.794 [2024-07-24 23:17:31.162291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.794 [2024-07-24 23:17:31.162311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:58.794 [2024-07-24 23:17:31.170316] tcp.c:2034:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.794 [2024-07-24 23:17:31.170500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.794 [2024-07-24 23:17:31.170519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:58.794 [2024-07-24 23:17:31.178865] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.794 [2024-07-24 23:17:31.179083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.794 [2024-07-24 23:17:31.179103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:58.794 [2024-07-24 23:17:31.187093] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.794 [2024-07-24 23:17:31.187361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.794 [2024-07-24 23:17:31.187384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:58.794 [2024-07-24 23:17:31.195148] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.795 [2024-07-24 23:17:31.195369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.795 [2024-07-24 23:17:31.195389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:58.795 [2024-07-24 
23:17:31.203082] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.795 [2024-07-24 23:17:31.203309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.795 [2024-07-24 23:17:31.203330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:58.795 [2024-07-24 23:17:31.211482] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.795 [2024-07-24 23:17:31.211700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.795 [2024-07-24 23:17:31.211727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:58.795 [2024-07-24 23:17:31.219843] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:58.795 [2024-07-24 23:17:31.220018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.795 [2024-07-24 23:17:31.220038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:59.053 [2024-07-24 23:17:31.227637] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.053 [2024-07-24 23:17:31.227827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.053 [2024-07-24 23:17:31.227847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0061 p:0 m:0 dnr:0 00:31:59.053 [2024-07-24 23:17:31.235284] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.053 [2024-07-24 23:17:31.235429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.053 [2024-07-24 23:17:31.235448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.053 [2024-07-24 23:17:31.243129] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.053 [2024-07-24 23:17:31.243371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.053 [2024-07-24 23:17:31.243391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:59.053 [2024-07-24 23:17:31.250777] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.053 [2024-07-24 23:17:31.251042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.053 [2024-07-24 23:17:31.251061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:59.053 [2024-07-24 23:17:31.259089] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.053 [2024-07-24 23:17:31.259452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.053 [2024-07-24 23:17:31.259473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:59.053 [2024-07-24 23:17:31.266821] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.053 [2024-07-24 23:17:31.266980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.053 [2024-07-24 23:17:31.267000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.053 [2024-07-24 23:17:31.274974] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.053 [2024-07-24 23:17:31.275149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.053 [2024-07-24 23:17:31.275169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:59.053 [2024-07-24 23:17:31.283151] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.053 [2024-07-24 23:17:31.283281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.053 [2024-07-24 23:17:31.283301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:59.054 [2024-07-24 23:17:31.291116] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.054 [2024-07-24 23:17:31.291375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.054 [2024-07-24 23:17:31.291395] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:59.054 [2024-07-24 23:17:31.299083] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.054 [2024-07-24 23:17:31.299232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.054 [2024-07-24 23:17:31.299250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.054 [2024-07-24 23:17:31.306829] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.054 [2024-07-24 23:17:31.306997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.054 [2024-07-24 23:17:31.307016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:59.054 [2024-07-24 23:17:31.315065] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.054 [2024-07-24 23:17:31.315198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.054 [2024-07-24 23:17:31.315217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:59.054 [2024-07-24 23:17:31.323143] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.054 [2024-07-24 23:17:31.323417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:59.054 [2024-07-24 23:17:31.323437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:59.054 [2024-07-24 23:17:31.331376] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.054 [2024-07-24 23:17:31.331523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.054 [2024-07-24 23:17:31.331543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.054 [2024-07-24 23:17:31.338779] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.054 [2024-07-24 23:17:31.338909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.054 [2024-07-24 23:17:31.338928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:59.054 [2024-07-24 23:17:31.344337] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.054 [2024-07-24 23:17:31.344486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.054 [2024-07-24 23:17:31.344505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:59.054 [2024-07-24 23:17:31.349933] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.054 [2024-07-24 23:17:31.350075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11648 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.054 [2024-07-24 23:17:31.350094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:59.054 [2024-07-24 23:17:31.355530] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.054 [2024-07-24 23:17:31.355634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.054 [2024-07-24 23:17:31.355653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.054 [2024-07-24 23:17:31.360981] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.054 [2024-07-24 23:17:31.361180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.054 [2024-07-24 23:17:31.361199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:59.054 [2024-07-24 23:17:31.365892] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.054 [2024-07-24 23:17:31.366047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.054 [2024-07-24 23:17:31.366066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:59.054 [2024-07-24 23:17:31.371813] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.054 [2024-07-24 23:17:31.372050] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.054 [2024-07-24 23:17:31.372070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:59.054 [2024-07-24 23:17:31.377312] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.054 [2024-07-24 23:17:31.377533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.054 [2024-07-24 23:17:31.377556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.054 [2024-07-24 23:17:31.383380] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.054 [2024-07-24 23:17:31.383535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.054 [2024-07-24 23:17:31.383554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:59.054 [2024-07-24 23:17:31.389485] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.054 [2024-07-24 23:17:31.389610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.054 [2024-07-24 23:17:31.389629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:59.054 [2024-07-24 23:17:31.395047] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.054 [2024-07-24 23:17:31.395185] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.054 [2024-07-24 23:17:31.395204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:59.054 [2024-07-24 23:17:31.400174] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.054 [2024-07-24 23:17:31.400285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.054 [2024-07-24 23:17:31.400304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.054 [2024-07-24 23:17:31.405506] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.054 [2024-07-24 23:17:31.405667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.054 [2024-07-24 23:17:31.405685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:59.054 [2024-07-24 23:17:31.410675] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.054 [2024-07-24 23:17:31.410806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.054 [2024-07-24 23:17:31.410825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:59.054 [2024-07-24 23:17:31.416575] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with 
pdu=0x2000190fef90 00:31:59.054 [2024-07-24 23:17:31.416885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.054 [2024-07-24 23:17:31.416905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:59.054 [2024-07-24 23:17:31.421736] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.054 [2024-07-24 23:17:31.421976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.054 [2024-07-24 23:17:31.421995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.054 [2024-07-24 23:17:31.427562] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.054 [2024-07-24 23:17:31.427736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.054 [2024-07-24 23:17:31.427755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:59.054 [2024-07-24 23:17:31.433189] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.054 [2024-07-24 23:17:31.433340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.054 [2024-07-24 23:17:31.433358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:59.054 [2024-07-24 23:17:31.439417] tcp.c:2034:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.054 [2024-07-24 23:17:31.439559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.054 [2024-07-24 23:17:31.439578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:59.054 [2024-07-24 23:17:31.445869] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.054 [2024-07-24 23:17:31.445952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.054 [2024-07-24 23:17:31.445971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.054 [2024-07-24 23:17:31.452080] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.054 [2024-07-24 23:17:31.452250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.054 [2024-07-24 23:17:31.452269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:59.055 [2024-07-24 23:17:31.457595] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.055 [2024-07-24 23:17:31.457727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.055 [2024-07-24 23:17:31.457746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:59.055 [2024-07-24 
23:17:31.462859] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.055 [2024-07-24 23:17:31.463093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.055 [2024-07-24 23:17:31.463112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:59.055 [2024-07-24 23:17:31.468223] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.055 [2024-07-24 23:17:31.468382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.055 [2024-07-24 23:17:31.468401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.055 [2024-07-24 23:17:31.473350] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.055 [2024-07-24 23:17:31.473634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.055 [2024-07-24 23:17:31.473654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:59.055 [2024-07-24 23:17:31.479453] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.055 [2024-07-24 23:17:31.479559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.055 [2024-07-24 23:17:31.479578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0041 p:0 m:0 dnr:0 00:31:59.317 [2024-07-24 23:17:31.485390] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.317 [2024-07-24 23:17:31.485463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.317 [2024-07-24 23:17:31.485482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:59.317 [2024-07-24 23:17:31.491673] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.317 [2024-07-24 23:17:31.491819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.317 [2024-07-24 23:17:31.491838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.317 [2024-07-24 23:17:31.497852] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.317 [2024-07-24 23:17:31.497960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.317 [2024-07-24 23:17:31.497979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:59.317 [2024-07-24 23:17:31.504272] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.317 [2024-07-24 23:17:31.504377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.317 [2024-07-24 23:17:31.504396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:59.317 [2024-07-24 23:17:31.509988] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.318 [2024-07-24 23:17:31.510142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.318 [2024-07-24 23:17:31.510161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:59.318 [2024-07-24 23:17:31.515200] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.318 [2024-07-24 23:17:31.515319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.318 [2024-07-24 23:17:31.515338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.318 [2024-07-24 23:17:31.520337] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.318 [2024-07-24 23:17:31.520525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.318 [2024-07-24 23:17:31.520545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:59.318 [2024-07-24 23:17:31.525256] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.318 [2024-07-24 23:17:31.525397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.318 [2024-07-24 23:17:31.525419] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:59.318 [2024-07-24 23:17:31.530367] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.318 [2024-07-24 23:17:31.530511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.318 [2024-07-24 23:17:31.530530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:59.318 [2024-07-24 23:17:31.535420] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.318 [2024-07-24 23:17:31.535528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.318 [2024-07-24 23:17:31.535547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.318 [2024-07-24 23:17:31.541197] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.318 [2024-07-24 23:17:31.541340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.318 [2024-07-24 23:17:31.541359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:59.318 [2024-07-24 23:17:31.546247] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.318 [2024-07-24 23:17:31.546371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:59.318 [2024-07-24 23:17:31.546390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:59.318 [2024-07-24 23:17:31.552084] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.318 [2024-07-24 23:17:31.552274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.318 [2024-07-24 23:17:31.552293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:59.318 [2024-07-24 23:17:31.558441] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.318 [2024-07-24 23:17:31.558585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.318 [2024-07-24 23:17:31.558604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.318 [2024-07-24 23:17:31.565980] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.318 [2024-07-24 23:17:31.566148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.318 [2024-07-24 23:17:31.566167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:59.318 [2024-07-24 23:17:31.572114] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.318 [2024-07-24 23:17:31.572197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7744 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.318 [2024-07-24 23:17:31.572216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:59.318 [2024-07-24 23:17:31.578566] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.318 [2024-07-24 23:17:31.578741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.318 [2024-07-24 23:17:31.578760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:59.318 [2024-07-24 23:17:31.583808] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.318 [2024-07-24 23:17:31.583908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.318 [2024-07-24 23:17:31.583927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.318 [2024-07-24 23:17:31.590113] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.318 [2024-07-24 23:17:31.590252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.318 [2024-07-24 23:17:31.590271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:59.318 [2024-07-24 23:17:31.595658] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.318 [2024-07-24 23:17:31.596084] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.318 [2024-07-24 23:17:31.596104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:59.318 [2024-07-24 23:17:31.601665] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.318 [2024-07-24 23:17:31.601919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.318 [2024-07-24 23:17:31.601939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:59.318 [2024-07-24 23:17:31.606343] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.318 [2024-07-24 23:17:31.606526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.318 [2024-07-24 23:17:31.606545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.318 [2024-07-24 23:17:31.610948] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.318 [2024-07-24 23:17:31.611161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.318 [2024-07-24 23:17:31.611180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:59.318 [2024-07-24 23:17:31.616730] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.318 [2024-07-24 23:17:31.616868] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.318 [2024-07-24 23:17:31.616887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:59.318 [2024-07-24 23:17:31.622157] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.318 [2024-07-24 23:17:31.622332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.318 [2024-07-24 23:17:31.622351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:59.318 [2024-07-24 23:17:31.628567] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.318 [2024-07-24 23:17:31.628741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.318 [2024-07-24 23:17:31.628760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.318 [2024-07-24 23:17:31.634322] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.318 [2024-07-24 23:17:31.634534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.318 [2024-07-24 23:17:31.634554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:59.318 [2024-07-24 23:17:31.640264] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with 
pdu=0x2000190fef90 00:31:59.318 [2024-07-24 23:17:31.640456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.318 [2024-07-24 23:17:31.640475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:59.318 [2024-07-24 23:17:31.645694] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.318 [2024-07-24 23:17:31.645907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.318 [2024-07-24 23:17:31.645927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:59.318 [2024-07-24 23:17:31.651019] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.318 [2024-07-24 23:17:31.651232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.318 [2024-07-24 23:17:31.651252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.318 [2024-07-24 23:17:31.655874] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.318 [2024-07-24 23:17:31.656060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.318 [2024-07-24 23:17:31.656079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:59.318 [2024-07-24 23:17:31.661200] tcp.c:2034:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.319 [2024-07-24 23:17:31.661363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.319 [2024-07-24 23:17:31.661381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:59.319 [2024-07-24 23:17:31.667535] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.319 [2024-07-24 23:17:31.667682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.319 [2024-07-24 23:17:31.667701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:59.319 [2024-07-24 23:17:31.673005] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.319 [2024-07-24 23:17:31.673109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.319 [2024-07-24 23:17:31.673131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.319 [2024-07-24 23:17:31.677905] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.319 [2024-07-24 23:17:31.678011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.319 [2024-07-24 23:17:31.678030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:59.319 [2024-07-24 
23:17:31.682789] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.319 [2024-07-24 23:17:31.682889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.319 [2024-07-24 23:17:31.682908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:59.319 [2024-07-24 23:17:31.687599] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.319 [2024-07-24 23:17:31.687771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.319 [2024-07-24 23:17:31.687790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:59.319 [2024-07-24 23:17:31.693572] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.319 [2024-07-24 23:17:31.693797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.319 [2024-07-24 23:17:31.693817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.319 [2024-07-24 23:17:31.699209] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.319 [2024-07-24 23:17:31.699366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.319 [2024-07-24 23:17:31.699384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0021 p:0 m:0 dnr:0 00:31:59.319 [2024-07-24 23:17:31.705084] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.319 [2024-07-24 23:17:31.705162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.319 [2024-07-24 23:17:31.705181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:59.319 [2024-07-24 23:17:31.710950] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.319 [2024-07-24 23:17:31.711118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.319 [2024-07-24 23:17:31.711137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:59.319 [2024-07-24 23:17:31.717679] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.319 [2024-07-24 23:17:31.717941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.319 [2024-07-24 23:17:31.717961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.319 [2024-07-24 23:17:31.728137] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.319 [2024-07-24 23:17:31.728628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.319 [2024-07-24 23:17:31.728647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:59.319 [2024-07-24 23:17:31.741031] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.319 [2024-07-24 23:17:31.741309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.319 [2024-07-24 23:17:31.741328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:59.608 [2024-07-24 23:17:31.749769] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.608 [2024-07-24 23:17:31.750040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.608 [2024-07-24 23:17:31.750060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:59.608 [2024-07-24 23:17:31.756843] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.608 [2024-07-24 23:17:31.757048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.608 [2024-07-24 23:17:31.757069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.608 [2024-07-24 23:17:31.762534] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.608 [2024-07-24 23:17:31.762676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.608 [2024-07-24 23:17:31.762696] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:59.608 [2024-07-24 23:17:31.769328] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.608 [2024-07-24 23:17:31.769825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.608 [2024-07-24 23:17:31.769845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:59.608 [2024-07-24 23:17:31.775727] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.608 [2024-07-24 23:17:31.775853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.608 [2024-07-24 23:17:31.775872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:59.608 [2024-07-24 23:17:31.781414] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.608 [2024-07-24 23:17:31.781570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.608 [2024-07-24 23:17:31.781589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.608 [2024-07-24 23:17:31.786665] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.608 [2024-07-24 23:17:31.786815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:59.608 [2024-07-24 23:17:31.786834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:59.608 [2024-07-24 23:17:31.791622] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.608 [2024-07-24 23:17:31.791754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.608 [2024-07-24 23:17:31.791774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:59.608 [2024-07-24 23:17:31.797843] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.608 [2024-07-24 23:17:31.798003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.608 [2024-07-24 23:17:31.798022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:59.608 [2024-07-24 23:17:31.803875] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.608 [2024-07-24 23:17:31.804078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.608 [2024-07-24 23:17:31.804097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.608 [2024-07-24 23:17:31.818472] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.608 [2024-07-24 23:17:31.818894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9632 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.608 [2024-07-24 23:17:31.818915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:59.608 [2024-07-24 23:17:31.827811] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.608 [2024-07-24 23:17:31.827991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.608 [2024-07-24 23:17:31.828012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:59.608 [2024-07-24 23:17:31.835390] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.608 [2024-07-24 23:17:31.835534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.608 [2024-07-24 23:17:31.835553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:59.608 [2024-07-24 23:17:31.841048] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.608 [2024-07-24 23:17:31.841207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.608 [2024-07-24 23:17:31.841229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.608 [2024-07-24 23:17:31.846894] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.608 [2024-07-24 23:17:31.847029] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.608 [2024-07-24 23:17:31.847049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:59.608 [2024-07-24 23:17:31.853369] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.608 [2024-07-24 23:17:31.853443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.608 [2024-07-24 23:17:31.853465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:59.608 [2024-07-24 23:17:31.859626] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.608 [2024-07-24 23:17:31.859737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.608 [2024-07-24 23:17:31.859756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:59.608 [2024-07-24 23:17:31.865054] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.608 [2024-07-24 23:17:31.865211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.608 [2024-07-24 23:17:31.865230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.608 [2024-07-24 23:17:31.869967] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 
00:31:59.608 [2024-07-24 23:17:31.870089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.608 [2024-07-24 23:17:31.870108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:59.608 [2024-07-24 23:17:31.875078] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.608 [2024-07-24 23:17:31.875214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.608 [2024-07-24 23:17:31.875233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:59.608 [2024-07-24 23:17:31.880570] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.608 [2024-07-24 23:17:31.880674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.608 [2024-07-24 23:17:31.880692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:59.608 [2024-07-24 23:17:31.885350] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.608 [2024-07-24 23:17:31.885432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.608 [2024-07-24 23:17:31.885451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.608 [2024-07-24 23:17:31.890530] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.608 [2024-07-24 23:17:31.890624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.608 [2024-07-24 23:17:31.890643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:59.608 [2024-07-24 23:17:31.895844] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.608 [2024-07-24 23:17:31.896375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.608 [2024-07-24 23:17:31.896395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:59.608 [2024-07-24 23:17:31.910385] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.608 [2024-07-24 23:17:31.910614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.608 [2024-07-24 23:17:31.910633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:59.608 [2024-07-24 23:17:31.919276] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.608 [2024-07-24 23:17:31.919476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.608 [2024-07-24 23:17:31.919503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.608 [2024-07-24 23:17:31.925764] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.608 [2024-07-24 23:17:31.925982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.608 [2024-07-24 23:17:31.926001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:59.608 [2024-07-24 23:17:31.933954] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.608 [2024-07-24 23:17:31.934114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.608 [2024-07-24 23:17:31.934134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:59.608 [2024-07-24 23:17:31.941193] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.608 [2024-07-24 23:17:31.941370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.608 [2024-07-24 23:17:31.941389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:59.608 [2024-07-24 23:17:31.949265] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.608 [2024-07-24 23:17:31.949410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.608 [2024-07-24 23:17:31.949429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:31:59.608 [2024-07-24 23:17:31.957006] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.608 [2024-07-24 23:17:31.957185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.608 [2024-07-24 23:17:31.957204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:59.608 [2024-07-24 23:17:31.963362] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.608 [2024-07-24 23:17:31.963607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.608 [2024-07-24 23:17:31.963627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:59.608 [2024-07-24 23:17:31.968759] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.608 [2024-07-24 23:17:31.968926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.608 [2024-07-24 23:17:31.968944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:59.608 [2024-07-24 23:17:31.974354] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.608 [2024-07-24 23:17:31.974580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.608 [2024-07-24 23:17:31.974600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.608 [2024-07-24 23:17:31.979926] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.608 [2024-07-24 23:17:31.980083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.608 [2024-07-24 23:17:31.980102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:59.608 [2024-07-24 23:17:31.985561] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.608 [2024-07-24 23:17:31.985769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.608 [2024-07-24 23:17:31.985788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:59.608 [2024-07-24 23:17:31.991308] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.608 [2024-07-24 23:17:31.991476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.608 [2024-07-24 23:17:31.991495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:59.608 [2024-07-24 23:17:31.996847] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.608 [2024-07-24 23:17:31.997031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.608 [2024-07-24 23:17:31.997049] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.608 [2024-07-24 23:17:32.002044] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.608 [2024-07-24 23:17:32.002192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.608 [2024-07-24 23:17:32.002211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:59.608 [2024-07-24 23:17:32.008105] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.608 [2024-07-24 23:17:32.008253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.608 [2024-07-24 23:17:32.008272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:59.608 [2024-07-24 23:17:32.013662] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.608 [2024-07-24 23:17:32.013917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.608 [2024-07-24 23:17:32.013937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:59.608 [2024-07-24 23:17:32.019654] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.608 [2024-07-24 23:17:32.019862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:59.608 [2024-07-24 23:17:32.019884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.608 [2024-07-24 23:17:32.025791] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.609 [2024-07-24 23:17:32.025940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.609 [2024-07-24 23:17:32.025959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:59.609 [2024-07-24 23:17:32.031349] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.609 [2024-07-24 23:17:32.031524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.609 [2024-07-24 23:17:32.031543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:59.866 [2024-07-24 23:17:32.037876] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.866 [2024-07-24 23:17:32.037980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.866 [2024-07-24 23:17:32.037998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:59.866 [2024-07-24 23:17:32.043073] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.866 [2024-07-24 23:17:32.043163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.866 [2024-07-24 23:17:32.043182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.866 [2024-07-24 23:17:32.048447] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.866 [2024-07-24 23:17:32.048703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.866 [2024-07-24 23:17:32.048728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:59.866 [2024-07-24 23:17:32.054624] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.866 [2024-07-24 23:17:32.054819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.866 [2024-07-24 23:17:32.054838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:59.866 [2024-07-24 23:17:32.060199] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.866 [2024-07-24 23:17:32.060388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.866 [2024-07-24 23:17:32.060407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:59.866 [2024-07-24 23:17:32.066855] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.866 [2024-07-24 23:17:32.067060] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.866 [2024-07-24 23:17:32.067080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.866 [2024-07-24 23:17:32.072942] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.866 [2024-07-24 23:17:32.073065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.866 [2024-07-24 23:17:32.073084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:59.866 [2024-07-24 23:17:32.078126] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.866 [2024-07-24 23:17:32.078225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.867 [2024-07-24 23:17:32.078244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:59.867 [2024-07-24 23:17:32.083833] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.867 [2024-07-24 23:17:32.083942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.867 [2024-07-24 23:17:32.083961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:59.867 [2024-07-24 23:17:32.089065] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 
00:31:59.867 [2024-07-24 23:17:32.089198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.867 [2024-07-24 23:17:32.089217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.867 [2024-07-24 23:17:32.093871] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.867 [2024-07-24 23:17:32.094007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.867 [2024-07-24 23:17:32.094027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:59.867 [2024-07-24 23:17:32.098759] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.867 [2024-07-24 23:17:32.099001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.867 [2024-07-24 23:17:32.099021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:59.867 [2024-07-24 23:17:32.104445] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.867 [2024-07-24 23:17:32.104611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.867 [2024-07-24 23:17:32.104630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:59.867 [2024-07-24 23:17:32.111772] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.867 [2024-07-24 23:17:32.111946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.867 [2024-07-24 23:17:32.111964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.867 [2024-07-24 23:17:32.118878] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.867 [2024-07-24 23:17:32.119026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.867 [2024-07-24 23:17:32.119044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:59.867 [2024-07-24 23:17:32.127238] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.867 [2024-07-24 23:17:32.127463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.867 [2024-07-24 23:17:32.127484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:59.867 [2024-07-24 23:17:32.136579] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.867 [2024-07-24 23:17:32.137096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.867 [2024-07-24 23:17:32.137116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:59.867 [2024-07-24 23:17:32.144762] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.867 [2024-07-24 23:17:32.144989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.867 [2024-07-24 23:17:32.145010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.867 [2024-07-24 23:17:32.152467] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.867 [2024-07-24 23:17:32.152966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.867 [2024-07-24 23:17:32.152986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:59.867 [2024-07-24 23:17:32.160727] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.867 [2024-07-24 23:17:32.161020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.867 [2024-07-24 23:17:32.161040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:59.867 [2024-07-24 23:17:32.168809] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.867 [2024-07-24 23:17:32.169047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.867 [2024-07-24 23:17:32.169067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:31:59.867 [2024-07-24 23:17:32.177186] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.867 [2024-07-24 23:17:32.177440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.867 [2024-07-24 23:17:32.177460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.867 [2024-07-24 23:17:32.185272] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.867 [2024-07-24 23:17:32.185497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.867 [2024-07-24 23:17:32.185517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:59.867 [2024-07-24 23:17:32.192667] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.867 [2024-07-24 23:17:32.192833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.867 [2024-07-24 23:17:32.192855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:59.867 [2024-07-24 23:17:32.200705] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.867 [2024-07-24 23:17:32.200908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.867 [2024-07-24 23:17:32.200927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:59.867 [2024-07-24 23:17:32.209072] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.867 [2024-07-24 23:17:32.209272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.867 [2024-07-24 23:17:32.209290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.867 [2024-07-24 23:17:32.215993] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.867 [2024-07-24 23:17:32.216184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.867 [2024-07-24 23:17:32.216202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:59.867 [2024-07-24 23:17:32.223506] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.867 [2024-07-24 23:17:32.223738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.867 [2024-07-24 23:17:32.223774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:59.867 [2024-07-24 23:17:32.231937] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.867 [2024-07-24 23:17:32.232114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.867 [2024-07-24 23:17:32.232133] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:59.867 [2024-07-24 23:17:32.240174] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.867 [2024-07-24 23:17:32.240395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.867 [2024-07-24 23:17:32.240415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.867 [2024-07-24 23:17:32.249178] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.867 [2024-07-24 23:17:32.249393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.867 [2024-07-24 23:17:32.249413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:59.867 [2024-07-24 23:17:32.257892] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.867 [2024-07-24 23:17:32.258078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.867 [2024-07-24 23:17:32.258096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:59.867 [2024-07-24 23:17:32.266646] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.867 [2024-07-24 23:17:32.266861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:59.867 [2024-07-24 23:17:32.266881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:59.867 [2024-07-24 23:17:32.274737] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.867 [2024-07-24 23:17:32.274933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.868 [2024-07-24 23:17:32.274951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.868 [2024-07-24 23:17:32.283326] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.868 [2024-07-24 23:17:32.283486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.868 [2024-07-24 23:17:32.283505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:59.868 [2024-07-24 23:17:32.290125] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c06260) with pdu=0x2000190fef90 00:31:59.868 [2024-07-24 23:17:32.290370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.868 [2024-07-24 23:17:32.290391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:59.868 00:31:59.868 Latency(us) 00:31:59.868 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:59.868 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:31:59.868 nvme0n1 : 2.00 4682.66 585.33 0.00 0.00 3412.38 1966.08 
16777.22 00:31:59.868 =================================================================================================================== 00:31:59.868 Total : 4682.66 585.33 0.00 0.00 3412.38 1966.08 16777.22 00:31:59.868 0 00:32:00.124 23:17:32 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:32:00.124 23:17:32 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:32:00.124 23:17:32 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:32:00.124 23:17:32 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:32:00.124 | .driver_specific 00:32:00.124 | .nvme_error 00:32:00.124 | .status_code 00:32:00.124 | .command_transient_transport_error' 00:32:00.124 23:17:32 -- host/digest.sh@71 -- # (( 302 > 0 )) 00:32:00.124 23:17:32 -- host/digest.sh@73 -- # killprocess 3401662 00:32:00.124 23:17:32 -- common/autotest_common.sh@926 -- # '[' -z 3401662 ']' 00:32:00.124 23:17:32 -- common/autotest_common.sh@930 -- # kill -0 3401662 00:32:00.124 23:17:32 -- common/autotest_common.sh@931 -- # uname 00:32:00.124 23:17:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:00.124 23:17:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3401662 00:32:00.124 23:17:32 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:32:00.124 23:17:32 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:32:00.124 23:17:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3401662' 00:32:00.124 killing process with pid 3401662 00:32:00.124 23:17:32 -- common/autotest_common.sh@945 -- # kill 3401662 00:32:00.124 Received shutdown signal, test time was about 2.000000 seconds 00:32:00.124 00:32:00.124 Latency(us) 00:32:00.124 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:00.124 =================================================================================================================== 
00:32:00.124 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:00.124 23:17:32 -- common/autotest_common.sh@950 -- # wait 3401662 00:32:00.381 23:17:32 -- host/digest.sh@115 -- # killprocess 3399534 00:32:00.381 23:17:32 -- common/autotest_common.sh@926 -- # '[' -z 3399534 ']' 00:32:00.381 23:17:32 -- common/autotest_common.sh@930 -- # kill -0 3399534 00:32:00.381 23:17:32 -- common/autotest_common.sh@931 -- # uname 00:32:00.381 23:17:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:00.381 23:17:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3399534 00:32:00.381 23:17:32 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:32:00.381 23:17:32 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:32:00.381 23:17:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3399534' 00:32:00.381 killing process with pid 3399534 00:32:00.381 23:17:32 -- common/autotest_common.sh@945 -- # kill 3399534 00:32:00.381 23:17:32 -- common/autotest_common.sh@950 -- # wait 3399534 00:32:00.639 00:32:00.639 real 0m16.518s 00:32:00.639 user 0m30.857s 00:32:00.639 sys 0m5.088s 00:32:00.639 23:17:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:00.639 23:17:32 -- common/autotest_common.sh@10 -- # set +x 00:32:00.639 ************************************ 00:32:00.639 END TEST nvmf_digest_error 00:32:00.639 ************************************ 00:32:00.639 23:17:32 -- host/digest.sh@138 -- # trap - SIGINT SIGTERM EXIT 00:32:00.639 23:17:32 -- host/digest.sh@139 -- # nvmftestfini 00:32:00.639 23:17:32 -- nvmf/common.sh@476 -- # nvmfcleanup 00:32:00.639 23:17:32 -- nvmf/common.sh@116 -- # sync 00:32:00.639 23:17:32 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:32:00.639 23:17:32 -- nvmf/common.sh@119 -- # set +e 00:32:00.639 23:17:32 -- nvmf/common.sh@120 -- # for i in {1..20} 00:32:00.639 23:17:32 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:32:00.639 rmmod nvme_tcp 00:32:00.639 rmmod 
nvme_fabrics 00:32:00.639 rmmod nvme_keyring 00:32:00.639 23:17:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:32:00.639 23:17:33 -- nvmf/common.sh@123 -- # set -e 00:32:00.639 23:17:33 -- nvmf/common.sh@124 -- # return 0 00:32:00.639 23:17:33 -- nvmf/common.sh@477 -- # '[' -n 3399534 ']' 00:32:00.639 23:17:33 -- nvmf/common.sh@478 -- # killprocess 3399534 00:32:00.639 23:17:33 -- common/autotest_common.sh@926 -- # '[' -z 3399534 ']' 00:32:00.639 23:17:33 -- common/autotest_common.sh@930 -- # kill -0 3399534 00:32:00.639 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (3399534) - No such process 00:32:00.639 23:17:33 -- common/autotest_common.sh@953 -- # echo 'Process with pid 3399534 is not found' 00:32:00.639 Process with pid 3399534 is not found 00:32:00.639 23:17:33 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:32:00.639 23:17:33 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:32:00.639 23:17:33 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:32:00.639 23:17:33 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:00.639 23:17:33 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:32:00.639 23:17:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:00.639 23:17:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:00.639 23:17:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:03.167 23:17:35 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:32:03.167 00:32:03.167 real 0m40.821s 00:32:03.167 user 1m0.707s 00:32:03.167 sys 0m15.380s 00:32:03.167 23:17:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:03.167 23:17:35 -- common/autotest_common.sh@10 -- # set +x 00:32:03.167 ************************************ 00:32:03.167 END TEST nvmf_digest 00:32:03.167 ************************************ 00:32:03.167 23:17:35 -- nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]] 00:32:03.167 23:17:35 -- nvmf/nvmf.sh@115 -- # [[ 0 
-eq 1 ]] 00:32:03.167 23:17:35 -- nvmf/nvmf.sh@120 -- # [[ phy == phy ]] 00:32:03.167 23:17:35 -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:32:03.167 23:17:35 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:32:03.167 23:17:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:03.167 23:17:35 -- common/autotest_common.sh@10 -- # set +x 00:32:03.167 ************************************ 00:32:03.167 START TEST nvmf_bdevperf 00:32:03.167 ************************************ 00:32:03.168 23:17:35 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:32:03.168 * Looking for test storage... 00:32:03.168 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:03.168 23:17:35 -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:03.168 23:17:35 -- nvmf/common.sh@7 -- # uname -s 00:32:03.168 23:17:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:03.168 23:17:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:03.168 23:17:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:03.168 23:17:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:03.168 23:17:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:03.168 23:17:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:03.168 23:17:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:03.168 23:17:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:03.168 23:17:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:03.168 23:17:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:03.168 23:17:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:32:03.168 23:17:35 -- nvmf/common.sh@18 -- # 
NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:32:03.168 23:17:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:03.168 23:17:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:03.168 23:17:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:03.168 23:17:35 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:03.168 23:17:35 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:03.168 23:17:35 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:03.168 23:17:35 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:03.168 23:17:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:03.168 23:17:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:03.168 23:17:35 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:03.168 23:17:35 -- paths/export.sh@5 -- # export PATH 00:32:03.168 23:17:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:03.168 23:17:35 -- nvmf/common.sh@46 -- # : 0 00:32:03.168 23:17:35 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:32:03.168 23:17:35 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:32:03.168 23:17:35 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:32:03.168 23:17:35 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:03.168 23:17:35 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:03.168 23:17:35 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:32:03.168 23:17:35 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:32:03.168 23:17:35 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:32:03.168 23:17:35 -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:03.168 23:17:35 -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:03.168 23:17:35 -- host/bdevperf.sh@24 -- # 
nvmftestinit 00:32:03.168 23:17:35 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:32:03.168 23:17:35 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:03.168 23:17:35 -- nvmf/common.sh@436 -- # prepare_net_devs 00:32:03.168 23:17:35 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:32:03.168 23:17:35 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:32:03.168 23:17:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:03.168 23:17:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:03.168 23:17:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:03.168 23:17:35 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:32:03.168 23:17:35 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:32:03.168 23:17:35 -- nvmf/common.sh@284 -- # xtrace_disable 00:32:03.168 23:17:35 -- common/autotest_common.sh@10 -- # set +x 00:32:09.750 23:17:41 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:32:09.750 23:17:41 -- nvmf/common.sh@290 -- # pci_devs=() 00:32:09.750 23:17:41 -- nvmf/common.sh@290 -- # local -a pci_devs 00:32:09.750 23:17:41 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:32:09.750 23:17:41 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:32:09.750 23:17:41 -- nvmf/common.sh@292 -- # pci_drivers=() 00:32:09.750 23:17:41 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:32:09.750 23:17:41 -- nvmf/common.sh@294 -- # net_devs=() 00:32:09.750 23:17:41 -- nvmf/common.sh@294 -- # local -ga net_devs 00:32:09.750 23:17:41 -- nvmf/common.sh@295 -- # e810=() 00:32:09.750 23:17:41 -- nvmf/common.sh@295 -- # local -ga e810 00:32:09.750 23:17:41 -- nvmf/common.sh@296 -- # x722=() 00:32:09.750 23:17:41 -- nvmf/common.sh@296 -- # local -ga x722 00:32:09.750 23:17:41 -- nvmf/common.sh@297 -- # mlx=() 00:32:09.750 23:17:41 -- nvmf/common.sh@297 -- # local -ga mlx 00:32:09.750 23:17:41 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:09.750 23:17:41 -- 
nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:09.750 23:17:41 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:09.750 23:17:41 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:09.750 23:17:41 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:09.750 23:17:41 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:09.750 23:17:41 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:09.750 23:17:41 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:09.750 23:17:41 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:09.750 23:17:41 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:09.750 23:17:41 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:09.750 23:17:41 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:32:09.750 23:17:41 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:32:09.750 23:17:41 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:32:09.750 23:17:41 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:32:09.750 23:17:41 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:32:09.750 23:17:41 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:32:09.750 23:17:41 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:32:09.750 23:17:41 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:09.750 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:09.750 23:17:41 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:32:09.750 23:17:41 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:32:09.750 23:17:41 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:09.750 23:17:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:09.750 23:17:41 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:32:09.750 23:17:41 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:32:09.750 23:17:41 -- 
nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:09.750 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:09.750 23:17:41 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:32:09.750 23:17:41 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:32:09.750 23:17:41 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:09.750 23:17:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:09.750 23:17:41 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:32:09.750 23:17:41 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:32:09.750 23:17:41 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:32:09.750 23:17:41 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:32:09.750 23:17:41 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:32:09.750 23:17:41 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:09.750 23:17:41 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:32:09.750 23:17:41 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:09.750 23:17:41 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:09.750 Found net devices under 0000:af:00.0: cvl_0_0 00:32:09.750 23:17:41 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:32:09.750 23:17:41 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:32:09.750 23:17:41 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:09.750 23:17:41 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:32:09.750 23:17:41 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:09.750 23:17:41 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:09.750 Found net devices under 0000:af:00.1: cvl_0_1 00:32:09.750 23:17:41 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:32:09.750 23:17:41 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:32:09.750 23:17:41 -- nvmf/common.sh@402 -- # is_hw=yes 00:32:09.750 23:17:41 -- nvmf/common.sh@404 -- # [[ yes == yes 
]] 00:32:09.750 23:17:41 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:32:09.750 23:17:41 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:32:09.750 23:17:41 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:09.750 23:17:41 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:09.750 23:17:41 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:09.750 23:17:41 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:32:09.750 23:17:41 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:09.750 23:17:41 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:09.750 23:17:41 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:32:09.750 23:17:41 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:09.750 23:17:41 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:09.750 23:17:41 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:32:09.750 23:17:41 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:32:09.750 23:17:41 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:32:09.750 23:17:41 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:09.750 23:17:41 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:09.750 23:17:41 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:09.750 23:17:41 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:32:09.750 23:17:41 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:09.750 23:17:41 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:09.750 23:17:41 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:09.750 23:17:41 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:32:09.750 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:32:09.750 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms 00:32:09.750 00:32:09.750 --- 10.0.0.2 ping statistics --- 00:32:09.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:09.750 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:32:09.750 23:17:41 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:09.750 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:09.750 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:32:09.750 00:32:09.750 --- 10.0.0.1 ping statistics --- 00:32:09.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:09.750 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:32:09.750 23:17:41 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:09.750 23:17:41 -- nvmf/common.sh@410 -- # return 0 00:32:09.750 23:17:41 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:32:09.750 23:17:41 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:09.750 23:17:41 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:32:09.750 23:17:41 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:32:09.750 23:17:41 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:09.750 23:17:41 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:32:09.750 23:17:41 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:32:09.750 23:17:41 -- host/bdevperf.sh@25 -- # tgt_init 00:32:09.750 23:17:41 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:32:09.750 23:17:41 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:32:09.750 23:17:41 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:09.750 23:17:41 -- common/autotest_common.sh@10 -- # set +x 00:32:09.750 23:17:41 -- nvmf/common.sh@469 -- # nvmfpid=3405915 00:32:09.750 23:17:41 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:32:09.750 23:17:41 -- nvmf/common.sh@470 -- # waitforlisten 
3405915 00:32:09.750 23:17:41 -- common/autotest_common.sh@819 -- # '[' -z 3405915 ']' 00:32:09.750 23:17:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:09.750 23:17:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:09.750 23:17:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:09.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:09.750 23:17:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:09.750 23:17:41 -- common/autotest_common.sh@10 -- # set +x 00:32:09.750 [2024-07-24 23:17:41.934216] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:32:09.750 [2024-07-24 23:17:41.934259] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:09.750 EAL: No free 2048 kB hugepages reported on node 1 00:32:09.750 [2024-07-24 23:17:42.009660] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:09.750 [2024-07-24 23:17:42.047446] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:32:09.750 [2024-07-24 23:17:42.047559] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:09.750 [2024-07-24 23:17:42.047569] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:09.750 [2024-07-24 23:17:42.047577] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
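[Editor's sketch] The nvmf_tcp_init sequence above moves one port of the NIC (cvl_0_0) into a private network namespace so the target and initiator can talk over real TCP on a single host. A minimal standalone sketch of that topology, not SPDK's actual helper: the `run` dry-run wrapper is an addition so the sequence can be previewed without root (set APPLY=1 to really execute), while the interface names, addresses, and port are taken from this run.

```shell
#!/usr/bin/env bash
# Sketch of the two-interface topology nvmf_tcp_init builds above.
set -euo pipefail

NS=cvl_0_0_ns_spdk      # namespace hosting the nvmf target
TGT=cvl_0_0             # target-side port, moved into the namespace
INI=cvl_0_1             # initiator-side port, left in the root namespace

run() {                 # print each step; pass APPLY=1 to really execute
    echo "+ $*"
    [ "${APPLY:-0}" = 1 ] && "$@"
    return 0
}

run ip -4 addr flush "$TGT"
run ip -4 addr flush "$INI"
run ip netns add "$NS"
run ip link set "$TGT" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI"                      # initiator IP
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT"  # target IP
run ip link set "$INI" up
run ip netns exec "$NS" ip link set "$TGT" up
run ip netns exec "$NS" ip link set lo up
# let NVMe/TCP traffic (port 4420) in through the initiator-side interface
run iptables -I INPUT 1 -i "$INI" -p tcp --dport 4420 -j ACCEPT
```

The cross-namespace pings in the log (10.0.0.1 <-> 10.0.0.2) then verify the path before the target is started inside the namespace with `ip netns exec`.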
00:32:09.750 [2024-07-24 23:17:42.047621] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:32:09.750 [2024-07-24 23:17:42.047734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:32:09.750 [2024-07-24 23:17:42.047736] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:10.316 23:17:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:10.316 23:17:42 -- common/autotest_common.sh@852 -- # return 0 00:32:10.316 23:17:42 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:32:10.316 23:17:42 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:10.316 23:17:42 -- common/autotest_common.sh@10 -- # set +x 00:32:10.575 23:17:42 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:10.575 23:17:42 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:10.575 23:17:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:10.575 23:17:42 -- common/autotest_common.sh@10 -- # set +x 00:32:10.575 [2024-07-24 23:17:42.778016] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:10.575 23:17:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:10.575 23:17:42 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:10.575 23:17:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:10.575 23:17:42 -- common/autotest_common.sh@10 -- # set +x 00:32:10.575 Malloc0 00:32:10.575 23:17:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:10.575 23:17:42 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:10.575 23:17:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:10.575 23:17:42 -- common/autotest_common.sh@10 -- # set +x 00:32:10.575 23:17:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:10.575 23:17:42 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:10.575 23:17:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:10.575 23:17:42 -- common/autotest_common.sh@10 -- # set +x 00:32:10.575 23:17:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:10.575 23:17:42 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:10.575 23:17:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:10.575 23:17:42 -- common/autotest_common.sh@10 -- # set +x 00:32:10.575 [2024-07-24 23:17:42.836567] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:10.575 23:17:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:10.575 23:17:42 -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:32:10.575 23:17:42 -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:32:10.575 23:17:42 -- nvmf/common.sh@520 -- # config=() 00:32:10.575 23:17:42 -- nvmf/common.sh@520 -- # local subsystem config 00:32:10.575 23:17:42 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:32:10.575 23:17:42 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:32:10.575 { 00:32:10.575 "params": { 00:32:10.575 "name": "Nvme$subsystem", 00:32:10.575 "trtype": "$TEST_TRANSPORT", 00:32:10.575 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:10.575 "adrfam": "ipv4", 00:32:10.575 "trsvcid": "$NVMF_PORT", 00:32:10.575 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:10.575 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:10.575 "hdgst": ${hdgst:-false}, 00:32:10.575 "ddgst": ${ddgst:-false} 00:32:10.575 }, 00:32:10.575 "method": "bdev_nvme_attach_controller" 00:32:10.575 } 00:32:10.575 EOF 00:32:10.575 )") 00:32:10.575 23:17:42 -- nvmf/common.sh@542 -- # cat 00:32:10.575 23:17:42 -- nvmf/common.sh@544 -- # jq . 
00:32:10.575 23:17:42 -- nvmf/common.sh@545 -- # IFS=, 00:32:10.575 23:17:42 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:32:10.575 "params": { 00:32:10.575 "name": "Nvme1", 00:32:10.575 "trtype": "tcp", 00:32:10.575 "traddr": "10.0.0.2", 00:32:10.575 "adrfam": "ipv4", 00:32:10.575 "trsvcid": "4420", 00:32:10.575 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:10.575 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:10.575 "hdgst": false, 00:32:10.575 "ddgst": false 00:32:10.575 }, 00:32:10.575 "method": "bdev_nvme_attach_controller" 00:32:10.575 }' 00:32:10.575 [2024-07-24 23:17:42.887780] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:32:10.575 [2024-07-24 23:17:42.887828] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3406196 ] 00:32:10.575 EAL: No free 2048 kB hugepages reported on node 1 00:32:10.575 [2024-07-24 23:17:42.959083] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:10.575 [2024-07-24 23:17:42.995228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:11.140 Running I/O for 1 seconds... 
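[Editor's sketch] The `gen_nvmf_target_json` trace above shows the pattern used to feed bdevperf its `--json` config: expand a here-document once per subsystem, comma-join the fragments, and print the result. A minimal re-creation under stated assumptions: the transport/address/port values mirror this run, and the outer `{"subsystems": [...]}` wrapper layout is my sketch, not necessarily SPDK's exact helper output.

```shell
#!/usr/bin/env bash
# Re-creation of the gen_nvmf_target_json heredoc pattern seen in the log.
set -euo pipefail

TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=()
for subsystem in "${@:-1}"; do          # default: one subsystem, "1"
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done

# comma-join the per-subsystem fragments into one bdev config section
IFS=,
printf '{ "subsystems": [ { "subsystem": "bdev", "config": [ %s ] } ] }\n' "${config[*]}"
```

Piping the result through `jq .`, as the log does, both pretty-prints it and acts as a cheap validity check before bdevperf reads it from the process-substitution fd.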
00:32:12.073
00:32:12.073 Latency(us)
00:32:12.073 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:12.073 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:32:12.073 Verification LBA range: start 0x0 length 0x4000
00:32:12.073 Nvme1n1 : 1.00 17611.94 68.80 0.00 0.00 7240.22 1120.67 15623.78
00:32:12.073 ===================================================================================================================
00:32:12.073 Total : 17611.94 68.80 0.00 0.00 7240.22 1120.67 15623.78
00:32:12.073 23:17:44 -- host/bdevperf.sh@30 -- # bdevperfpid=3406471
00:32:12.073 23:17:44 -- host/bdevperf.sh@32 -- # sleep 3
00:32:12.073 23:17:44 -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f
00:32:12.073 23:17:44 -- host/bdevperf.sh@29 -- # gen_nvmf_target_json
00:32:12.073 23:17:44 -- nvmf/common.sh@520 -- # config=()
00:32:12.073 23:17:44 -- nvmf/common.sh@520 -- # local subsystem config
00:32:12.073 23:17:44 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}"
00:32:12.073 23:17:44 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF
00:32:12.073 {
00:32:12.073 "params": {
00:32:12.073 "name": "Nvme$subsystem",
00:32:12.073 "trtype": "$TEST_TRANSPORT",
00:32:12.073 "traddr": "$NVMF_FIRST_TARGET_IP",
00:32:12.073 "adrfam": "ipv4",
00:32:12.073 "trsvcid": "$NVMF_PORT",
00:32:12.073 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:32:12.073 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:32:12.073 "hdgst": ${hdgst:-false},
00:32:12.073 "ddgst": ${ddgst:-false}
00:32:12.073 },
00:32:12.073 "method": "bdev_nvme_attach_controller"
00:32:12.073 }
00:32:12.073 EOF
00:32:12.073 )")
00:32:12.073 23:17:44 -- nvmf/common.sh@542 -- # cat
00:32:12.073 23:17:44 -- nvmf/common.sh@544 -- # jq .
00:32:12.073 23:17:44 -- nvmf/common.sh@545 -- # IFS=, 00:32:12.073 23:17:44 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:32:12.073 "params": { 00:32:12.073 "name": "Nvme1", 00:32:12.073 "trtype": "tcp", 00:32:12.073 "traddr": "10.0.0.2", 00:32:12.073 "adrfam": "ipv4", 00:32:12.073 "trsvcid": "4420", 00:32:12.073 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:12.073 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:12.073 "hdgst": false, 00:32:12.073 "ddgst": false 00:32:12.073 }, 00:32:12.073 "method": "bdev_nvme_attach_controller" 00:32:12.073 }' 00:32:12.331 [2024-07-24 23:17:44.528754] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:32:12.331 [2024-07-24 23:17:44.528811] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3406471 ] 00:32:12.331 EAL: No free 2048 kB hugepages reported on node 1 00:32:12.331 [2024-07-24 23:17:44.599879] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:12.331 [2024-07-24 23:17:44.633205] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:12.589 Running I/O for 15 seconds... 
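[Editor's note] The `kill -9` of the target pid below, issued while the 15-second bdevperf run is in flight, is what produces the wall of `ABORTED - SQ DELETION` completions that follows: every queued command on the dying connection completes with that status. A small helper (my addition, not part of the test scripts) to condense that spam into a per-qpair count when reading such a log:

```shell
#!/usr/bin/env bash
# Count "ABORTED - SQ DELETION" completions per qpair id in a bdevperf log.
set -euo pipefail

summarize_aborts() {
    awk '/ABORTED - SQ DELETION/ {
            # pull the "qid:N" token out of the completion record
            for (i = 1; i <= NF; i++)
                if ($i ~ /^qid:[0-9]+$/) counts[$i]++
         }
         END { for (q in counts) printf "%s aborted=%d\n", q, counts[q] }'
}

# sample records in the same shape as the log below
summarize_aborts <<'EOF'
[2024-07-24 23:17:47.498643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-07-24 23:17:47.498766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
EOF
# prints: qid:1 aborted=2
```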
00:32:15.122 23:17:47 -- host/bdevperf.sh@33 -- # kill -9 3405915 00:32:15.122 23:17:47 -- host/bdevperf.sh@35 -- # sleep 3 00:32:15.122 [2024-07-24 23:17:47.498642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:116688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.122 [2024-07-24 23:17:47.498683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.122 [2024-07-24 23:17:47.498704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:116704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.122 [2024-07-24 23:17:47.498720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.122 [2024-07-24 23:17:47.498732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:116040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.122 [2024-07-24 23:17:47.498743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.122 [2024-07-24 23:17:47.498755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:116048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.122 [2024-07-24 23:17:47.498766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.122 [2024-07-24 23:17:47.498778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:116056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.122 [2024-07-24 23:17:47.498788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.122 [2024-07-24 23:17:47.498800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:63 nsid:1 lba:116064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.122 [2024-07-24 23:17:47.498810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.122 [2024-07-24 23:17:47.498821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:116072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.122 [2024-07-24 23:17:47.498831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.122 [2024-07-24 23:17:47.498842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:116104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.122 [2024-07-24 23:17:47.498852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.122 [2024-07-24 23:17:47.498865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:116112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.122 [2024-07-24 23:17:47.498875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.123 [2024-07-24 23:17:47.498887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:116128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.123 [2024-07-24 23:17:47.498897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.123 [2024-07-24 23:17:47.498913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:116752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.123 [2024-07-24 23:17:47.498924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:32:15.123 [2024-07-24 23:17:47.498934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:116760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.123 [2024-07-24 23:17:47.498944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.123 [2024-07-24 23:17:47.498955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:116768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.123 [2024-07-24 23:17:47.498966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.123 [2024-07-24 23:17:47.498980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:116776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.123 [2024-07-24 23:17:47.498991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.123 [2024-07-24 23:17:47.499004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:116792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.123 [2024-07-24 23:17:47.499015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.123 [2024-07-24 23:17:47.499028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:116800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.123 [2024-07-24 23:17:47.499039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.123 [2024-07-24 23:17:47.499051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:116136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.123 [2024-07-24 23:17:47.499062] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.123 [2024-07-24 23:17:47.499075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:116144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.123 [2024-07-24 23:17:47.499085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.123 [2024-07-24 23:17:47.499096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:116152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.123 [2024-07-24 23:17:47.499106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.123 [2024-07-24 23:17:47.499117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:116168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.123 [2024-07-24 23:17:47.499127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.123 [2024-07-24 23:17:47.499138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:116176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.123 [2024-07-24 23:17:47.499147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.123 [2024-07-24 23:17:47.499158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:116192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.123 [2024-07-24 23:17:47.499167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.123 [2024-07-24 23:17:47.499177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:29 nsid:1 lba:116200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.123 [2024-07-24 23:17:47.499188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.123 [2024-07-24 23:17:47.499200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:116224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.123 [2024-07-24 23:17:47.499209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.123 [2024-07-24 23:17:47.499220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:116824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.123 [2024-07-24 23:17:47.499229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.123 [2024-07-24 23:17:47.499240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:116840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.123 [2024-07-24 23:17:47.499249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.123 [2024-07-24 23:17:47.499260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:116848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.123 [2024-07-24 23:17:47.499269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.123 [2024-07-24 23:17:47.499280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:116856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.123 [2024-07-24 23:17:47.499289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:32:15.123 [2024-07-24 23:17:47.499300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:116864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.123 [2024-07-24 23:17:47.499309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.123 [2024-07-24 23:17:47.499319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:116880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.123 [2024-07-24 23:17:47.499328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.123 [2024-07-24 23:17:47.499339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:116888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.123 [2024-07-24 23:17:47.499348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.123 [2024-07-24 23:17:47.499359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:116248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.123 [2024-07-24 23:17:47.499368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.123 [2024-07-24 23:17:47.499379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:116264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.123 [2024-07-24 23:17:47.499388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.123 [2024-07-24 23:17:47.499399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:116272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.123 [2024-07-24 23:17:47.499409] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.123 [2024-07-24 23:17:47.499420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:116280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.123 [2024-07-24 23:17:47.499429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.123 [2024-07-24 23:17:47.499441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:116288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.123 [2024-07-24 23:17:47.499450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.123 [2024-07-24 23:17:47.499462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:116296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.123 [2024-07-24 23:17:47.499472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.123 [2024-07-24 23:17:47.499482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:116304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.123 [2024-07-24 23:17:47.499492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.123 [2024-07-24 23:17:47.499502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:116312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.123 [2024-07-24 23:17:47.499512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.123 [2024-07-24 23:17:47.499522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:22 nsid:1 lba:116904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.123 [2024-07-24 23:17:47.499531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.123 [2024-07-24 23:17:47.499542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:116912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.123 [2024-07-24 23:17:47.499552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.123 [2024-07-24 23:17:47.499562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:116920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.123 [2024-07-24 23:17:47.499571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.123 [2024-07-24 23:17:47.499582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:116928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.123 [2024-07-24 23:17:47.499591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.123 [2024-07-24 23:17:47.499602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:116936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.123 [2024-07-24 23:17:47.499611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.123 [2024-07-24 23:17:47.499622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:116944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.123 [2024-07-24 23:17:47.499631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:32:15.123 [2024-07-24 23:17:47.499642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:116960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.123 [2024-07-24 23:17:47.499651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.123 [2024-07-24 23:17:47.499661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:116968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.123 [2024-07-24 23:17:47.499670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.123 [2024-07-24 23:17:47.499681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:116984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.123 [2024-07-24 23:17:47.499691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.124 [2024-07-24 23:17:47.499703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:116992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:15.124 [2024-07-24 23:17:47.499712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.124 [2024-07-24 23:17:47.499843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:117000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:15.124 [2024-07-24 23:17:47.499852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.124 [2024-07-24 23:17:47.499863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:117008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:15.124 [2024-07-24 23:17:47.499872] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.124 [2024-07-24 23:17:47.499883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:116344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.124 [2024-07-24 23:17:47.499893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.124 [2024-07-24 23:17:47.499903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:116376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.124 [2024-07-24 23:17:47.499912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.124 [2024-07-24 23:17:47.499923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:116384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.124 [2024-07-24 23:17:47.499932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.124 [2024-07-24 23:17:47.499943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:116400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.124 [2024-07-24 23:17:47.499952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.124 [2024-07-24 23:17:47.499963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:116424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.124 [2024-07-24 23:17:47.499973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.124 [2024-07-24 23:17:47.499984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:83 nsid:1 lba:116440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.124 [2024-07-24 23:17:47.499993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.124 [2024-07-24 23:17:47.500005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:116448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.124 [2024-07-24 23:17:47.500014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.124 [2024-07-24 23:17:47.500025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:116456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.124 [2024-07-24 23:17:47.500034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.124 [2024-07-24 23:17:47.500045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:117016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.124 [2024-07-24 23:17:47.500055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.124 [2024-07-24 23:17:47.500067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:117024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.124 [2024-07-24 23:17:47.500076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.124 [2024-07-24 23:17:47.500086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:117032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.124 [2024-07-24 23:17:47.500095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:32:15.124 [2024-07-24 23:17:47.500106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:117040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.124 [2024-07-24 23:17:47.500116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.124 [2024-07-24 23:17:47.500126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:117048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:15.124 [2024-07-24 23:17:47.500135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.124 [2024-07-24 23:17:47.500145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:117056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:15.124 [2024-07-24 23:17:47.500155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.124 [2024-07-24 23:17:47.500166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:117064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.124 [2024-07-24 23:17:47.500175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.124 [2024-07-24 23:17:47.500186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:117072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.124 [2024-07-24 23:17:47.500196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.124 [2024-07-24 23:17:47.500207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:117080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.124 [2024-07-24 23:17:47.500216] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.124 [2024-07-24 23:17:47.500227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:117088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.124 [2024-07-24 23:17:47.500236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.124 [2024-07-24 23:17:47.500246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:117096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.124 [2024-07-24 23:17:47.500255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.124 [2024-07-24 23:17:47.500266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:117104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.124 [2024-07-24 23:17:47.500275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.124 [2024-07-24 23:17:47.500286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:116472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.124 [2024-07-24 23:17:47.500295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.124 [2024-07-24 23:17:47.500306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:116488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.124 [2024-07-24 23:17:47.500319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.124 [2024-07-24 23:17:47.500330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:50 nsid:1 lba:116496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.124 [2024-07-24 23:17:47.500339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.124 [2024-07-24 23:17:47.500350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:116512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.124 [2024-07-24 23:17:47.500360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.124 [2024-07-24 23:17:47.500371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:116520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.124 [2024-07-24 23:17:47.500380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.124 [2024-07-24 23:17:47.500391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:116544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.124 [2024-07-24 23:17:47.500400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.124 [2024-07-24 23:17:47.500412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:116552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.124 [2024-07-24 23:17:47.500422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.124 [2024-07-24 23:17:47.500433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:116560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.124 [2024-07-24 23:17:47.500442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:32:15.124 [2024-07-24 23:17:47.500453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:117112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:15.124 [2024-07-24 23:17:47.500462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.124 [2024-07-24 23:17:47.500473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:117120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:15.124 [2024-07-24 23:17:47.500482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.124 [2024-07-24 23:17:47.500493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:117128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.124 [2024-07-24 23:17:47.500502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.124 [2024-07-24 23:17:47.500513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:117136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.124 [2024-07-24 23:17:47.500523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.124 [2024-07-24 23:17:47.500534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:117144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:15.124 [2024-07-24 23:17:47.500543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.124 [2024-07-24 23:17:47.500554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:117152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:15.124 [2024-07-24 23:17:47.500563] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.124 [2024-07-24 23:17:47.500575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:117160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:15.124 [2024-07-24 23:17:47.500584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.124 [2024-07-24 23:17:47.500595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:117168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.124 [2024-07-24 23:17:47.500604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.125 [2024-07-24 23:17:47.500616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:117176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.125 [2024-07-24 23:17:47.500625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.125 [2024-07-24 23:17:47.500636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:117184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.125 [2024-07-24 23:17:47.500645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.125 [2024-07-24 23:17:47.500656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:117192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:15.125 [2024-07-24 23:17:47.500665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.125 [2024-07-24 23:17:47.500675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:74 nsid:1 lba:117200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.125 [2024-07-24 23:17:47.500685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.125 [2024-07-24 23:17:47.500695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:117208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.125 [2024-07-24 23:17:47.500705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.125 [2024-07-24 23:17:47.500719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:117216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:15.125 [2024-07-24 23:17:47.500729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.125 [2024-07-24 23:17:47.500740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:117224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.125 [2024-07-24 23:17:47.500749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.125 [2024-07-24 23:17:47.500760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:117232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:15.125 [2024-07-24 23:17:47.500769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.125 [2024-07-24 23:17:47.500780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:116568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.125 [2024-07-24 23:17:47.500790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:32:15.125 [2024-07-24 23:17:47.500801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:116576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.125 [2024-07-24 23:17:47.500810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.125 [2024-07-24 23:17:47.500829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:116600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.125 [2024-07-24 23:17:47.500838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.125 [2024-07-24 23:17:47.500850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:116624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.125 [2024-07-24 23:17:47.500859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.125 [2024-07-24 23:17:47.500870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:116632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.125 [2024-07-24 23:17:47.500880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.125 [2024-07-24 23:17:47.500890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:116640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.125 [2024-07-24 23:17:47.500900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.125 [2024-07-24 23:17:47.500911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:116656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.125 [2024-07-24 23:17:47.500920] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.125 [2024-07-24 23:17:47.500934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:116664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.125 [2024-07-24 23:17:47.500944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.125 [2024-07-24 23:17:47.500954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:117240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.125 [2024-07-24 23:17:47.500963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.125 [2024-07-24 23:17:47.500974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:117248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:15.125 [2024-07-24 23:17:47.500983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.125 [2024-07-24 23:17:47.500993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:117256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:15.125 [2024-07-24 23:17:47.501003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.125 [2024-07-24 23:17:47.501013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:117264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:15.125 [2024-07-24 23:17:47.501022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.125 [2024-07-24 23:17:47.501033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 
lba:116672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.125 [2024-07-24 23:17:47.501042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.125 [2024-07-24 23:17:47.501052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:116680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.125 [2024-07-24 23:17:47.501061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.125 [2024-07-24 23:17:47.501072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:116696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.125 [2024-07-24 23:17:47.501081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.125 [2024-07-24 23:17:47.501091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:116712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.125 [2024-07-24 23:17:47.501102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.125 [2024-07-24 23:17:47.501113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:116720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.125 [2024-07-24 23:17:47.501122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.125 [2024-07-24 23:17:47.501133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:116728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.125 [2024-07-24 23:17:47.501142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:32:15.125 [2024-07-24 23:17:47.501154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:116736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.125 [2024-07-24 23:17:47.501163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.125 [2024-07-24 23:17:47.501174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:116744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.125 [2024-07-24 23:17:47.501183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.125 [2024-07-24 23:17:47.501194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:117272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:15.125 [2024-07-24 23:17:47.501203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.125 [2024-07-24 23:17:47.501213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:117280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.125 [2024-07-24 23:17:47.501222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.125 [2024-07-24 23:17:47.501233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:117288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.125 [2024-07-24 23:17:47.501242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.125 [2024-07-24 23:17:47.501254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:117296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:15.125 [2024-07-24 23:17:47.501263] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.125 [2024-07-24 23:17:47.501273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:117304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:15.125 [2024-07-24 23:17:47.501282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.125 [2024-07-24 23:17:47.501293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:116784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.125 [2024-07-24 23:17:47.501302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.125 [2024-07-24 23:17:47.501313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:116808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.125 [2024-07-24 23:17:47.501322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.125 [2024-07-24 23:17:47.501332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:116816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.125 [2024-07-24 23:17:47.501341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.125 [2024-07-24 23:17:47.501354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:116832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.125 [2024-07-24 23:17:47.501363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.125 [2024-07-24 23:17:47.501373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 
lba:116872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.125 [2024-07-24 23:17:47.501382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.125 [2024-07-24 23:17:47.501393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:116896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.125 [2024-07-24 23:17:47.501402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.126 [2024-07-24 23:17:47.501413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:116952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.126 [2024-07-24 23:17:47.501422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.126 [2024-07-24 23:17:47.501432] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb245c0 is same with the state(5) to be set 00:32:15.126 [2024-07-24 23:17:47.501444] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:15.126 [2024-07-24 23:17:47.501452] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:15.126 [2024-07-24 23:17:47.501460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:116976 len:8 PRP1 0x0 PRP2 0x0 00:32:15.126 [2024-07-24 23:17:47.501469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.126 [2024-07-24 23:17:47.501516] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xb245c0 was disconnected and freed. reset controller. 
00:32:15.126 [2024-07-24 23:17:47.503276] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:15.126 [2024-07-24 23:17:47.503326] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:15.126 [2024-07-24 23:17:47.503832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:15.126 [2024-07-24 23:17:47.504029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:15.126 [2024-07-24 23:17:47.504042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:15.126 [2024-07-24 23:17:47.504052] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:15.126 [2024-07-24 23:17:47.504197] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:15.126 [2024-07-24 23:17:47.504297] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:15.126 [2024-07-24 23:17:47.504308] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:15.126 [2024-07-24 23:17:47.504319] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:15.126 [2024-07-24 23:17:47.505998] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:15.126 [2024-07-24 23:17:47.515307] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:15.126 [2024-07-24 23:17:47.515751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:15.126 [2024-07-24 23:17:47.516005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:15.126 [2024-07-24 23:17:47.516017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:15.126 [2024-07-24 23:17:47.516030] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:15.126 [2024-07-24 23:17:47.516144] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:15.126 [2024-07-24 23:17:47.516272] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:15.126 [2024-07-24 23:17:47.516281] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:15.126 [2024-07-24 23:17:47.516291] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:15.126 [2024-07-24 23:17:47.517864] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:15.126 [2024-07-24 23:17:47.527262] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:15.126 [2024-07-24 23:17:47.527603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:15.126 [2024-07-24 23:17:47.527855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:15.126 [2024-07-24 23:17:47.527870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:15.126 [2024-07-24 23:17:47.527880] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:15.126 [2024-07-24 23:17:47.528025] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:15.126 [2024-07-24 23:17:47.528152] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:15.126 [2024-07-24 23:17:47.528163] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:15.126 [2024-07-24 23:17:47.528172] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:15.126 [2024-07-24 23:17:47.529848] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:15.126 [2024-07-24 23:17:47.539218] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:15.126 [2024-07-24 23:17:47.539671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:15.126 [2024-07-24 23:17:47.539950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:15.126 [2024-07-24 23:17:47.539963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:15.126 [2024-07-24 23:17:47.539973] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:15.126 [2024-07-24 23:17:47.540102] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:15.126 [2024-07-24 23:17:47.540229] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:15.126 [2024-07-24 23:17:47.540240] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:15.126 [2024-07-24 23:17:47.540249] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:15.126 [2024-07-24 23:17:47.541938] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:15.386 [2024-07-24 23:17:47.551098] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:15.386 [2024-07-24 23:17:47.551489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:15.386 [2024-07-24 23:17:47.551803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:15.386 [2024-07-24 23:17:47.551816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:15.386 [2024-07-24 23:17:47.551826] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:15.386 [2024-07-24 23:17:47.551943] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:15.386 [2024-07-24 23:17:47.552071] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:15.386 [2024-07-24 23:17:47.552081] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:15.386 [2024-07-24 23:17:47.552091] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:15.386 [2024-07-24 23:17:47.553737] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:15.386 [2024-07-24 23:17:47.563000] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:15.386 [2024-07-24 23:17:47.563413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:15.386 [2024-07-24 23:17:47.563732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:15.386 [2024-07-24 23:17:47.563745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:15.386 [2024-07-24 23:17:47.563755] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:15.386 [2024-07-24 23:17:47.563884] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:15.386 [2024-07-24 23:17:47.563996] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:15.386 [2024-07-24 23:17:47.564006] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:15.386 [2024-07-24 23:17:47.564015] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:15.386 [2024-07-24 23:17:47.565666] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:15.386 [2024-07-24 23:17:47.575040] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:15.386 [2024-07-24 23:17:47.575399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:15.386 [2024-07-24 23:17:47.575661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:15.386 [2024-07-24 23:17:47.575673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:15.386 [2024-07-24 23:17:47.575683] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:15.386 [2024-07-24 23:17:47.575844] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:15.386 [2024-07-24 23:17:47.575986] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:15.386 [2024-07-24 23:17:47.575996] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:15.386 [2024-07-24 23:17:47.576005] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:15.386 [2024-07-24 23:17:47.577617] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:15.386 [2024-07-24 23:17:47.587008] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:15.386 [2024-07-24 23:17:47.587399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:15.386 [2024-07-24 23:17:47.587652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:15.386 [2024-07-24 23:17:47.587665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:15.386 [2024-07-24 23:17:47.587675] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:15.386 [2024-07-24 23:17:47.587834] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:15.386 [2024-07-24 23:17:47.587933] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:15.386 [2024-07-24 23:17:47.587943] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:15.386 [2024-07-24 23:17:47.587952] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:15.386 [2024-07-24 23:17:47.589643] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:15.386 [2024-07-24 23:17:47.598818] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:15.386 [2024-07-24 23:17:47.599174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:15.386 [2024-07-24 23:17:47.599490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:15.386 [2024-07-24 23:17:47.599502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:15.386 [2024-07-24 23:17:47.599512] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:15.386 [2024-07-24 23:17:47.599650] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:15.386 [2024-07-24 23:17:47.599766] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:15.386 [2024-07-24 23:17:47.599776] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:15.386 [2024-07-24 23:17:47.599785] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:15.386 [2024-07-24 23:17:47.601425] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:15.386 [2024-07-24 23:17:47.610542] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:15.386 [2024-07-24 23:17:47.611039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:15.386 [2024-07-24 23:17:47.611423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:15.386 [2024-07-24 23:17:47.611467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:15.386 [2024-07-24 23:17:47.611476] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:15.386 [2024-07-24 23:17:47.611586] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:15.386 [2024-07-24 23:17:47.611682] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:15.386 [2024-07-24 23:17:47.611691] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:15.386 [2024-07-24 23:17:47.611700] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:15.386 [2024-07-24 23:17:47.613427] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:15.386 [2024-07-24 23:17:47.622473] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:15.387 [2024-07-24 23:17:47.622830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:15.387 [2024-07-24 23:17:47.623131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:15.387 [2024-07-24 23:17:47.623144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:15.387 [2024-07-24 23:17:47.623153] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:15.387 [2024-07-24 23:17:47.623263] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:15.387 [2024-07-24 23:17:47.623359] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:15.387 [2024-07-24 23:17:47.623371] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:15.387 [2024-07-24 23:17:47.623380] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:15.387 [2024-07-24 23:17:47.625127] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:15.387 [2024-07-24 23:17:47.634272] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:15.387 [2024-07-24 23:17:47.634724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:15.387 [2024-07-24 23:17:47.635023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:15.387 [2024-07-24 23:17:47.635064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:15.387 [2024-07-24 23:17:47.635096] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:15.387 [2024-07-24 23:17:47.635585] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:15.387 [2024-07-24 23:17:47.636105] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:15.387 [2024-07-24 23:17:47.636140] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:15.387 [2024-07-24 23:17:47.636171] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:15.387 [2024-07-24 23:17:47.637862] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:15.387 [2024-07-24 23:17:47.646246] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:15.387 [2024-07-24 23:17:47.646657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:15.387 [2024-07-24 23:17:47.646839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:15.387 [2024-07-24 23:17:47.646852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:15.387 [2024-07-24 23:17:47.646862] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:15.387 [2024-07-24 23:17:47.646960] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:15.387 [2024-07-24 23:17:47.647055] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:15.387 [2024-07-24 23:17:47.647065] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:15.387 [2024-07-24 23:17:47.647074] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:15.387 [2024-07-24 23:17:47.648662] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:15.387 [2024-07-24 23:17:47.657983] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:15.387 [2024-07-24 23:17:47.658379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:15.387 [2024-07-24 23:17:47.658685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:15.387 [2024-07-24 23:17:47.658736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:15.387 [2024-07-24 23:17:47.658769] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:15.387 [2024-07-24 23:17:47.659059] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:15.387 [2024-07-24 23:17:47.659246] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:15.387 [2024-07-24 23:17:47.659256] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:15.387 [2024-07-24 23:17:47.659267] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:15.387 [2024-07-24 23:17:47.660784] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:15.387 [2024-07-24 23:17:47.669630] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:15.387 [2024-07-24 23:17:47.670105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:15.387 [2024-07-24 23:17:47.670383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:15.387 [2024-07-24 23:17:47.670423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:15.387 [2024-07-24 23:17:47.670455] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:15.387 [2024-07-24 23:17:47.670858] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:15.387 [2024-07-24 23:17:47.671202] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:15.387 [2024-07-24 23:17:47.671236] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:15.387 [2024-07-24 23:17:47.671267] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:15.387 [2024-07-24 23:17:47.673776] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:15.387 [2024-07-24 23:17:47.681958] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:15.387 [2024-07-24 23:17:47.682427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:15.387 [2024-07-24 23:17:47.682670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:15.387 [2024-07-24 23:17:47.682710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:15.387 [2024-07-24 23:17:47.682761] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:15.387 [2024-07-24 23:17:47.683051] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:15.387 [2024-07-24 23:17:47.683230] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:15.387 [2024-07-24 23:17:47.683240] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:15.387 [2024-07-24 23:17:47.683249] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:15.387 [2024-07-24 23:17:47.684811] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:15.387 [2024-07-24 23:17:47.693587] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:15.387 [2024-07-24 23:17:47.694030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:15.387 [2024-07-24 23:17:47.694263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:15.387 [2024-07-24 23:17:47.694275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:15.387 [2024-07-24 23:17:47.694284] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:15.387 [2024-07-24 23:17:47.694394] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:15.387 [2024-07-24 23:17:47.694517] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:15.387 [2024-07-24 23:17:47.694530] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:15.387 [2024-07-24 23:17:47.694540] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:15.387 [2024-07-24 23:17:47.696244] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:15.387 [2024-07-24 23:17:47.705593] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:15.387 [2024-07-24 23:17:47.706094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:15.387 [2024-07-24 23:17:47.706429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:15.387 [2024-07-24 23:17:47.706470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:15.387 [2024-07-24 23:17:47.706502] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:15.387 [2024-07-24 23:17:47.706721] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:15.387 [2024-07-24 23:17:47.706832] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:15.387 [2024-07-24 23:17:47.706842] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:15.387 [2024-07-24 23:17:47.706851] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:15.387 [2024-07-24 23:17:47.708463] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:15.387 [2024-07-24 23:17:47.717341] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:15.387 [2024-07-24 23:17:47.717805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:15.387 [2024-07-24 23:17:47.718133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:15.387 [2024-07-24 23:17:47.718173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:15.387 [2024-07-24 23:17:47.718205] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:15.387 [2024-07-24 23:17:47.718634] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:15.387 [2024-07-24 23:17:47.718787] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:15.387 [2024-07-24 23:17:47.718798] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:15.387 [2024-07-24 23:17:47.718807] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:15.387 [2024-07-24 23:17:47.720460] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:15.387 [2024-07-24 23:17:47.729113] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:15.387 [2024-07-24 23:17:47.729559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:15.387 [2024-07-24 23:17:47.729932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:15.387 [2024-07-24 23:17:47.729974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:15.388 [2024-07-24 23:17:47.730006] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:15.388 [2024-07-24 23:17:47.730250] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:15.388 [2024-07-24 23:17:47.730374] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:15.388 [2024-07-24 23:17:47.730384] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:15.388 [2024-07-24 23:17:47.730394] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:15.388 [2024-07-24 23:17:47.731958] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:15.388 [2024-07-24 23:17:47.740886] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:15.388 [2024-07-24 23:17:47.741367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:15.388 [2024-07-24 23:17:47.741767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:15.388 [2024-07-24 23:17:47.741810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:15.388 [2024-07-24 23:17:47.741860] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:15.388 [2024-07-24 23:17:47.742102] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:15.388 [2024-07-24 23:17:47.742420] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:15.388 [2024-07-24 23:17:47.742435] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:15.388 [2024-07-24 23:17:47.742447] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:15.388 [2024-07-24 23:17:47.745007] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:15.388 [2024-07-24 23:17:47.753058] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:15.388 [2024-07-24 23:17:47.753518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:15.388 [2024-07-24 23:17:47.753772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:15.388 [2024-07-24 23:17:47.753784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:15.388 [2024-07-24 23:17:47.753794] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:15.388 [2024-07-24 23:17:47.753901] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:15.388 [2024-07-24 23:17:47.753997] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:15.388 [2024-07-24 23:17:47.754006] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:15.388 [2024-07-24 23:17:47.754015] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:15.388 [2024-07-24 23:17:47.755595] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:15.388 [2024-07-24 23:17:47.764934] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:15.388 [2024-07-24 23:17:47.765436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:15.388 [2024-07-24 23:17:47.765810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:15.388 [2024-07-24 23:17:47.765853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:15.388 [2024-07-24 23:17:47.765885] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:15.388 [2024-07-24 23:17:47.766177] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:15.388 [2024-07-24 23:17:47.766469] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:15.388 [2024-07-24 23:17:47.766502] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:15.388 [2024-07-24 23:17:47.766526] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:15.388 [2024-07-24 23:17:47.768189] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:15.388 [2024-07-24 23:17:47.776756] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:15.388 [2024-07-24 23:17:47.777083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:15.388 [2024-07-24 23:17:47.777407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:15.388 [2024-07-24 23:17:47.777448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:15.388 [2024-07-24 23:17:47.777480] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:15.388 [2024-07-24 23:17:47.777935] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:15.388 [2024-07-24 23:17:47.778327] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:15.388 [2024-07-24 23:17:47.778362] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:15.388 [2024-07-24 23:17:47.778392] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:15.388 [2024-07-24 23:17:47.780325] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:15.388 [2024-07-24 23:17:47.788407] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:15.388 [2024-07-24 23:17:47.788811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:15.388 [2024-07-24 23:17:47.789132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:15.388 [2024-07-24 23:17:47.789172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:15.388 [2024-07-24 23:17:47.789204] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:15.388 [2024-07-24 23:17:47.789443] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:15.388 [2024-07-24 23:17:47.789822] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:15.388 [2024-07-24 23:17:47.789833] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:15.388 [2024-07-24 23:17:47.789842] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:15.388 [2024-07-24 23:17:47.791415] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:15.388 [2024-07-24 23:17:47.800041] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:15.388 [2024-07-24 23:17:47.800359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:15.388 [2024-07-24 23:17:47.800671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:15.388 [2024-07-24 23:17:47.800683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:15.388 [2024-07-24 23:17:47.800738] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:15.388 [2024-07-24 23:17:47.801227] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:15.388 [2024-07-24 23:17:47.801731] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:15.388 [2024-07-24 23:17:47.801767] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:15.388 [2024-07-24 23:17:47.801789] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:15.388 [2024-07-24 23:17:47.803503] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:15.388 [2024-07-24 23:17:47.811969] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:15.388 [2024-07-24 23:17:47.812464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:15.388 [2024-07-24 23:17:47.812785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:15.388 [2024-07-24 23:17:47.812803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:15.388 [2024-07-24 23:17:47.812813] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:15.388 [2024-07-24 23:17:47.812955] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:15.388 [2024-07-24 23:17:47.813069] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:15.388 [2024-07-24 23:17:47.813079] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:15.388 [2024-07-24 23:17:47.813088] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:15.648 [2024-07-24 23:17:47.814971] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:15.648 [2024-07-24 23:17:47.823834] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:15.648 [2024-07-24 23:17:47.824292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:15.648 [2024-07-24 23:17:47.824597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:15.648 [2024-07-24 23:17:47.824637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:15.648 [2024-07-24 23:17:47.824669] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:15.648 [2024-07-24 23:17:47.824841] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:15.648 [2024-07-24 23:17:47.824956] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:15.648 [2024-07-24 23:17:47.824966] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:15.648 [2024-07-24 23:17:47.824975] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:15.648 [2024-07-24 23:17:47.826689] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:15.648 [2024-07-24 23:17:47.835461] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:15.648 [2024-07-24 23:17:47.835854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:15.648 [2024-07-24 23:17:47.836180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:15.648 [2024-07-24 23:17:47.836192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:15.648 [2024-07-24 23:17:47.836234] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:15.648 [2024-07-24 23:17:47.836575] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:15.648 [2024-07-24 23:17:47.836874] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:15.648 [2024-07-24 23:17:47.836885] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:15.648 [2024-07-24 23:17:47.836894] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:15.648 [2024-07-24 23:17:47.838506] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:15.648 [2024-07-24 23:17:47.847217] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:15.648 [2024-07-24 23:17:47.847727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:15.648 [2024-07-24 23:17:47.848134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:15.648 [2024-07-24 23:17:47.848174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:15.648 [2024-07-24 23:17:47.848214] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:15.648 [2024-07-24 23:17:47.848703] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:15.648 [2024-07-24 23:17:47.848850] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:15.648 [2024-07-24 23:17:47.848860] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:15.648 [2024-07-24 23:17:47.848869] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:15.648 [2024-07-24 23:17:47.850560] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:15.648 [2024-07-24 23:17:47.859034] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:15.648 [2024-07-24 23:17:47.859444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:15.648 [2024-07-24 23:17:47.859756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:15.648 [2024-07-24 23:17:47.859798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:15.648 [2024-07-24 23:17:47.859830] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:15.648 [2024-07-24 23:17:47.860059] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:15.648 [2024-07-24 23:17:47.860155] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:15.648 [2024-07-24 23:17:47.860165] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:15.648 [2024-07-24 23:17:47.860174] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:15.648 [2024-07-24 23:17:47.861971] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:15.648 [2024-07-24 23:17:47.870855] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:15.648 [2024-07-24 23:17:47.871324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:15.648 [2024-07-24 23:17:47.871771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:15.648 [2024-07-24 23:17:47.871812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:15.648 [2024-07-24 23:17:47.871844] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:15.648 [2024-07-24 23:17:47.871941] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:15.648 [2024-07-24 23:17:47.872023] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:15.648 [2024-07-24 23:17:47.872032] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:15.648 [2024-07-24 23:17:47.872041] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:15.648 [2024-07-24 23:17:47.873692] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:15.648 [2024-07-24 23:17:47.882658] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:15.648 [2024-07-24 23:17:47.883104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:15.648 [2024-07-24 23:17:47.883428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:15.648 [2024-07-24 23:17:47.883468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:15.648 [2024-07-24 23:17:47.883500] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:15.648 [2024-07-24 23:17:47.883944] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:15.649 [2024-07-24 23:17:47.884083] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:15.649 [2024-07-24 23:17:47.884093] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:15.649 [2024-07-24 23:17:47.884102] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:15.649 [2024-07-24 23:17:47.885717] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:15.649 [2024-07-24 23:17:47.894368] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:15.649 [2024-07-24 23:17:47.894849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:15.649 [2024-07-24 23:17:47.895207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:15.649 [2024-07-24 23:17:47.895248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:15.649 [2024-07-24 23:17:47.895279] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:15.649 [2024-07-24 23:17:47.895618] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:15.649 [2024-07-24 23:17:47.895721] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:15.649 [2024-07-24 23:17:47.895730] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:15.649 [2024-07-24 23:17:47.895739] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:15.649 [2024-07-24 23:17:47.897469] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:15.649 [2024-07-24 23:17:47.906158] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:15.649 [2024-07-24 23:17:47.906594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:15.649 [2024-07-24 23:17:47.906901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:15.649 [2024-07-24 23:17:47.906944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:15.649 [2024-07-24 23:17:47.906976] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:15.649 [2024-07-24 23:17:47.907316] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:15.649 [2024-07-24 23:17:47.907559] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:15.649 [2024-07-24 23:17:47.907593] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:15.649 [2024-07-24 23:17:47.907623] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:15.649 [2024-07-24 23:17:47.909692] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:15.649 [2024-07-24 23:17:47.917892] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:15.649 [2024-07-24 23:17:47.918350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:15.649 [2024-07-24 23:17:47.918666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:15.649 [2024-07-24 23:17:47.918678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:15.649 [2024-07-24 23:17:47.918687] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:15.649 [2024-07-24 23:17:47.918790] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:15.649 [2024-07-24 23:17:47.918875] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:15.649 [2024-07-24 23:17:47.918885] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:15.649 [2024-07-24 23:17:47.918894] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:15.649 [2024-07-24 23:17:47.920586] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:15.649 [2024-07-24 23:17:47.929691] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:15.649 [2024-07-24 23:17:47.930149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:15.649 [2024-07-24 23:17:47.930578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:15.649 [2024-07-24 23:17:47.930618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:15.649 [2024-07-24 23:17:47.930650] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:15.649 [2024-07-24 23:17:47.930839] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:15.649 [2024-07-24 23:17:47.930921] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:15.649 [2024-07-24 23:17:47.930931] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:15.649 [2024-07-24 23:17:47.930940] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:15.649 [2024-07-24 23:17:47.932525] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:15.649 [2024-07-24 23:17:47.941420] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:15.649 [2024-07-24 23:17:47.941852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:15.649 [2024-07-24 23:17:47.942160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:15.649 [2024-07-24 23:17:47.942201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:15.649 [2024-07-24 23:17:47.942233] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:15.649 [2024-07-24 23:17:47.942572] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:15.649 [2024-07-24 23:17:47.942814] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:15.649 [2024-07-24 23:17:47.942825] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:15.649 [2024-07-24 23:17:47.942833] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:15.649 [2024-07-24 23:17:47.944613] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:15.649 [2024-07-24 23:17:47.953171] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:15.649 [2024-07-24 23:17:47.953644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:15.649 [2024-07-24 23:17:47.953963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:15.649 [2024-07-24 23:17:47.954005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:15.649 [2024-07-24 23:17:47.954037] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:15.649 [2024-07-24 23:17:47.954205] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:15.649 [2024-07-24 23:17:47.954324] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:15.649 [2024-07-24 23:17:47.954338] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:15.649 [2024-07-24 23:17:47.954354] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:15.649 [2024-07-24 23:17:47.956814] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:15.649 [2024-07-24 23:17:47.965310] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:15.649 [2024-07-24 23:17:47.965658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:15.649 [2024-07-24 23:17:47.965971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:15.649 [2024-07-24 23:17:47.965984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:15.649 [2024-07-24 23:17:47.965993] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:15.649 [2024-07-24 23:17:47.966089] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:15.649 [2024-07-24 23:17:47.966213] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:15.649 [2024-07-24 23:17:47.966223] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:15.649 [2024-07-24 23:17:47.966231] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:15.649 [2024-07-24 23:17:47.967903] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:15.649 [2024-07-24 23:17:47.977090] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:15.649 [2024-07-24 23:17:47.977463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:15.649 [2024-07-24 23:17:47.977796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:15.649 [2024-07-24 23:17:47.977809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:15.649 [2024-07-24 23:17:47.977818] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:15.649 [2024-07-24 23:17:47.977928] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:15.649 [2024-07-24 23:17:47.978080] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:15.649 [2024-07-24 23:17:47.978091] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:15.649 [2024-07-24 23:17:47.978099] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:15.649 [2024-07-24 23:17:47.979673] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:15.649 [2024-07-24 23:17:47.988732] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:15.649 [2024-07-24 23:17:47.989170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:15.649 [2024-07-24 23:17:47.989483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:15.649 [2024-07-24 23:17:47.989495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:15.649 [2024-07-24 23:17:47.989504] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:15.650 [2024-07-24 23:17:47.989581] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:15.650 [2024-07-24 23:17:47.989673] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:15.650 [2024-07-24 23:17:47.989682] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:15.650 [2024-07-24 23:17:47.989690] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:15.650 [2024-07-24 23:17:47.991315] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:15.650 [2024-07-24 23:17:48.000415] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:15.650 [2024-07-24 23:17:48.000808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:15.650 [2024-07-24 23:17:48.001122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:15.650 [2024-07-24 23:17:48.001135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:15.650 [2024-07-24 23:17:48.001144] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:15.650 [2024-07-24 23:17:48.001268] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:15.650 [2024-07-24 23:17:48.001364] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:15.650 [2024-07-24 23:17:48.001374] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:15.650 [2024-07-24 23:17:48.001383] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:15.650 [2024-07-24 23:17:48.002924] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:15.650 [2024-07-24 23:17:48.012305] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:15.650 [2024-07-24 23:17:48.012783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:15.650 [2024-07-24 23:17:48.013022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:15.650 [2024-07-24 23:17:48.013035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:15.650 [2024-07-24 23:17:48.013044] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:15.650 [2024-07-24 23:17:48.013155] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:15.650 [2024-07-24 23:17:48.013250] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:15.650 [2024-07-24 23:17:48.013260] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:15.650 [2024-07-24 23:17:48.013269] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:15.650 [2024-07-24 23:17:48.015073] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:15.650 [2024-07-24 23:17:48.024387] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:15.650 [2024-07-24 23:17:48.024858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:15.650 [2024-07-24 23:17:48.025173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:15.650 [2024-07-24 23:17:48.025186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:15.650 [2024-07-24 23:17:48.025195] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:15.650 [2024-07-24 23:17:48.025309] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:15.650 [2024-07-24 23:17:48.025451] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:15.650 [2024-07-24 23:17:48.025461] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:15.650 [2024-07-24 23:17:48.025471] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:15.650 [2024-07-24 23:17:48.027091] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:15.650 [2024-07-24 23:17:48.036155] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:15.650 [2024-07-24 23:17:48.036656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:15.650 [2024-07-24 23:17:48.036968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:15.650 [2024-07-24 23:17:48.037010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:15.650 [2024-07-24 23:17:48.037043] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:15.650 [2024-07-24 23:17:48.037243] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:15.650 [2024-07-24 23:17:48.037385] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:15.650 [2024-07-24 23:17:48.037396] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:15.650 [2024-07-24 23:17:48.037405] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:15.650 [2024-07-24 23:17:48.039070] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:15.650 [2024-07-24 23:17:48.047978] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:15.650 [2024-07-24 23:17:48.048431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:15.650 [2024-07-24 23:17:48.048777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:15.650 [2024-07-24 23:17:48.048819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:15.650 [2024-07-24 23:17:48.048851] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:15.650 [2024-07-24 23:17:48.049104] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:15.650 [2024-07-24 23:17:48.049196] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:15.650 [2024-07-24 23:17:48.049206] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:15.650 [2024-07-24 23:17:48.049214] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:15.650 [2024-07-24 23:17:48.050972] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:15.650 [2024-07-24 23:17:48.059829] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:15.650 [2024-07-24 23:17:48.060301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:15.650 [2024-07-24 23:17:48.060686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:15.650 [2024-07-24 23:17:48.060739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:15.650 [2024-07-24 23:17:48.060772] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:15.650 [2024-07-24 23:17:48.061209] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:15.650 [2024-07-24 23:17:48.061306] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:15.650 [2024-07-24 23:17:48.061316] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:15.650 [2024-07-24 23:17:48.061325] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:15.650 [2024-07-24 23:17:48.063060] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:15.650 [2024-07-24 23:17:48.071573] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:15.650 [2024-07-24 23:17:48.072089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:15.650 [2024-07-24 23:17:48.072398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:15.650 [2024-07-24 23:17:48.072438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:15.650 [2024-07-24 23:17:48.072470] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:15.650 [2024-07-24 23:17:48.072878] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:15.650 [2024-07-24 23:17:48.073213] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:15.650 [2024-07-24 23:17:48.073223] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:15.650 [2024-07-24 23:17:48.073232] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:15.650 [2024-07-24 23:17:48.075046] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:15.910 [2024-07-24 23:17:48.083425] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:15.910 [2024-07-24 23:17:48.083859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:15.910 [2024-07-24 23:17:48.084178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:15.910 [2024-07-24 23:17:48.084190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:15.910 [2024-07-24 23:17:48.084200] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:15.910 [2024-07-24 23:17:48.084313] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:15.910 [2024-07-24 23:17:48.084464] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:15.910 [2024-07-24 23:17:48.084474] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:15.910 [2024-07-24 23:17:48.084483] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:15.910 [2024-07-24 23:17:48.086020] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:15.910 [2024-07-24 23:17:48.095179] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:15.910 [2024-07-24 23:17:48.095620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:15.910 [2024-07-24 23:17:48.095998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:15.910 [2024-07-24 23:17:48.096040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:15.910 [2024-07-24 23:17:48.096072] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:15.910 [2024-07-24 23:17:48.096314] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:15.910 [2024-07-24 23:17:48.096424] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:15.910 [2024-07-24 23:17:48.096435] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:15.910 [2024-07-24 23:17:48.096444] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:15.910 [2024-07-24 23:17:48.098045] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:15.910 [2024-07-24 23:17:48.106921] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:15.910 [2024-07-24 23:17:48.107380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:15.910 [2024-07-24 23:17:48.107698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:15.910 [2024-07-24 23:17:48.107742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:15.910 [2024-07-24 23:17:48.107777] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:15.910 [2024-07-24 23:17:48.108166] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:15.910 [2024-07-24 23:17:48.108352] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:15.910 [2024-07-24 23:17:48.108362] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:15.910 [2024-07-24 23:17:48.108371] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:15.910 [2024-07-24 23:17:48.109943] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:15.910 [2024-07-24 23:17:48.118698] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:15.910 [2024-07-24 23:17:48.119164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:15.910 [2024-07-24 23:17:48.119585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:15.910 [2024-07-24 23:17:48.119626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:15.910 [2024-07-24 23:17:48.119658] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:15.910 [2024-07-24 23:17:48.119778] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:15.910 [2024-07-24 23:17:48.119889] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:15.910 [2024-07-24 23:17:48.119899] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:15.910 [2024-07-24 23:17:48.119908] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:15.910 [2024-07-24 23:17:48.121600] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:15.910 [2024-07-24 23:17:48.130345] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:15.910 [2024-07-24 23:17:48.130793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:15.910 [2024-07-24 23:17:48.131115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:15.910 [2024-07-24 23:17:48.131126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:15.910 [2024-07-24 23:17:48.131136] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:15.910 [2024-07-24 23:17:48.131274] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:15.910 [2024-07-24 23:17:48.131371] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:15.910 [2024-07-24 23:17:48.131380] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:15.910 [2024-07-24 23:17:48.131389] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:15.910 [2024-07-24 23:17:48.133068] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:15.910 [2024-07-24 23:17:48.142006] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:15.910 [2024-07-24 23:17:48.142384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:15.910 [2024-07-24 23:17:48.142675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:15.910 [2024-07-24 23:17:48.142686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:15.910 [2024-07-24 23:17:48.142699] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:15.910 [2024-07-24 23:17:48.142829] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:15.910 [2024-07-24 23:17:48.142967] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:15.910 [2024-07-24 23:17:48.142976] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:15.910 [2024-07-24 23:17:48.142985] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:15.910 [2024-07-24 23:17:48.144599] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:15.910 [2024-07-24 23:17:48.153711] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:15.910 [2024-07-24 23:17:48.154199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:15.910 [2024-07-24 23:17:48.154520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:15.910 [2024-07-24 23:17:48.154560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:15.910 [2024-07-24 23:17:48.154592] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:15.910 [2024-07-24 23:17:48.155197] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:15.910 [2024-07-24 23:17:48.155589] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:15.910 [2024-07-24 23:17:48.155623] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:15.910 [2024-07-24 23:17:48.155654] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:15.910 [2024-07-24 23:17:48.157320] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:15.910 [2024-07-24 23:17:48.165507] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:15.910 [2024-07-24 23:17:48.165875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:15.910 [2024-07-24 23:17:48.166118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:15.910 [2024-07-24 23:17:48.166130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:15.910 [2024-07-24 23:17:48.166139] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:15.910 [2024-07-24 23:17:48.166277] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:15.910 [2024-07-24 23:17:48.166401] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:15.911 [2024-07-24 23:17:48.166411] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:15.911 [2024-07-24 23:17:48.166420] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:15.911 [2024-07-24 23:17:48.168069] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:15.911 [2024-07-24 23:17:48.177156] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:15.911 [2024-07-24 23:17:48.177630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:15.911 [2024-07-24 23:17:48.177946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:15.911 [2024-07-24 23:17:48.177958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:15.911 [2024-07-24 23:17:48.177968] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:15.911 [2024-07-24 23:17:48.178067] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:15.911 [2024-07-24 23:17:48.178163] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:15.911 [2024-07-24 23:17:48.178172] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:15.911 [2024-07-24 23:17:48.178181] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:15.911 [2024-07-24 23:17:48.179901] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:15.911 [2024-07-24 23:17:48.188910] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:15.911 [2024-07-24 23:17:48.189356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:15.911 [2024-07-24 23:17:48.189673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:15.911 [2024-07-24 23:17:48.189685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:15.911 [2024-07-24 23:17:48.189694] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:15.911 [2024-07-24 23:17:48.189823] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:15.911 [2024-07-24 23:17:48.189933] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:15.911 [2024-07-24 23:17:48.189942] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:15.911 [2024-07-24 23:17:48.189951] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:15.911 [2024-07-24 23:17:48.191527] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:15.911 [2024-07-24 23:17:48.200848] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:15.911 [2024-07-24 23:17:48.201244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:15.911 [2024-07-24 23:17:48.201565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:15.911 [2024-07-24 23:17:48.201596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:15.911 [2024-07-24 23:17:48.201628] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:15.911 [2024-07-24 23:17:48.202130] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:15.911 [2024-07-24 23:17:48.202474] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:15.911 [2024-07-24 23:17:48.202507] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:15.911 [2024-07-24 23:17:48.202538] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:15.911 [2024-07-24 23:17:48.204232] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:15.911 [2024-07-24 23:17:48.212568] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:15.911 [2024-07-24 23:17:48.213006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:15.911 [2024-07-24 23:17:48.213234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:15.911 [2024-07-24 23:17:48.213247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:15.911 [2024-07-24 23:17:48.213257] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:15.911 [2024-07-24 23:17:48.213382] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:15.911 [2024-07-24 23:17:48.213468] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:15.911 [2024-07-24 23:17:48.213478] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:15.911 [2024-07-24 23:17:48.213486] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:15.911 [2024-07-24 23:17:48.215074] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:15.911 [2024-07-24 23:17:48.224350] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:15.911 [2024-07-24 23:17:48.224835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:15.911 [2024-07-24 23:17:48.225148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:15.911 [2024-07-24 23:17:48.225188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:15.911 [2024-07-24 23:17:48.225220] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:15.911 [2024-07-24 23:17:48.225559] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:15.911 [2024-07-24 23:17:48.225779] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:15.911 [2024-07-24 23:17:48.225790] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:15.911 [2024-07-24 23:17:48.225799] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:15.911 [2024-07-24 23:17:48.227502] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:15.911 [2024-07-24 23:17:48.236198] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:15.911 [2024-07-24 23:17:48.236583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:15.911 [2024-07-24 23:17:48.236816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:15.911 [2024-07-24 23:17:48.236829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:15.911 [2024-07-24 23:17:48.236838] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:15.911 [2024-07-24 23:17:48.236943] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:15.911 [2024-07-24 23:17:48.237034] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:15.911 [2024-07-24 23:17:48.237043] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:15.911 [2024-07-24 23:17:48.237051] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:15.911 [2024-07-24 23:17:48.238590] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:15.911 [2024-07-24 23:17:48.248073] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:15.911 [2024-07-24 23:17:48.248465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:15.911 [2024-07-24 23:17:48.248793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:15.911 [2024-07-24 23:17:48.248835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:15.911 [2024-07-24 23:17:48.248867] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:15.911 [2024-07-24 23:17:48.249206] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:15.911 [2024-07-24 23:17:48.249597] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:15.911 [2024-07-24 23:17:48.249639] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:15.911 [2024-07-24 23:17:48.249670] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:15.911 [2024-07-24 23:17:48.251501] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:15.911 [2024-07-24 23:17:48.260013] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:15.911 [2024-07-24 23:17:48.260462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:15.911 [2024-07-24 23:17:48.260884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:15.911 [2024-07-24 23:17:48.260926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:15.911 [2024-07-24 23:17:48.260959] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:15.911 [2024-07-24 23:17:48.261085] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:15.911 [2024-07-24 23:17:48.261222] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:15.911 [2024-07-24 23:17:48.261232] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:15.911 [2024-07-24 23:17:48.261241] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:15.911 [2024-07-24 23:17:48.263042] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:15.911 [2024-07-24 23:17:48.271984] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:15.911 [2024-07-24 23:17:48.272415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:15.911 [2024-07-24 23:17:48.272706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:15.911 [2024-07-24 23:17:48.272724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:15.911 [2024-07-24 23:17:48.272734] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:15.911 [2024-07-24 23:17:48.272847] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:15.911 [2024-07-24 23:17:48.272946] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:15.911 [2024-07-24 23:17:48.272955] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:15.911 [2024-07-24 23:17:48.272964] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:15.911 [2024-07-24 23:17:48.274648] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:15.912 [2024-07-24 23:17:48.283835] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:15.912 [2024-07-24 23:17:48.284276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:15.912 [2024-07-24 23:17:48.284598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:15.912 [2024-07-24 23:17:48.284610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:15.912 [2024-07-24 23:17:48.284619] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:15.912 [2024-07-24 23:17:48.284720] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:15.912 [2024-07-24 23:17:48.284821] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:15.912 [2024-07-24 23:17:48.284830] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:15.912 [2024-07-24 23:17:48.284842] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:15.912 [2024-07-24 23:17:48.286516] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:15.912 [2024-07-24 23:17:48.295653] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:15.912 [2024-07-24 23:17:48.296134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:15.912 [2024-07-24 23:17:48.296469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:15.912 [2024-07-24 23:17:48.296510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:15.912 [2024-07-24 23:17:48.296543] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:15.912 [2024-07-24 23:17:48.296947] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:15.912 [2024-07-24 23:17:48.297059] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:15.912 [2024-07-24 23:17:48.297070] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:15.912 [2024-07-24 23:17:48.297079] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:15.912 [2024-07-24 23:17:48.298688] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:15.912 [2024-07-24 23:17:48.307374] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:15.912 [2024-07-24 23:17:48.307816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:15.912 [2024-07-24 23:17:48.308152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:15.912 [2024-07-24 23:17:48.308193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:15.912 [2024-07-24 23:17:48.308225] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:15.912 [2024-07-24 23:17:48.308564] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:15.912 [2024-07-24 23:17:48.308970] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:15.912 [2024-07-24 23:17:48.309006] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:15.912 [2024-07-24 23:17:48.309038] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:15.912 [2024-07-24 23:17:48.310774] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:15.912 [2024-07-24 23:17:48.319007] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:15.912 [2024-07-24 23:17:48.319412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:15.912 [2024-07-24 23:17:48.319680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:15.912 [2024-07-24 23:17:48.319735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:15.912 [2024-07-24 23:17:48.319769] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:15.912 [2024-07-24 23:17:48.320108] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:15.912 [2024-07-24 23:17:48.320450] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:15.912 [2024-07-24 23:17:48.320484] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:15.912 [2024-07-24 23:17:48.320516] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:15.912 [2024-07-24 23:17:48.322357] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:15.912 [2024-07-24 23:17:48.330810] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:15.912 [2024-07-24 23:17:48.331246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:15.912 [2024-07-24 23:17:48.331544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:15.912 [2024-07-24 23:17:48.331584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:15.912 [2024-07-24 23:17:48.331616] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:15.912 [2024-07-24 23:17:48.331972] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:15.912 [2024-07-24 23:17:48.332111] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:15.912 [2024-07-24 23:17:48.332122] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:15.912 [2024-07-24 23:17:48.332131] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:15.912 [2024-07-24 23:17:48.333733] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:16.171 [2024-07-24 23:17:48.342664] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:16.171 [2024-07-24 23:17:48.343098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:16.171 [2024-07-24 23:17:48.343425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:16.171 [2024-07-24 23:17:48.343439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:16.171 [2024-07-24 23:17:48.343448] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:16.171 [2024-07-24 23:17:48.343533] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:16.171 [2024-07-24 23:17:48.343689] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:16.171 [2024-07-24 23:17:48.343700] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:16.171 [2024-07-24 23:17:48.343709] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:16.171 [2024-07-24 23:17:48.345342] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:16.171 [2024-07-24 23:17:48.354387] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:16.171 [2024-07-24 23:17:48.354778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:16.171 [2024-07-24 23:17:48.355101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:16.171 [2024-07-24 23:17:48.355141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:16.171 [2024-07-24 23:17:48.355174] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:16.171 [2024-07-24 23:17:48.355615] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:16.171 [2024-07-24 23:17:48.356072] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:16.171 [2024-07-24 23:17:48.356109] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:16.171 [2024-07-24 23:17:48.356140] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:16.171 [2024-07-24 23:17:48.357911] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:16.171 [2024-07-24 23:17:48.366131] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:16.171 [2024-07-24 23:17:48.366583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:16.171 [2024-07-24 23:17:48.366954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:16.171 [2024-07-24 23:17:48.366996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:16.171 [2024-07-24 23:17:48.367028] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:16.171 [2024-07-24 23:17:48.367143] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:16.171 [2024-07-24 23:17:48.367268] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:16.172 [2024-07-24 23:17:48.367279] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:16.172 [2024-07-24 23:17:48.367288] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:16.172 [2024-07-24 23:17:48.368909] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:16.172 [2024-07-24 23:17:48.377848] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:16.172 [2024-07-24 23:17:48.378290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:16.172 [2024-07-24 23:17:48.378625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:16.172 [2024-07-24 23:17:48.378665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:16.172 [2024-07-24 23:17:48.378699] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:16.172 [2024-07-24 23:17:48.379074] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:16.172 [2024-07-24 23:17:48.379167] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:16.172 [2024-07-24 23:17:48.379178] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:16.172 [2024-07-24 23:17:48.379188] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:16.172 [2024-07-24 23:17:48.380917] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:16.172 [2024-07-24 23:17:48.389637] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:16.172 [2024-07-24 23:17:48.390095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:16.172 [2024-07-24 23:17:48.390540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:16.172 [2024-07-24 23:17:48.390580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:16.172 [2024-07-24 23:17:48.390611] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:16.172 [2024-07-24 23:17:48.391065] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:16.172 [2024-07-24 23:17:48.391257] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:16.172 [2024-07-24 23:17:48.391268] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:16.172 [2024-07-24 23:17:48.391279] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:16.172 [2024-07-24 23:17:48.392756] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:16.172 [2024-07-24 23:17:48.401315] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:16.172 [2024-07-24 23:17:48.401789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:16.172 [2024-07-24 23:17:48.402143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:16.172 [2024-07-24 23:17:48.402183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:16.172 [2024-07-24 23:17:48.402216] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:16.172 [2024-07-24 23:17:48.402706] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:16.172 [2024-07-24 23:17:48.403115] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:16.172 [2024-07-24 23:17:48.403150] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:16.172 [2024-07-24 23:17:48.403182] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:16.172 [2024-07-24 23:17:48.404819] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:16.172 [2024-07-24 23:17:48.413088] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:16.172 [2024-07-24 23:17:48.413573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:16.172 [2024-07-24 23:17:48.413909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:16.172 [2024-07-24 23:17:48.413952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:16.172 [2024-07-24 23:17:48.413984] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:16.172 [2024-07-24 23:17:48.414374] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:16.172 [2024-07-24 23:17:48.414516] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:16.172 [2024-07-24 23:17:48.414528] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:16.172 [2024-07-24 23:17:48.414536] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:16.172 [2024-07-24 23:17:48.416058] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:16.172 [2024-07-24 23:17:48.424785] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:16.172 [2024-07-24 23:17:48.425211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:16.172 [2024-07-24 23:17:48.425514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:16.172 [2024-07-24 23:17:48.425554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:16.172 [2024-07-24 23:17:48.425586] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:16.172 [2024-07-24 23:17:48.426043] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:16.172 [2024-07-24 23:17:48.426259] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:16.172 [2024-07-24 23:17:48.426270] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:16.172 [2024-07-24 23:17:48.426279] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:16.172 [2024-07-24 23:17:48.427875] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:16.172 [2024-07-24 23:17:48.436545] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:16.172 [2024-07-24 23:17:48.437033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:16.172 [2024-07-24 23:17:48.437416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:16.172 [2024-07-24 23:17:48.437451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:16.172 [2024-07-24 23:17:48.437460] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:16.172 [2024-07-24 23:17:48.437578] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:16.172 [2024-07-24 23:17:48.437737] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:16.172 [2024-07-24 23:17:48.437765] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:16.172 [2024-07-24 23:17:48.437775] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:16.172 [2024-07-24 23:17:48.439479] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:16.172 [2024-07-24 23:17:48.448348] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:16.172 [2024-07-24 23:17:48.448785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:16.172 [2024-07-24 23:17:48.449160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:16.172 [2024-07-24 23:17:48.449200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:16.172 [2024-07-24 23:17:48.449233] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:16.172 [2024-07-24 23:17:48.449452] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:16.172 [2024-07-24 23:17:48.449652] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:16.172 [2024-07-24 23:17:48.449668] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:16.172 [2024-07-24 23:17:48.449681] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:16.172 [2024-07-24 23:17:48.452160] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:16.172 [2024-07-24 23:17:48.460322] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:16.172 [2024-07-24 23:17:48.460738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:16.172 [2024-07-24 23:17:48.461089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:16.172 [2024-07-24 23:17:48.461128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:16.172 [2024-07-24 23:17:48.461161] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:16.172 [2024-07-24 23:17:48.461475] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:16.172 [2024-07-24 23:17:48.461598] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:16.172 [2024-07-24 23:17:48.461609] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:16.172 [2024-07-24 23:17:48.461618] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:16.172 [2024-07-24 23:17:48.463326] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:16.172 [2024-07-24 23:17:48.472133] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:16.172 [2024-07-24 23:17:48.472533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:16.172 [2024-07-24 23:17:48.472918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:16.172 [2024-07-24 23:17:48.472960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:16.172 [2024-07-24 23:17:48.472999] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:16.172 [2024-07-24 23:17:48.473210] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:16.172 [2024-07-24 23:17:48.473301] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:16.172 [2024-07-24 23:17:48.473311] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:16.172 [2024-07-24 23:17:48.473320] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:16.172 [2024-07-24 23:17:48.474840] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:16.172 [2024-07-24 23:17:48.483790] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:16.173 [2024-07-24 23:17:48.484241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:16.173 [2024-07-24 23:17:48.484635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:16.173 [2024-07-24 23:17:48.484675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:16.173 [2024-07-24 23:17:48.484707] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:16.173 [2024-07-24 23:17:48.485112] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:16.173 [2024-07-24 23:17:48.485335] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:16.173 [2024-07-24 23:17:48.485346] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:16.173 [2024-07-24 23:17:48.485356] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:16.173 [2024-07-24 23:17:48.486972] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:16.173 [2024-07-24 23:17:48.495485] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:16.173 [2024-07-24 23:17:48.495926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:16.173 [2024-07-24 23:17:48.496311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:16.173 [2024-07-24 23:17:48.496351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:16.173 [2024-07-24 23:17:48.496383] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:16.173 [2024-07-24 23:17:48.496624] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:16.173 [2024-07-24 23:17:48.496749] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:16.173 [2024-07-24 23:17:48.496761] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:16.173 [2024-07-24 23:17:48.496786] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:16.173 [2024-07-24 23:17:48.498378] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:16.173 [2024-07-24 23:17:48.507119] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:16.173 [2024-07-24 23:17:48.507588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:16.173 [2024-07-24 23:17:48.508007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:16.173 [2024-07-24 23:17:48.508051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:16.173 [2024-07-24 23:17:48.508083] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:16.173 [2024-07-24 23:17:48.508480] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:16.173 [2024-07-24 23:17:48.508673] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:16.173 [2024-07-24 23:17:48.508684] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:16.173 [2024-07-24 23:17:48.508693] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:16.173 [2024-07-24 23:17:48.510292] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:16.173 [2024-07-24 23:17:48.519021] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:16.173 [2024-07-24 23:17:48.519443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:16.173 [2024-07-24 23:17:48.519769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:16.173 [2024-07-24 23:17:48.519812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:16.173 [2024-07-24 23:17:48.519845] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:16.173 [2024-07-24 23:17:48.520333] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:16.173 [2024-07-24 23:17:48.520568] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:16.173 [2024-07-24 23:17:48.520579] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:16.173 [2024-07-24 23:17:48.520589] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:16.173 [2024-07-24 23:17:48.522123] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:16.173 [2024-07-24 23:17:48.530860] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:16.173 [2024-07-24 23:17:48.531296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:16.173 [2024-07-24 23:17:48.531606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:16.173 [2024-07-24 23:17:48.531646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:16.173 [2024-07-24 23:17:48.531678] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:16.173 [2024-07-24 23:17:48.532134] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:16.173 [2024-07-24 23:17:48.532315] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:16.173 [2024-07-24 23:17:48.532327] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:16.173 [2024-07-24 23:17:48.532336] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:16.173 [2024-07-24 23:17:48.533931] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:16.173 [2024-07-24 23:17:48.542803] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:16.173 [2024-07-24 23:17:48.543238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:16.173 [2024-07-24 23:17:48.543493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:16.173 [2024-07-24 23:17:48.543533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:16.173 [2024-07-24 23:17:48.543565] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:16.173 [2024-07-24 23:17:48.543918] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:16.173 [2024-07-24 23:17:48.544320] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:16.173 [2024-07-24 23:17:48.544355] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:16.173 [2024-07-24 23:17:48.544386] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:16.173 [2024-07-24 23:17:48.546231] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:16.173 [2024-07-24 23:17:48.554770] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:16.173 [2024-07-24 23:17:48.555282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:16.173 [2024-07-24 23:17:48.555640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:16.173 [2024-07-24 23:17:48.555681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:16.173 [2024-07-24 23:17:48.555732] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:16.173 [2024-07-24 23:17:48.555844] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:16.173 [2024-07-24 23:17:48.555969] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:16.173 [2024-07-24 23:17:48.555980] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:16.173 [2024-07-24 23:17:48.555989] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:16.173 [2024-07-24 23:17:48.557625] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:16.173 [2024-07-24 23:17:48.566368] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:16.173 [2024-07-24 23:17:48.566776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:16.173 [2024-07-24 23:17:48.567169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:16.173 [2024-07-24 23:17:48.567210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:16.173 [2024-07-24 23:17:48.567242] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:16.173 [2024-07-24 23:17:48.567591] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:16.173 [2024-07-24 23:17:48.567657] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:16.173 [2024-07-24 23:17:48.567667] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:16.173 [2024-07-24 23:17:48.567676] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:16.173 [2024-07-24 23:17:48.569339] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:16.173 [2024-07-24 23:17:48.578243] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:16.173 [2024-07-24 23:17:48.578649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:16.173 [2024-07-24 23:17:48.579062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:16.173 [2024-07-24 23:17:48.579105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:16.173 [2024-07-24 23:17:48.579138] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:16.173 [2024-07-24 23:17:48.579555] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:16.173 [2024-07-24 23:17:48.579667] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:16.173 [2024-07-24 23:17:48.579681] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:16.173 [2024-07-24 23:17:48.579691] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:16.173 [2024-07-24 23:17:48.581265] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:16.173 [2024-07-24 23:17:48.589974] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:16.173 [2024-07-24 23:17:48.590457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:16.173 [2024-07-24 23:17:48.590816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:16.173 [2024-07-24 23:17:48.590858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:16.174 [2024-07-24 23:17:48.590891] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:16.174 [2024-07-24 23:17:48.591281] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:16.174 [2024-07-24 23:17:48.591621] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:16.174 [2024-07-24 23:17:48.591656] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:16.174 [2024-07-24 23:17:48.591687] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:16.174 [2024-07-24 23:17:48.593348] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:16.433 [2024-07-24 23:17:48.602032] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:16.433 [2024-07-24 23:17:48.602470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:16.433 [2024-07-24 23:17:48.602878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:16.433 [2024-07-24 23:17:48.602920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:16.433 [2024-07-24 23:17:48.602952] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:16.433 [2024-07-24 23:17:48.603492] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:16.433 [2024-07-24 23:17:48.603691] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:16.433 [2024-07-24 23:17:48.603703] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:16.433 [2024-07-24 23:17:48.603712] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:16.433 [2024-07-24 23:17:48.605389] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:16.433 [2024-07-24 23:17:48.613785] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:16.433 [2024-07-24 23:17:48.614121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:16.433 [2024-07-24 23:17:48.614525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:16.433 [2024-07-24 23:17:48.614565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:16.433 [2024-07-24 23:17:48.614597] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:16.433 [2024-07-24 23:17:48.614790] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:16.433 [2024-07-24 23:17:48.614916] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:16.433 [2024-07-24 23:17:48.614927] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:16.433 [2024-07-24 23:17:48.614939] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:16.433 [2024-07-24 23:17:48.616498] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:16.433 [2024-07-24 23:17:48.625510] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:16.433 [2024-07-24 23:17:48.625972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:16.433 [2024-07-24 23:17:48.626367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:16.433 [2024-07-24 23:17:48.626408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:16.433 [2024-07-24 23:17:48.626440] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:16.433 [2024-07-24 23:17:48.626590] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:16.433 [2024-07-24 23:17:48.626707] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:16.433 [2024-07-24 23:17:48.626724] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:16.433 [2024-07-24 23:17:48.626735] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:16.433 [2024-07-24 23:17:48.628436] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:16.433 [2024-07-24 23:17:48.637309] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:16.433 [2024-07-24 23:17:48.637764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:16.433 [2024-07-24 23:17:48.638124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:16.433 [2024-07-24 23:17:48.638165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:16.434 [2024-07-24 23:17:48.638198] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:16.434 [2024-07-24 23:17:48.638642] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:16.434 [2024-07-24 23:17:48.638812] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:16.434 [2024-07-24 23:17:48.638824] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:16.434 [2024-07-24 23:17:48.638832] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:16.434 [2024-07-24 23:17:48.640422] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:16.434 [2024-07-24 23:17:48.648979] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:16.434 [2024-07-24 23:17:48.649411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.434 [2024-07-24 23:17:48.649768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.434 [2024-07-24 23:17:48.649811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:16.434 [2024-07-24 23:17:48.649844] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:16.434 [2024-07-24 23:17:48.650092] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:16.434 [2024-07-24 23:17:48.650162] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:16.434 [2024-07-24 23:17:48.650172] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:16.434 [2024-07-24 23:17:48.650182] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:16.434 [2024-07-24 23:17:48.651901] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:16.434 [2024-07-24 23:17:48.660880] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:16.434 [2024-07-24 23:17:48.661306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.434 [2024-07-24 23:17:48.661559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.434 [2024-07-24 23:17:48.661600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:16.434 [2024-07-24 23:17:48.661632] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:16.434 [2024-07-24 23:17:48.662135] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:16.434 [2024-07-24 23:17:48.662529] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:16.434 [2024-07-24 23:17:48.662564] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:16.434 [2024-07-24 23:17:48.662596] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:16.434 [2024-07-24 23:17:48.664398] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:16.434 [2024-07-24 23:17:48.672678] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:16.434 [2024-07-24 23:17:48.673142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.434 [2024-07-24 23:17:48.673391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.434 [2024-07-24 23:17:48.673431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:16.434 [2024-07-24 23:17:48.673464] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:16.434 [2024-07-24 23:17:48.673803] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:16.434 [2024-07-24 23:17:48.673937] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:16.434 [2024-07-24 23:17:48.673948] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:16.434 [2024-07-24 23:17:48.673958] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:16.434 [2024-07-24 23:17:48.675507] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:16.434 [2024-07-24 23:17:48.684365] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:16.434 [2024-07-24 23:17:48.684866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.434 [2024-07-24 23:17:48.685203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.434 [2024-07-24 23:17:48.685243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:16.434 [2024-07-24 23:17:48.685276] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:16.434 [2024-07-24 23:17:48.685615] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:16.434 [2024-07-24 23:17:48.685809] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:16.434 [2024-07-24 23:17:48.685821] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:16.434 [2024-07-24 23:17:48.685830] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:16.434 [2024-07-24 23:17:48.687381] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:16.434 [2024-07-24 23:17:48.696316] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:16.434 [2024-07-24 23:17:48.696770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.434 [2024-07-24 23:17:48.697068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.434 [2024-07-24 23:17:48.697108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:16.434 [2024-07-24 23:17:48.697141] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:16.434 [2024-07-24 23:17:48.697580] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:16.434 [2024-07-24 23:17:48.697746] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:16.434 [2024-07-24 23:17:48.697757] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:16.434 [2024-07-24 23:17:48.697767] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:16.434 [2024-07-24 23:17:48.699392] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:16.434 [2024-07-24 23:17:48.707913] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:16.434 [2024-07-24 23:17:48.708220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.434 [2024-07-24 23:17:48.708467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.434 [2024-07-24 23:17:48.708481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:16.434 [2024-07-24 23:17:48.708491] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:16.434 [2024-07-24 23:17:48.708582] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:16.434 [2024-07-24 23:17:48.708661] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:16.434 [2024-07-24 23:17:48.708672] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:16.434 [2024-07-24 23:17:48.708680] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:16.434 [2024-07-24 23:17:48.710447] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:16.434 [2024-07-24 23:17:48.719770] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:16.434 [2024-07-24 23:17:48.720199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.434 [2024-07-24 23:17:48.720508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.434 [2024-07-24 23:17:48.720548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:16.434 [2024-07-24 23:17:48.720581] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:16.434 [2024-07-24 23:17:48.720751] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:16.434 [2024-07-24 23:17:48.720857] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:16.434 [2024-07-24 23:17:48.720868] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:16.434 [2024-07-24 23:17:48.720877] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:16.434 [2024-07-24 23:17:48.722593] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:16.434 [2024-07-24 23:17:48.731748] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:16.434 [2024-07-24 23:17:48.732096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.434 [2024-07-24 23:17:48.732381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.434 [2024-07-24 23:17:48.732426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:16.434 [2024-07-24 23:17:48.732460] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:16.434 [2024-07-24 23:17:48.732914] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:16.434 [2024-07-24 23:17:48.733031] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:16.434 [2024-07-24 23:17:48.733043] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:16.434 [2024-07-24 23:17:48.733052] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:16.434 [2024-07-24 23:17:48.734770] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:16.434 [2024-07-24 23:17:48.743522] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:16.434 [2024-07-24 23:17:48.743996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.435 [2024-07-24 23:17:48.744290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.435 [2024-07-24 23:17:48.744304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:16.435 [2024-07-24 23:17:48.744313] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:16.435 [2024-07-24 23:17:48.744424] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:16.435 [2024-07-24 23:17:48.744521] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:16.435 [2024-07-24 23:17:48.744531] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:16.435 [2024-07-24 23:17:48.744540] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:16.435 [2024-07-24 23:17:48.746195] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:16.435 [2024-07-24 23:17:48.755191] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:16.435 [2024-07-24 23:17:48.755536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.435 [2024-07-24 23:17:48.755791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.435 [2024-07-24 23:17:48.755832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:16.435 [2024-07-24 23:17:48.755865] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:16.435 [2024-07-24 23:17:48.756304] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:16.435 [2024-07-24 23:17:48.756756] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:16.435 [2024-07-24 23:17:48.756791] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:16.435 [2024-07-24 23:17:48.756822] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:16.435 [2024-07-24 23:17:48.758684] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:16.435 [2024-07-24 23:17:48.767110] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:16.435 [2024-07-24 23:17:48.767572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.435 [2024-07-24 23:17:48.767831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.435 [2024-07-24 23:17:48.767873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:16.435 [2024-07-24 23:17:48.767913] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:16.435 [2024-07-24 23:17:48.768277] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:16.435 [2024-07-24 23:17:48.768344] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:16.435 [2024-07-24 23:17:48.768354] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:16.435 [2024-07-24 23:17:48.768362] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:16.435 [2024-07-24 23:17:48.769888] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:16.435 [2024-07-24 23:17:48.779164] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:16.435 [2024-07-24 23:17:48.779600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.435 [2024-07-24 23:17:48.779916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.435 [2024-07-24 23:17:48.779930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:16.435 [2024-07-24 23:17:48.779941] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:16.435 [2024-07-24 23:17:48.780040] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:16.435 [2024-07-24 23:17:48.780139] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:16.435 [2024-07-24 23:17:48.780149] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:16.435 [2024-07-24 23:17:48.780159] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:16.435 [2024-07-24 23:17:48.781920] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:16.435 [2024-07-24 23:17:48.791232] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:16.435 [2024-07-24 23:17:48.791671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.435 [2024-07-24 23:17:48.791970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.435 [2024-07-24 23:17:48.791984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:16.435 [2024-07-24 23:17:48.791993] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:16.435 [2024-07-24 23:17:48.792115] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:16.435 [2024-07-24 23:17:48.792259] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:16.435 [2024-07-24 23:17:48.792271] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:16.435 [2024-07-24 23:17:48.792279] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:16.435 [2024-07-24 23:17:48.793886] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:16.435 [2024-07-24 23:17:48.803072] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:16.435 [2024-07-24 23:17:48.803516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.435 [2024-07-24 23:17:48.803880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.435 [2024-07-24 23:17:48.803923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:16.435 [2024-07-24 23:17:48.803968] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:16.435 [2024-07-24 23:17:48.804082] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:16.435 [2024-07-24 23:17:48.804193] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:16.435 [2024-07-24 23:17:48.804204] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:16.435 [2024-07-24 23:17:48.804213] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:16.435 [2024-07-24 23:17:48.805699] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:16.435 [2024-07-24 23:17:48.814914] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:16.435 [2024-07-24 23:17:48.815346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.435 [2024-07-24 23:17:48.815635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.435 [2024-07-24 23:17:48.815675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:16.435 [2024-07-24 23:17:48.815709] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:16.435 [2024-07-24 23:17:48.815946] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:16.435 [2024-07-24 23:17:48.816092] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:16.435 [2024-07-24 23:17:48.816103] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:16.435 [2024-07-24 23:17:48.816111] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:16.435 [2024-07-24 23:17:48.817557] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:16.435 [2024-07-24 23:17:48.826647] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:16.435 [2024-07-24 23:17:48.827060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.435 [2024-07-24 23:17:48.827367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.435 [2024-07-24 23:17:48.827409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:16.435 [2024-07-24 23:17:48.827441] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:16.435 [2024-07-24 23:17:48.827894] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:16.435 [2024-07-24 23:17:48.828240] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:16.435 [2024-07-24 23:17:48.828251] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:16.435 [2024-07-24 23:17:48.828260] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:16.435 [2024-07-24 23:17:48.829837] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:16.435 [2024-07-24 23:17:48.838404] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:16.435 [2024-07-24 23:17:48.838807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.435 [2024-07-24 23:17:48.839139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.435 [2024-07-24 23:17:48.839179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:16.435 [2024-07-24 23:17:48.839213] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:16.436 [2024-07-24 23:17:48.839344] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:16.436 [2024-07-24 23:17:48.839451] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:16.436 [2024-07-24 23:17:48.839462] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:16.436 [2024-07-24 23:17:48.839471] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:16.436 [2024-07-24 23:17:48.841130] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:16.436 [2024-07-24 23:17:48.849968] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:16.436 [2024-07-24 23:17:48.850347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.436 [2024-07-24 23:17:48.850615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.436 [2024-07-24 23:17:48.850655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:16.436 [2024-07-24 23:17:48.850688] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:16.436 [2024-07-24 23:17:48.850997] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:16.436 [2024-07-24 23:17:48.851082] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:16.436 [2024-07-24 23:17:48.851092] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:16.436 [2024-07-24 23:17:48.851102] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:16.436 [2024-07-24 23:17:48.852691] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:16.436 [2024-07-24 23:17:48.861826] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:16.695 [2024-07-24 23:17:48.862138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.695 [2024-07-24 23:17:48.862338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.695 [2024-07-24 23:17:48.862350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:16.695 [2024-07-24 23:17:48.862360] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:16.695 [2024-07-24 23:17:48.862459] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:16.695 [2024-07-24 23:17:48.862616] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:16.695 [2024-07-24 23:17:48.862628] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:16.695 [2024-07-24 23:17:48.862638] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:16.695 [2024-07-24 23:17:48.864345] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:16.696 [2024-07-24 23:17:48.873722] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:16.696 [2024-07-24 23:17:48.874091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.696 [2024-07-24 23:17:48.874410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.696 [2024-07-24 23:17:48.874450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:16.696 [2024-07-24 23:17:48.874483] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:16.696 [2024-07-24 23:17:48.874837] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:16.696 [2024-07-24 23:17:48.875231] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:16.696 [2024-07-24 23:17:48.875261] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:16.696 [2024-07-24 23:17:48.875270] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:16.696 [2024-07-24 23:17:48.876931] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:16.696 [2024-07-24 23:17:48.885571] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:16.696 [2024-07-24 23:17:48.885992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.696 [2024-07-24 23:17:48.886239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.696 [2024-07-24 23:17:48.886253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:16.696 [2024-07-24 23:17:48.886264] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:16.696 [2024-07-24 23:17:48.886407] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:16.696 [2024-07-24 23:17:48.886550] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:16.696 [2024-07-24 23:17:48.886561] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:16.696 [2024-07-24 23:17:48.886571] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:16.696 [2024-07-24 23:17:48.888482] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:16.696 [2024-07-24 23:17:48.897385] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:16.696 [2024-07-24 23:17:48.897797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.696 [2024-07-24 23:17:48.898006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.696 [2024-07-24 23:17:48.898047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:16.696 [2024-07-24 23:17:48.898079] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:16.696 [2024-07-24 23:17:48.898489] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:16.696 [2024-07-24 23:17:48.898589] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:16.696 [2024-07-24 23:17:48.898601] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:16.696 [2024-07-24 23:17:48.898611] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:16.696 [2024-07-24 23:17:48.900357] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:16.696 [2024-07-24 23:17:48.909157] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:16.696 [2024-07-24 23:17:48.909615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.696 [2024-07-24 23:17:48.909995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.696 [2024-07-24 23:17:48.910038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:16.696 [2024-07-24 23:17:48.910070] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:16.696 [2024-07-24 23:17:48.910264] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:16.696 [2024-07-24 23:17:48.910349] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:16.696 [2024-07-24 23:17:48.910361] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:16.696 [2024-07-24 23:17:48.910376] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:16.696 [2024-07-24 23:17:48.912136] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:16.696 [2024-07-24 23:17:48.921275] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:16.696 [2024-07-24 23:17:48.921732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.696 [2024-07-24 23:17:48.922063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.696 [2024-07-24 23:17:48.922104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:16.696 [2024-07-24 23:17:48.922141] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:16.696 [2024-07-24 23:17:48.922240] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:16.696 [2024-07-24 23:17:48.922354] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:16.696 [2024-07-24 23:17:48.922366] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:16.696 [2024-07-24 23:17:48.922375] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:16.696 [2024-07-24 23:17:48.924120] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:16.696 [2024-07-24 23:17:48.933179] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:16.696 [2024-07-24 23:17:48.933582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.696 [2024-07-24 23:17:48.933924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.696 [2024-07-24 23:17:48.933966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:16.696 [2024-07-24 23:17:48.933998] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:16.696 [2024-07-24 23:17:48.934338] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:16.696 [2024-07-24 23:17:48.934722] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:16.696 [2024-07-24 23:17:48.934734] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:16.696 [2024-07-24 23:17:48.934743] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:16.696 [2024-07-24 23:17:48.937216] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:16.696 [2024-07-24 23:17:48.945429] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:16.696 [2024-07-24 23:17:48.945808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.696 [2024-07-24 23:17:48.946167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.696 [2024-07-24 23:17:48.946207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:16.696 [2024-07-24 23:17:48.946239] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:16.696 [2024-07-24 23:17:48.946402] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:16.696 [2024-07-24 23:17:48.946521] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:16.696 [2024-07-24 23:17:48.946532] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:16.696 [2024-07-24 23:17:48.946540] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:16.696 [2024-07-24 23:17:48.948082] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:16.696 [2024-07-24 23:17:48.957125] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:16.696 [2024-07-24 23:17:48.957477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.696 [2024-07-24 23:17:48.957683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.696 [2024-07-24 23:17:48.957741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:16.696 [2024-07-24 23:17:48.957775] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:16.696 [2024-07-24 23:17:48.958164] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:16.696 [2024-07-24 23:17:48.958340] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:16.696 [2024-07-24 23:17:48.958351] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:16.696 [2024-07-24 23:17:48.958360] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:16.696 [2024-07-24 23:17:48.959867] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:16.696 [2024-07-24 23:17:48.968758] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:16.696 [2024-07-24 23:17:48.969119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.696 [2024-07-24 23:17:48.969334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.696 [2024-07-24 23:17:48.969374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:16.696 [2024-07-24 23:17:48.969407] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:16.696 [2024-07-24 23:17:48.969696] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:16.696 [2024-07-24 23:17:48.969853] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:16.696 [2024-07-24 23:17:48.969865] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:16.696 [2024-07-24 23:17:48.969874] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:16.696 [2024-07-24 23:17:48.971525] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:16.696 [2024-07-24 23:17:48.980526] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:16.696 [2024-07-24 23:17:48.980844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.696 [2024-07-24 23:17:48.981167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.696 [2024-07-24 23:17:48.981207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:16.697 [2024-07-24 23:17:48.981239] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:16.697 [2024-07-24 23:17:48.981676] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:16.697 [2024-07-24 23:17:48.981842] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:16.697 [2024-07-24 23:17:48.981854] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:16.697 [2024-07-24 23:17:48.981864] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:16.697 [2024-07-24 23:17:48.983583] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:16.697 [2024-07-24 23:17:48.992260] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:16.697 [2024-07-24 23:17:48.992668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.697 [2024-07-24 23:17:48.993064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.697 [2024-07-24 23:17:48.993106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:16.697 [2024-07-24 23:17:48.993138] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:16.697 [2024-07-24 23:17:48.993579] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:16.697 [2024-07-24 23:17:48.993822] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:16.697 [2024-07-24 23:17:48.993834] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:16.697 [2024-07-24 23:17:48.993843] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:16.697 [2024-07-24 23:17:48.995484] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:16.697 [2024-07-24 23:17:49.004009] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:16.697 [2024-07-24 23:17:49.004431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.697 [2024-07-24 23:17:49.004709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.697 [2024-07-24 23:17:49.004763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:16.697 [2024-07-24 23:17:49.004795] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:16.697 [2024-07-24 23:17:49.005239] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:16.697 [2024-07-24 23:17:49.005398] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:16.697 [2024-07-24 23:17:49.005413] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:16.697 [2024-07-24 23:17:49.005426] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:16.697 [2024-07-24 23:17:49.007793] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:16.697 [2024-07-24 23:17:49.016148] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:16.697 [2024-07-24 23:17:49.016501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.697 [2024-07-24 23:17:49.016699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.697 [2024-07-24 23:17:49.016750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:16.697 [2024-07-24 23:17:49.016783] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:16.697 [2024-07-24 23:17:49.017122] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:16.697 [2024-07-24 23:17:49.017517] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:16.697 [2024-07-24 23:17:49.017528] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:16.697 [2024-07-24 23:17:49.017536] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:16.697 [2024-07-24 23:17:49.019054] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:16.697 [2024-07-24 23:17:49.027895] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:16.697 [2024-07-24 23:17:49.028324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.697 [2024-07-24 23:17:49.028525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.697 [2024-07-24 23:17:49.028539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:16.697 [2024-07-24 23:17:49.028548] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:16.697 [2024-07-24 23:17:49.028634] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:16.697 [2024-07-24 23:17:49.028782] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:16.697 [2024-07-24 23:17:49.028793] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:16.697 [2024-07-24 23:17:49.028802] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:16.697 [2024-07-24 23:17:49.030517] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:16.697 [2024-07-24 23:17:49.039856] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:16.697 [2024-07-24 23:17:49.040283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.697 [2024-07-24 23:17:49.040590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.697 [2024-07-24 23:17:49.040631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:16.697 [2024-07-24 23:17:49.040665] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:16.697 [2024-07-24 23:17:49.040845] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:16.697 [2024-07-24 23:17:49.040961] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:16.697 [2024-07-24 23:17:49.040972] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:16.697 [2024-07-24 23:17:49.040982] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:16.697 [2024-07-24 23:17:49.042727] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:16.697 [2024-07-24 23:17:49.051710] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:16.697 [2024-07-24 23:17:49.052153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.697 [2024-07-24 23:17:49.052396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.697 [2024-07-24 23:17:49.052438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:16.697 [2024-07-24 23:17:49.052471] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:16.697 [2024-07-24 23:17:49.052823] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:16.697 [2024-07-24 23:17:49.052965] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:16.697 [2024-07-24 23:17:49.052976] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:16.697 [2024-07-24 23:17:49.052986] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:16.697 [2024-07-24 23:17:49.054725] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:16.697 [2024-07-24 23:17:49.063315] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:16.697 [2024-07-24 23:17:49.063636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.697 [2024-07-24 23:17:49.063969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.697 [2024-07-24 23:17:49.064020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:16.697 [2024-07-24 23:17:49.064052] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:16.697 [2024-07-24 23:17:49.064443] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:16.697 [2024-07-24 23:17:49.064617] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:16.697 [2024-07-24 23:17:49.064629] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:16.697 [2024-07-24 23:17:49.064638] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:16.697 [2024-07-24 23:17:49.066332] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:16.697 [2024-07-24 23:17:49.075126] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:16.697 [2024-07-24 23:17:49.075503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.697 [2024-07-24 23:17:49.075855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.697 [2024-07-24 23:17:49.075898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:16.697 [2024-07-24 23:17:49.075930] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:16.697 [2024-07-24 23:17:49.076053] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:16.697 [2024-07-24 23:17:49.076144] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:16.697 [2024-07-24 23:17:49.076154] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:16.697 [2024-07-24 23:17:49.076163] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:16.697 [2024-07-24 23:17:49.077883] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:16.697 [2024-07-24 23:17:49.087090] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:16.697 [2024-07-24 23:17:49.087534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.697 [2024-07-24 23:17:49.087780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.697 [2024-07-24 23:17:49.087795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:16.697 [2024-07-24 23:17:49.087805] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:16.697 [2024-07-24 23:17:49.087919] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:16.697 [2024-07-24 23:17:49.088048] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:16.698 [2024-07-24 23:17:49.088059] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:16.698 [2024-07-24 23:17:49.088069] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:16.698 [2024-07-24 23:17:49.089796] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:16.698 [2024-07-24 23:17:49.098981] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:16.698 [2024-07-24 23:17:49.099295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.698 [2024-07-24 23:17:49.099498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.698 [2024-07-24 23:17:49.099511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:16.698 [2024-07-24 23:17:49.099524] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:16.698 [2024-07-24 23:17:49.099638] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:16.698 [2024-07-24 23:17:49.099772] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:16.698 [2024-07-24 23:17:49.099784] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:16.698 [2024-07-24 23:17:49.099794] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:16.698 [2024-07-24 23:17:49.101448] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:16.698 [2024-07-24 23:17:49.110845] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:16.698 [2024-07-24 23:17:49.111310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.698 [2024-07-24 23:17:49.111604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.698 [2024-07-24 23:17:49.111618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:16.698 [2024-07-24 23:17:49.111628] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:16.698 [2024-07-24 23:17:49.111719] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:16.698 [2024-07-24 23:17:49.111862] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:16.698 [2024-07-24 23:17:49.111873] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:16.698 [2024-07-24 23:17:49.111883] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:16.698 [2024-07-24 23:17:49.113383] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:16.698 [2024-07-24 23:17:49.122901] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:16.698 [2024-07-24 23:17:49.123276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.698 [2024-07-24 23:17:49.123570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.698 [2024-07-24 23:17:49.123584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:16.698 [2024-07-24 23:17:49.123593] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:16.698 [2024-07-24 23:17:49.123693] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:16.698 [2024-07-24 23:17:49.123783] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:16.698 [2024-07-24 23:17:49.123795] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:16.698 [2024-07-24 23:17:49.123804] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:16.957 [2024-07-24 23:17:49.125641] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:16.957 [2024-07-24 23:17:49.134682] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:16.957 [2024-07-24 23:17:49.135111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.957 [2024-07-24 23:17:49.135285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.957 [2024-07-24 23:17:49.135298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:16.957 [2024-07-24 23:17:49.135308] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:16.957 [2024-07-24 23:17:49.135439] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:16.957 [2024-07-24 23:17:49.135510] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:16.958 [2024-07-24 23:17:49.135520] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:16.958 [2024-07-24 23:17:49.135529] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:16.958 [2024-07-24 23:17:49.137128] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:16.958 [2024-07-24 23:17:49.146685] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:16.958 [2024-07-24 23:17:49.147191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.958 [2024-07-24 23:17:49.147506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.958 [2024-07-24 23:17:49.147519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:16.958 [2024-07-24 23:17:49.147529] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:16.958 [2024-07-24 23:17:49.147600] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:16.958 [2024-07-24 23:17:49.147749] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:16.958 [2024-07-24 23:17:49.147760] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:16.958 [2024-07-24 23:17:49.147769] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:16.958 [2024-07-24 23:17:49.149566] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:16.958 [2024-07-24 23:17:49.158645] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:16.958 [2024-07-24 23:17:49.159031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.958 [2024-07-24 23:17:49.159274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.958 [2024-07-24 23:17:49.159288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:16.958 [2024-07-24 23:17:49.159298] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:16.958 [2024-07-24 23:17:49.159426] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:16.958 [2024-07-24 23:17:49.159540] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:16.958 [2024-07-24 23:17:49.159552] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:16.958 [2024-07-24 23:17:49.159561] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:16.958 [2024-07-24 23:17:49.161320] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:16.958 [2024-07-24 23:17:49.170272] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:16.958 [2024-07-24 23:17:49.170721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.958 [2024-07-24 23:17:49.170968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.958 [2024-07-24 23:17:49.170982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:16.958 [2024-07-24 23:17:49.170992] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:16.958 [2024-07-24 23:17:49.171120] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:16.958 [2024-07-24 23:17:49.171209] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:16.958 [2024-07-24 23:17:49.171219] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:16.958 [2024-07-24 23:17:49.171228] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:16.958 [2024-07-24 23:17:49.172873] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:16.958 [2024-07-24 23:17:49.182162] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:16.958 [2024-07-24 23:17:49.182565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.958 [2024-07-24 23:17:49.182857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.958 [2024-07-24 23:17:49.182871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:16.958 [2024-07-24 23:17:49.182882] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:16.958 [2024-07-24 23:17:49.182997] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:16.958 [2024-07-24 23:17:49.183124] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:16.958 [2024-07-24 23:17:49.183135] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:16.958 [2024-07-24 23:17:49.183144] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:16.958 [2024-07-24 23:17:49.184818] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:16.958 [2024-07-24 23:17:49.193932] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:16.958 [2024-07-24 23:17:49.194296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.958 [2024-07-24 23:17:49.194535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.958 [2024-07-24 23:17:49.194549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:16.958 [2024-07-24 23:17:49.194558] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:16.958 [2024-07-24 23:17:49.194672] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:16.958 [2024-07-24 23:17:49.194777] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:16.958 [2024-07-24 23:17:49.194788] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:16.958 [2024-07-24 23:17:49.194797] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:16.958 [2024-07-24 23:17:49.196508] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:16.958 [2024-07-24 23:17:49.205650] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:16.958 [2024-07-24 23:17:49.206010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.958 [2024-07-24 23:17:49.206309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.958 [2024-07-24 23:17:49.206322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:16.958 [2024-07-24 23:17:49.206332] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:16.958 [2024-07-24 23:17:49.206488] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:16.958 [2024-07-24 23:17:49.206587] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:16.958 [2024-07-24 23:17:49.206601] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:16.958 [2024-07-24 23:17:49.206610] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:16.958 [2024-07-24 23:17:49.208284] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:16.958 [2024-07-24 23:17:49.217453] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:16.958 [2024-07-24 23:17:49.217924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.958 [2024-07-24 23:17:49.218241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.958 [2024-07-24 23:17:49.218255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:16.958 [2024-07-24 23:17:49.218265] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:16.958 [2024-07-24 23:17:49.218393] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:16.958 [2024-07-24 23:17:49.218507] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:16.958 [2024-07-24 23:17:49.218519] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:16.958 [2024-07-24 23:17:49.218528] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:16.958 [2024-07-24 23:17:49.220329] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:16.958 [2024-07-24 23:17:49.229292] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:16.958 [2024-07-24 23:17:49.229767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.958 [2024-07-24 23:17:49.230023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.958 [2024-07-24 23:17:49.230038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:16.958 [2024-07-24 23:17:49.230048] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:16.958 [2024-07-24 23:17:49.230162] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:16.958 [2024-07-24 23:17:49.230235] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:16.958 [2024-07-24 23:17:49.230245] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:16.958 [2024-07-24 23:17:49.230254] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:16.958 [2024-07-24 23:17:49.232043] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:16.958 [2024-07-24 23:17:49.241301] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:16.958 [2024-07-24 23:17:49.241646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.958 [2024-07-24 23:17:49.241848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.958 [2024-07-24 23:17:49.241862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:16.958 [2024-07-24 23:17:49.241872] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:16.958 [2024-07-24 23:17:49.242000] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:16.958 [2024-07-24 23:17:49.242114] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:16.958 [2024-07-24 23:17:49.242126] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:16.958 [2024-07-24 23:17:49.242139] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:16.959 [2024-07-24 23:17:49.243696] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:16.959 [2024-07-24 23:17:49.253143] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:16.959 [2024-07-24 23:17:49.253505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.959 [2024-07-24 23:17:49.253883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.959 [2024-07-24 23:17:49.253925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:16.959 [2024-07-24 23:17:49.253957] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:16.959 [2024-07-24 23:17:49.254139] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:16.959 [2024-07-24 23:17:49.254282] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:16.959 [2024-07-24 23:17:49.254294] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:16.959 [2024-07-24 23:17:49.254304] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:16.959 [2024-07-24 23:17:49.255962] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:16.959 [2024-07-24 23:17:49.265184] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:16.959 [2024-07-24 23:17:49.265617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.959 [2024-07-24 23:17:49.265971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.959 [2024-07-24 23:17:49.266013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:16.959 [2024-07-24 23:17:49.266046] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:16.959 [2024-07-24 23:17:49.266151] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:16.959 [2024-07-24 23:17:49.266237] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:16.959 [2024-07-24 23:17:49.266249] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:16.959 [2024-07-24 23:17:49.266258] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:16.959 [2024-07-24 23:17:49.268106] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:16.959 [2024-07-24 23:17:49.276974] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:16.959 [2024-07-24 23:17:49.277409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.959 [2024-07-24 23:17:49.277710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.959 [2024-07-24 23:17:49.277767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:16.959 [2024-07-24 23:17:49.277799] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:16.959 [2024-07-24 23:17:49.278059] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:16.959 [2024-07-24 23:17:49.278129] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:16.959 [2024-07-24 23:17:49.278139] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:16.959 [2024-07-24 23:17:49.278148] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:16.959 [2024-07-24 23:17:49.280623] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:16.959 [2024-07-24 23:17:49.289349] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:16.959 [2024-07-24 23:17:49.289700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.959 [2024-07-24 23:17:49.289951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.959 [2024-07-24 23:17:49.289965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:16.959 [2024-07-24 23:17:49.289975] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:16.959 [2024-07-24 23:17:49.290103] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:16.959 [2024-07-24 23:17:49.290245] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:16.959 [2024-07-24 23:17:49.290256] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:16.959 [2024-07-24 23:17:49.290266] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:16.959 [2024-07-24 23:17:49.292043] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:16.959 [2024-07-24 23:17:49.301269] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:16.959 [2024-07-24 23:17:49.301732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.959 [2024-07-24 23:17:49.302002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.959 [2024-07-24 23:17:49.302044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:16.959 [2024-07-24 23:17:49.302076] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:16.959 [2024-07-24 23:17:49.302266] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:16.959 [2024-07-24 23:17:49.302350] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:16.959 [2024-07-24 23:17:49.302361] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:16.959 [2024-07-24 23:17:49.302370] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:16.959 [2024-07-24 23:17:49.303975] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:16.959 [2024-07-24 23:17:49.313150] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:16.959 [2024-07-24 23:17:49.313581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.959 [2024-07-24 23:17:49.313913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.959 [2024-07-24 23:17:49.313956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:16.959 [2024-07-24 23:17:49.313989] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:16.959 [2024-07-24 23:17:49.314192] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:16.959 [2024-07-24 23:17:49.314340] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:16.959 [2024-07-24 23:17:49.314352] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:16.959 [2024-07-24 23:17:49.314361] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:16.959 [2024-07-24 23:17:49.316018] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:16.959 [2024-07-24 23:17:49.324713] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:16.959 [2024-07-24 23:17:49.325086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.959 [2024-07-24 23:17:49.325465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.959 [2024-07-24 23:17:49.325506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:16.959 [2024-07-24 23:17:49.325538] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:16.959 [2024-07-24 23:17:49.325944] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:16.959 [2024-07-24 23:17:49.326153] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:16.959 [2024-07-24 23:17:49.326165] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:16.959 [2024-07-24 23:17:49.326173] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:16.959 [2024-07-24 23:17:49.327793] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:16.959 [2024-07-24 23:17:49.336576] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:16.959 [2024-07-24 23:17:49.336998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.959 [2024-07-24 23:17:49.337256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.959 [2024-07-24 23:17:49.337297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:16.959 [2024-07-24 23:17:49.337330] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:16.959 [2024-07-24 23:17:49.337661] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:16.959 [2024-07-24 23:17:49.337780] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:16.959 [2024-07-24 23:17:49.337792] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:16.959 [2024-07-24 23:17:49.337801] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:16.959 [2024-07-24 23:17:49.339428] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:16.959 [2024-07-24 23:17:49.348261] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:16.959 [2024-07-24 23:17:49.348723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.959 [2024-07-24 23:17:49.348972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.959 [2024-07-24 23:17:49.349014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:16.959 [2024-07-24 23:17:49.349046] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:16.959 [2024-07-24 23:17:49.349336] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:16.959 [2024-07-24 23:17:49.349461] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:16.959 [2024-07-24 23:17:49.349472] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:16.959 [2024-07-24 23:17:49.349481] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:16.959 [2024-07-24 23:17:49.351214] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:16.960 [2024-07-24 23:17:49.360104] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:16.960 [2024-07-24 23:17:49.360543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.960 [2024-07-24 23:17:49.360930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.960 [2024-07-24 23:17:49.360974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:16.960 [2024-07-24 23:17:49.361006] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:16.960 [2024-07-24 23:17:49.361199] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:16.960 [2024-07-24 23:17:49.361319] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:16.960 [2024-07-24 23:17:49.361329] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:16.960 [2024-07-24 23:17:49.361338] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:16.960 [2024-07-24 23:17:49.362866] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:16.960 [2024-07-24 23:17:49.371841] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:16.960 [2024-07-24 23:17:49.372308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.960 [2024-07-24 23:17:49.372664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.960 [2024-07-24 23:17:49.372704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:16.960 [2024-07-24 23:17:49.372754] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:16.960 [2024-07-24 23:17:49.373143] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:16.960 [2024-07-24 23:17:49.373286] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:16.960 [2024-07-24 23:17:49.373298] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:16.960 [2024-07-24 23:17:49.373307] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:16.960 [2024-07-24 23:17:49.375059] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:16.960 [2024-07-24 23:17:49.383888] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:16.960 [2024-07-24 23:17:49.384303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.960 [2024-07-24 23:17:49.384542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.960 [2024-07-24 23:17:49.384555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:16.960 [2024-07-24 23:17:49.384565] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:16.960 [2024-07-24 23:17:49.384649] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:16.960 [2024-07-24 23:17:49.384783] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:16.960 [2024-07-24 23:17:49.384794] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:16.960 [2024-07-24 23:17:49.384803] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.219 [2024-07-24 23:17:49.386443] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:17.219 [2024-07-24 23:17:49.395665] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.219 [2024-07-24 23:17:49.396088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.219 [2024-07-24 23:17:49.396334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.219 [2024-07-24 23:17:49.396376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:17.219 [2024-07-24 23:17:49.396416] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:17.219 [2024-07-24 23:17:49.396823] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:17.219 [2024-07-24 23:17:49.397218] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.219 [2024-07-24 23:17:49.397253] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.219 [2024-07-24 23:17:49.397284] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.219 [2024-07-24 23:17:49.399317] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:17.219 [2024-07-24 23:17:49.407444] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.219 [2024-07-24 23:17:49.407797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.219 [2024-07-24 23:17:49.408117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.219 [2024-07-24 23:17:49.408157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:17.219 [2024-07-24 23:17:49.408190] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:17.219 [2024-07-24 23:17:49.408593] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:17.219 [2024-07-24 23:17:49.408699] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.219 [2024-07-24 23:17:49.408709] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.219 [2024-07-24 23:17:49.408724] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.219 [2024-07-24 23:17:49.410332] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:17.219 [2024-07-24 23:17:49.418983] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.219 [2024-07-24 23:17:49.419338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.219 [2024-07-24 23:17:49.419640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.219 [2024-07-24 23:17:49.419682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:17.219 [2024-07-24 23:17:49.419729] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:17.219 [2024-07-24 23:17:49.420170] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:17.219 [2024-07-24 23:17:49.420562] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.219 [2024-07-24 23:17:49.420596] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.219 [2024-07-24 23:17:49.420627] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.220 [2024-07-24 23:17:49.422338] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:17.220 [2024-07-24 23:17:49.430692] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.220 [2024-07-24 23:17:49.431123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.220 [2024-07-24 23:17:49.431442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.220 [2024-07-24 23:17:49.431484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:17.220 [2024-07-24 23:17:49.431516] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:17.220 [2024-07-24 23:17:49.431864] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:17.220 [2024-07-24 23:17:49.431944] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.220 [2024-07-24 23:17:49.431955] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.220 [2024-07-24 23:17:49.431964] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.220 [2024-07-24 23:17:49.433539] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:17.220 [2024-07-24 23:17:49.442341] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.220 [2024-07-24 23:17:49.442771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.220 [2024-07-24 23:17:49.443012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.220 [2024-07-24 23:17:49.443025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:17.220 [2024-07-24 23:17:49.443069] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:17.220 [2024-07-24 23:17:49.443407] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:17.220 [2024-07-24 23:17:49.443646] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.220 [2024-07-24 23:17:49.443657] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.220 [2024-07-24 23:17:49.443666] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.220 [2024-07-24 23:17:49.445278] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:17.220 [2024-07-24 23:17:49.454049] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.220 [2024-07-24 23:17:49.454506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.220 [2024-07-24 23:17:49.454830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.220 [2024-07-24 23:17:49.454873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:17.220 [2024-07-24 23:17:49.454905] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:17.220 [2024-07-24 23:17:49.455197] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:17.220 [2024-07-24 23:17:49.455576] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.220 [2024-07-24 23:17:49.455587] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.220 [2024-07-24 23:17:49.455596] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.220 [2024-07-24 23:17:49.457223] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:17.220 [2024-07-24 23:17:49.465580] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.220 [2024-07-24 23:17:49.466016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.220 [2024-07-24 23:17:49.466388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.220 [2024-07-24 23:17:49.466428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:17.220 [2024-07-24 23:17:49.466461] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:17.220 [2024-07-24 23:17:49.466608] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:17.220 [2024-07-24 23:17:49.466692] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.220 [2024-07-24 23:17:49.466702] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.220 [2024-07-24 23:17:49.466711] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.220 [2024-07-24 23:17:49.468312] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:17.220 [2024-07-24 23:17:49.477293] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.220 [2024-07-24 23:17:49.477739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.220 [2024-07-24 23:17:49.478116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.220 [2024-07-24 23:17:49.478157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:17.220 [2024-07-24 23:17:49.478189] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:17.220 [2024-07-24 23:17:49.478581] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:17.220 [2024-07-24 23:17:49.478704] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.220 [2024-07-24 23:17:49.478721] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.220 [2024-07-24 23:17:49.478730] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.220 [2024-07-24 23:17:49.480202] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:17.220 [2024-07-24 23:17:49.489023] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.220 [2024-07-24 23:17:49.489504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.220 [2024-07-24 23:17:49.489816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.220 [2024-07-24 23:17:49.489829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:17.220 [2024-07-24 23:17:49.489838] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:17.220 [2024-07-24 23:17:49.489957] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:17.220 [2024-07-24 23:17:49.490063] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.220 [2024-07-24 23:17:49.490074] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.220 [2024-07-24 23:17:49.490083] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.220 [2024-07-24 23:17:49.491755] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:17.220 [2024-07-24 23:17:49.500870] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.220 [2024-07-24 23:17:49.501326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.220 [2024-07-24 23:17:49.501582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.220 [2024-07-24 23:17:49.501622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:17.220 [2024-07-24 23:17:49.501655] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:17.220 [2024-07-24 23:17:49.501844] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:17.220 [2024-07-24 23:17:49.501941] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.220 [2024-07-24 23:17:49.501955] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.220 [2024-07-24 23:17:49.501965] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.220 [2024-07-24 23:17:49.503599] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:17.220 [2024-07-24 23:17:49.512556] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.220 [2024-07-24 23:17:49.512997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.220 [2024-07-24 23:17:49.513373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.220 [2024-07-24 23:17:49.513414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:17.220 [2024-07-24 23:17:49.513446] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:17.220 [2024-07-24 23:17:49.513932] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:17.220 [2024-07-24 23:17:49.514072] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.220 [2024-07-24 23:17:49.514084] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.220 [2024-07-24 23:17:49.514093] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.220 [2024-07-24 23:17:49.515737] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:17.220 [2024-07-24 23:17:49.524342] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.221 [2024-07-24 23:17:49.524655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.221 [2024-07-24 23:17:49.525002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.221 [2024-07-24 23:17:49.525044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:17.221 [2024-07-24 23:17:49.525077] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:17.221 [2024-07-24 23:17:49.525466] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:17.221 [2024-07-24 23:17:49.525700] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.221 [2024-07-24 23:17:49.525711] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.221 [2024-07-24 23:17:49.525725] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.221 [2024-07-24 23:17:49.527287] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:17.221 [2024-07-24 23:17:49.536151] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.221 [2024-07-24 23:17:49.536631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.221 [2024-07-24 23:17:49.536943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.221 [2024-07-24 23:17:49.536958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:17.221 [2024-07-24 23:17:49.536968] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:17.221 [2024-07-24 23:17:49.537053] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:17.221 [2024-07-24 23:17:49.537178] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.221 [2024-07-24 23:17:49.537188] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.221 [2024-07-24 23:17:49.537200] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.221 [2024-07-24 23:17:49.538959] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:17.221 [2024-07-24 23:17:49.547995] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.221 [2024-07-24 23:17:49.548462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.221 [2024-07-24 23:17:49.548840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.221 [2024-07-24 23:17:49.548883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:17.221 [2024-07-24 23:17:49.548915] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:17.221 [2024-07-24 23:17:49.549305] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:17.221 [2024-07-24 23:17:49.549796] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.221 [2024-07-24 23:17:49.549808] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.221 [2024-07-24 23:17:49.549818] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.221 [2024-07-24 23:17:49.551471] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:17.221 [2024-07-24 23:17:49.560029] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.221 [2024-07-24 23:17:49.560495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.221 [2024-07-24 23:17:49.560770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.221 [2024-07-24 23:17:49.560812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:17.221 [2024-07-24 23:17:49.560844] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:17.221 [2024-07-24 23:17:49.561184] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:17.221 [2024-07-24 23:17:49.561525] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.221 [2024-07-24 23:17:49.561560] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.221 [2024-07-24 23:17:49.561592] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.221 [2024-07-24 23:17:49.564181] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:17.221 [2024-07-24 23:17:49.572258] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.221 [2024-07-24 23:17:49.572695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.221 [2024-07-24 23:17:49.572959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.221 [2024-07-24 23:17:49.572972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:17.221 [2024-07-24 23:17:49.572982] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:17.221 [2024-07-24 23:17:49.573065] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:17.221 [2024-07-24 23:17:49.573161] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.221 [2024-07-24 23:17:49.573172] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.221 [2024-07-24 23:17:49.573181] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.221 [2024-07-24 23:17:49.574859] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:17.221 [2024-07-24 23:17:49.584008] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.221 [2024-07-24 23:17:49.584469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.221 [2024-07-24 23:17:49.584735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.221 [2024-07-24 23:17:49.584778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:17.221 [2024-07-24 23:17:49.584810] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:17.221 [2024-07-24 23:17:49.585150] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:17.221 [2024-07-24 23:17:49.585304] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.221 [2024-07-24 23:17:49.585315] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.221 [2024-07-24 23:17:49.585325] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.221 [2024-07-24 23:17:49.586853] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:17.221 [2024-07-24 23:17:49.595742] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.221 [2024-07-24 23:17:49.596170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.221 [2024-07-24 23:17:49.596537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.221 [2024-07-24 23:17:49.596577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:17.221 [2024-07-24 23:17:49.596610] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:17.221 [2024-07-24 23:17:49.596828] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:17.221 [2024-07-24 23:17:49.596954] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.221 [2024-07-24 23:17:49.596965] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.221 [2024-07-24 23:17:49.596974] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.221 [2024-07-24 23:17:49.598514] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:17.221 [2024-07-24 23:17:49.607513] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.221 [2024-07-24 23:17:49.607950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.221 [2024-07-24 23:17:49.608325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.221 [2024-07-24 23:17:49.608366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:17.221 [2024-07-24 23:17:49.608398] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:17.221 [2024-07-24 23:17:49.608854] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:17.221 [2024-07-24 23:17:49.609052] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.221 [2024-07-24 23:17:49.609063] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.221 [2024-07-24 23:17:49.609072] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.221 [2024-07-24 23:17:49.610608] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:17.221 [2024-07-24 23:17:49.619257] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.221 [2024-07-24 23:17:49.619724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.221 [2024-07-24 23:17:49.619966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.221 [2024-07-24 23:17:49.620006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:17.221 [2024-07-24 23:17:49.620039] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:17.221 [2024-07-24 23:17:49.620480] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:17.221 [2024-07-24 23:17:49.620991] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.221 [2024-07-24 23:17:49.621027] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.221 [2024-07-24 23:17:49.621057] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.221 [2024-07-24 23:17:49.622687] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:17.221 [2024-07-24 23:17:49.630967] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.221 [2024-07-24 23:17:49.631418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.221 [2024-07-24 23:17:49.631677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.221 [2024-07-24 23:17:49.631732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:17.222 [2024-07-24 23:17:49.631766] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:17.222 [2024-07-24 23:17:49.631953] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:17.222 [2024-07-24 23:17:49.632072] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.222 [2024-07-24 23:17:49.632087] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.222 [2024-07-24 23:17:49.632100] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.222 [2024-07-24 23:17:49.634476] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:17.222 [2024-07-24 23:17:49.643282] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.222 [2024-07-24 23:17:49.643763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.222 [2024-07-24 23:17:49.644156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.222 [2024-07-24 23:17:49.644203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:17.222 [2024-07-24 23:17:49.644213] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:17.222 [2024-07-24 23:17:49.644298] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:17.222 [2024-07-24 23:17:49.644427] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.222 [2024-07-24 23:17:49.644437] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.222 [2024-07-24 23:17:49.644447] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.222 [2024-07-24 23:17:49.646235] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:17.481 [2024-07-24 23:17:49.655164] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.481 [2024-07-24 23:17:49.655665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.481 [2024-07-24 23:17:49.656056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.481 [2024-07-24 23:17:49.656098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:17.481 [2024-07-24 23:17:49.656130] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:17.481 [2024-07-24 23:17:49.656469] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:17.481 [2024-07-24 23:17:49.656825] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.481 [2024-07-24 23:17:49.656861] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.481 [2024-07-24 23:17:49.656892] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.481 [2024-07-24 23:17:49.658622] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:17.481 [2024-07-24 23:17:49.666996] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.481 [2024-07-24 23:17:49.667430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.481 [2024-07-24 23:17:49.667741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.481 [2024-07-24 23:17:49.667783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:17.481 [2024-07-24 23:17:49.667816] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:17.481 [2024-07-24 23:17:49.667983] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:17.481 [2024-07-24 23:17:49.668075] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.481 [2024-07-24 23:17:49.668086] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.481 [2024-07-24 23:17:49.668095] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.481 [2024-07-24 23:17:49.669667] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:17.481 [2024-07-24 23:17:49.678733] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.481 [2024-07-24 23:17:49.679186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.481 [2024-07-24 23:17:49.679495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.481 [2024-07-24 23:17:49.679535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:17.481 [2024-07-24 23:17:49.679567] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:17.481 [2024-07-24 23:17:49.679923] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:17.481 [2024-07-24 23:17:49.680176] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.481 [2024-07-24 23:17:49.680188] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.481 [2024-07-24 23:17:49.680198] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.481 [2024-07-24 23:17:49.681687] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:17.481 [2024-07-24 23:17:49.690528] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.481 [2024-07-24 23:17:49.690984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.481 [2024-07-24 23:17:49.691283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.481 [2024-07-24 23:17:49.691331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:17.481 [2024-07-24 23:17:49.691364] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:17.481 [2024-07-24 23:17:49.691769] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:17.481 [2024-07-24 23:17:49.692063] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.481 [2024-07-24 23:17:49.692098] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.481 [2024-07-24 23:17:49.692129] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.481 [2024-07-24 23:17:49.694244] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:17.481 [2024-07-24 23:17:49.703159] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.481 [2024-07-24 23:17:49.703555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.481 [2024-07-24 23:17:49.703917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.481 [2024-07-24 23:17:49.703959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:17.481 [2024-07-24 23:17:49.703992] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:17.481 [2024-07-24 23:17:49.704284] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:17.481 [2024-07-24 23:17:49.704491] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.481 [2024-07-24 23:17:49.704502] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.481 [2024-07-24 23:17:49.704511] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.481 [2024-07-24 23:17:49.706359] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:17.481 [2024-07-24 23:17:49.714923] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:17.481 [2024-07-24 23:17:49.715218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:17.481 [2024-07-24 23:17:49.715460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:17.481 [2024-07-24 23:17:49.715502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:17.481 [2024-07-24 23:17:49.715535] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:17.481 [2024-07-24 23:17:49.715893] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:17.481 [2024-07-24 23:17:49.716288] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:17.481 [2024-07-24 23:17:49.716322] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:17.481 [2024-07-24 23:17:49.716355] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:17.481 [2024-07-24 23:17:49.718183] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:17.481 [2024-07-24 23:17:49.726843] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:17.481 [2024-07-24 23:17:49.727266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:17.481 [2024-07-24 23:17:49.727562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:17.481 [2024-07-24 23:17:49.727603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:17.481 [2024-07-24 23:17:49.727644] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:17.481 [2024-07-24 23:17:49.728003] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:17.481 [2024-07-24 23:17:49.728152] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:17.481 [2024-07-24 23:17:49.728163] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:17.481 [2024-07-24 23:17:49.728172] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:17.481 [2024-07-24 23:17:49.729759] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:17.481 [2024-07-24 23:17:49.738364] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:17.481 [2024-07-24 23:17:49.738800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:17.481 [2024-07-24 23:17:49.739109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:17.481 [2024-07-24 23:17:49.739150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:17.481 [2024-07-24 23:17:49.739182] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:17.481 [2024-07-24 23:17:49.739346] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:17.482 [2024-07-24 23:17:49.739451] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:17.482 [2024-07-24 23:17:49.739462] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:17.482 [2024-07-24 23:17:49.739471] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:17.482 [2024-07-24 23:17:49.741123] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:17.482 [2024-07-24 23:17:49.750096] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:17.482 [2024-07-24 23:17:49.750533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:17.482 [2024-07-24 23:17:49.750934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:17.482 [2024-07-24 23:17:49.750977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:17.482 [2024-07-24 23:17:49.751010] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:17.482 [2024-07-24 23:17:49.751177] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:17.482 [2024-07-24 23:17:49.751323] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:17.482 [2024-07-24 23:17:49.751334] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:17.482 [2024-07-24 23:17:49.751343] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:17.482 [2024-07-24 23:17:49.752873] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:17.482 [2024-07-24 23:17:49.761737] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:17.482 [2024-07-24 23:17:49.762181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:17.482 [2024-07-24 23:17:49.762478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:17.482 [2024-07-24 23:17:49.762519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:17.482 [2024-07-24 23:17:49.762553] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:17.482 [2024-07-24 23:17:49.762824] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:17.482 [2024-07-24 23:17:49.762921] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:17.482 [2024-07-24 23:17:49.762932] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:17.482 [2024-07-24 23:17:49.762941] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:17.482 [2024-07-24 23:17:49.764668] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:17.482 [2024-07-24 23:17:49.773579] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:17.482 [2024-07-24 23:17:49.774051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:17.482 [2024-07-24 23:17:49.774305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:17.482 [2024-07-24 23:17:49.774346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:17.482 [2024-07-24 23:17:49.774378] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:17.482 [2024-07-24 23:17:49.774535] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:17.482 [2024-07-24 23:17:49.774628] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:17.482 [2024-07-24 23:17:49.774638] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:17.482 [2024-07-24 23:17:49.774647] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:17.482 [2024-07-24 23:17:49.776381] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:17.482 [2024-07-24 23:17:49.785371] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:17.482 [2024-07-24 23:17:49.785795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:17.482 [2024-07-24 23:17:49.786101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:17.482 [2024-07-24 23:17:49.786141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:17.482 [2024-07-24 23:17:49.786174] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:17.482 [2024-07-24 23:17:49.786303] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:17.482 [2024-07-24 23:17:49.786408] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:17.482 [2024-07-24 23:17:49.786418] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:17.482 [2024-07-24 23:17:49.786426] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:17.482 [2024-07-24 23:17:49.788204] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:17.482 [2024-07-24 23:17:49.797125] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:17.482 [2024-07-24 23:17:49.797523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:17.482 [2024-07-24 23:17:49.797719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:17.482 [2024-07-24 23:17:49.797734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:17.482 [2024-07-24 23:17:49.797745] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:17.482 [2024-07-24 23:17:49.797882] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:17.482 [2024-07-24 23:17:49.797943] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:17.482 [2024-07-24 23:17:49.797954] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:17.482 [2024-07-24 23:17:49.797964] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:17.482 [2024-07-24 23:17:49.799608] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:17.482 [2024-07-24 23:17:49.808932] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:17.482 [2024-07-24 23:17:49.809389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:17.482 [2024-07-24 23:17:49.809572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:17.482 [2024-07-24 23:17:49.809612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:17.482 [2024-07-24 23:17:49.809644] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:17.482 [2024-07-24 23:17:49.810055] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:17.482 [2024-07-24 23:17:49.810555] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:17.482 [2024-07-24 23:17:49.810566] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:17.482 [2024-07-24 23:17:49.810575] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:17.482 [2024-07-24 23:17:49.812206] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:17.482 [2024-07-24 23:17:49.820750] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:17.482 [2024-07-24 23:17:49.821195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:17.482 [2024-07-24 23:17:49.821451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:17.482 [2024-07-24 23:17:49.821492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:17.482 [2024-07-24 23:17:49.821524] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:17.482 [2024-07-24 23:17:49.822031] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:17.482 [2024-07-24 23:17:49.822492] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:17.482 [2024-07-24 23:17:49.822524] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:17.482 [2024-07-24 23:17:49.822534] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:17.482 [2024-07-24 23:17:49.824082] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:17.482 [2024-07-24 23:17:49.832556] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:17.482 [2024-07-24 23:17:49.832929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:17.482 [2024-07-24 23:17:49.833051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:17.482 [2024-07-24 23:17:49.833091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:17.482 [2024-07-24 23:17:49.833124] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:17.482 [2024-07-24 23:17:49.833520] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:17.482 [2024-07-24 23:17:49.833685] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:17.482 [2024-07-24 23:17:49.833698] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:17.482 [2024-07-24 23:17:49.833707] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:17.482 [2024-07-24 23:17:49.835423] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:17.482 [2024-07-24 23:17:49.844171] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:17.482 [2024-07-24 23:17:49.844610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:17.482 [2024-07-24 23:17:49.844770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:17.482 [2024-07-24 23:17:49.844783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:17.482 [2024-07-24 23:17:49.844793] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:17.482 [2024-07-24 23:17:49.844858] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:17.482 [2024-07-24 23:17:49.844936] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:17.482 [2024-07-24 23:17:49.844946] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:17.482 [2024-07-24 23:17:49.844955] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:17.482 [2024-07-24 23:17:49.846576] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:17.483 [2024-07-24 23:17:49.856034] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:17.483 [2024-07-24 23:17:49.856496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:17.483 [2024-07-24 23:17:49.856770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:17.483 [2024-07-24 23:17:49.856813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:17.483 [2024-07-24 23:17:49.856845] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:17.483 [2024-07-24 23:17:49.857135] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:17.483 [2024-07-24 23:17:49.857477] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:17.483 [2024-07-24 23:17:49.857511] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:17.483 [2024-07-24 23:17:49.857543] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:17.483 [2024-07-24 23:17:49.859261] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:17.483 [2024-07-24 23:17:49.867792] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:17.483 [2024-07-24 23:17:49.868288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:17.483 [2024-07-24 23:17:49.868668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:17.483 [2024-07-24 23:17:49.868708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:17.483 [2024-07-24 23:17:49.868756] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:17.483 [2024-07-24 23:17:49.868945] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:17.483 [2024-07-24 23:17:49.869057] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:17.483 [2024-07-24 23:17:49.869068] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:17.483 [2024-07-24 23:17:49.869080] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:17.483 [2024-07-24 23:17:49.870752] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:17.483 [2024-07-24 23:17:49.879501] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:17.483 [2024-07-24 23:17:49.879925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:17.483 [2024-07-24 23:17:49.880248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:17.483 [2024-07-24 23:17:49.880289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:17.483 [2024-07-24 23:17:49.880322] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:17.483 [2024-07-24 23:17:49.880662] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:17.483 [2024-07-24 23:17:49.880910] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:17.483 [2024-07-24 23:17:49.880922] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:17.483 [2024-07-24 23:17:49.880931] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:17.483 [2024-07-24 23:17:49.882633] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:17.483 [2024-07-24 23:17:49.891194] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:17.483 [2024-07-24 23:17:49.891568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:17.483 [2024-07-24 23:17:49.891890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:17.483 [2024-07-24 23:17:49.891932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:17.483 [2024-07-24 23:17:49.891965] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:17.483 [2024-07-24 23:17:49.892255] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:17.483 [2024-07-24 23:17:49.892647] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:17.483 [2024-07-24 23:17:49.892682] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:17.483 [2024-07-24 23:17:49.892713] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:17.483 [2024-07-24 23:17:49.894488] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:17.483 [2024-07-24 23:17:49.903070] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:17.483 [2024-07-24 23:17:49.903519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:17.483 [2024-07-24 23:17:49.903874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:17.483 [2024-07-24 23:17:49.903916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:17.483 [2024-07-24 23:17:49.903948] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:17.483 [2024-07-24 23:17:49.904109] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:17.483 [2024-07-24 23:17:49.904201] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:17.483 [2024-07-24 23:17:49.904213] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:17.483 [2024-07-24 23:17:49.904221] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:17.483 [2024-07-24 23:17:49.905814] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:17.745 [2024-07-24 23:17:49.914763] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:17.745 [2024-07-24 23:17:49.915237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:17.745 [2024-07-24 23:17:49.915593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:17.745 [2024-07-24 23:17:49.915638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:17.745 [2024-07-24 23:17:49.915648] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:17.745 [2024-07-24 23:17:49.915798] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:17.745 [2024-07-24 23:17:49.915927] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:17.745 [2024-07-24 23:17:49.915938] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:17.745 [2024-07-24 23:17:49.915948] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:17.745 [2024-07-24 23:17:49.917638] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:17.745 [2024-07-24 23:17:49.926625] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:17.745 [2024-07-24 23:17:49.927068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:17.745 [2024-07-24 23:17:49.927385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:17.745 [2024-07-24 23:17:49.927398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:17.745 [2024-07-24 23:17:49.927408] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:17.745 [2024-07-24 23:17:49.927522] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:17.745 [2024-07-24 23:17:49.927650] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:17.745 [2024-07-24 23:17:49.927662] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:17.745 [2024-07-24 23:17:49.927672] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:17.746 [2024-07-24 23:17:49.929533] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:17.746 [2024-07-24 23:17:49.938381] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:17.746 [2024-07-24 23:17:49.938858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:17.746 [2024-07-24 23:17:49.939239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:17.746 [2024-07-24 23:17:49.939279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:17.746 [2024-07-24 23:17:49.939311] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:17.746 [2024-07-24 23:17:49.939551] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:17.746 [2024-07-24 23:17:49.939946] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:17.746 [2024-07-24 23:17:49.939958] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:17.746 [2024-07-24 23:17:49.939967] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:17.746 [2024-07-24 23:17:49.941733] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:17.746 [2024-07-24 23:17:49.950116] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:17.746 [2024-07-24 23:17:49.950594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:17.746 [2024-07-24 23:17:49.950951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:17.746 [2024-07-24 23:17:49.950994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:17.746 [2024-07-24 23:17:49.951026] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:17.746 [2024-07-24 23:17:49.951369] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:17.746 [2024-07-24 23:17:49.951712] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:17.746 [2024-07-24 23:17:49.951756] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:17.746 [2024-07-24 23:17:49.951787] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:17.746 [2024-07-24 23:17:49.953697] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:17.746 [2024-07-24 23:17:49.961897] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:17.746 [2024-07-24 23:17:49.962358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:17.746 [2024-07-24 23:17:49.962740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:17.746 [2024-07-24 23:17:49.962782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:17.746 [2024-07-24 23:17:49.962816] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:17.746 [2024-07-24 23:17:49.963003] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:17.746 [2024-07-24 23:17:49.963136] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:17.746 [2024-07-24 23:17:49.963147] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:17.746 [2024-07-24 23:17:49.963156] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:17.746 [2024-07-24 23:17:49.964626] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:17.746 [2024-07-24 23:17:49.973559] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:17.746 [2024-07-24 23:17:49.974014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:17.746 [2024-07-24 23:17:49.974390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:17.746 [2024-07-24 23:17:49.974430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:17.746 [2024-07-24 23:17:49.974463] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:17.746 [2024-07-24 23:17:49.974888] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:17.746 [2024-07-24 23:17:49.975096] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:17.746 [2024-07-24 23:17:49.975107] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:17.746 [2024-07-24 23:17:49.975117] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:17.746 [2024-07-24 23:17:49.976838] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:17.746 [2024-07-24 23:17:49.985362] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.746 [2024-07-24 23:17:49.985767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.746 [2024-07-24 23:17:49.986071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.746 [2024-07-24 23:17:49.986112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:17.746 [2024-07-24 23:17:49.986144] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:17.746 [2024-07-24 23:17:49.986484] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:17.746 [2024-07-24 23:17:49.986842] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.746 [2024-07-24 23:17:49.986879] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.746 [2024-07-24 23:17:49.986910] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.746 [2024-07-24 23:17:49.988706] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:17.746 [2024-07-24 23:17:49.997033] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.746 [2024-07-24 23:17:49.997494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.746 [2024-07-24 23:17:49.997868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.746 [2024-07-24 23:17:49.997910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:17.746 [2024-07-24 23:17:49.997942] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:17.746 [2024-07-24 23:17:49.998078] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:17.746 [2024-07-24 23:17:49.998157] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.746 [2024-07-24 23:17:49.998168] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.746 [2024-07-24 23:17:49.998177] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.746 [2024-07-24 23:17:49.999771] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:17.746 [2024-07-24 23:17:50.008918] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.746 [2024-07-24 23:17:50.009361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.746 [2024-07-24 23:17:50.009677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.746 [2024-07-24 23:17:50.009690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:17.746 [2024-07-24 23:17:50.009700] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:17.746 [2024-07-24 23:17:50.009806] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:17.746 [2024-07-24 23:17:50.009919] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.746 [2024-07-24 23:17:50.009930] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.746 [2024-07-24 23:17:50.009939] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.746 [2024-07-24 23:17:50.011695] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:17.746 [2024-07-24 23:17:50.020812] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.746 [2024-07-24 23:17:50.021265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.746 [2024-07-24 23:17:50.021604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.746 [2024-07-24 23:17:50.021618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:17.746 [2024-07-24 23:17:50.021633] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:17.746 [2024-07-24 23:17:50.021770] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:17.746 [2024-07-24 23:17:50.021885] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.746 [2024-07-24 23:17:50.021896] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.746 [2024-07-24 23:17:50.021906] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.746 [2024-07-24 23:17:50.023579] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:17.746 [2024-07-24 23:17:50.032777] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.746 [2024-07-24 23:17:50.033201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.746 [2024-07-24 23:17:50.033463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.746 [2024-07-24 23:17:50.033476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:17.746 [2024-07-24 23:17:50.033486] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:17.746 [2024-07-24 23:17:50.033611] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:17.746 [2024-07-24 23:17:50.033728] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.746 [2024-07-24 23:17:50.033739] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.746 [2024-07-24 23:17:50.033765] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.746 [2024-07-24 23:17:50.035458] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:17.747 [2024-07-24 23:17:50.044619] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.747 [2024-07-24 23:17:50.044999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.747 [2024-07-24 23:17:50.045315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.747 [2024-07-24 23:17:50.045329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:17.747 [2024-07-24 23:17:50.045339] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:17.747 [2024-07-24 23:17:50.045453] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:17.747 [2024-07-24 23:17:50.045539] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.747 [2024-07-24 23:17:50.045549] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.747 [2024-07-24 23:17:50.045558] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.747 [2024-07-24 23:17:50.047319] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:17.747 [2024-07-24 23:17:50.056568] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.747 [2024-07-24 23:17:50.056946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.747 [2024-07-24 23:17:50.057265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.747 [2024-07-24 23:17:50.057279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:17.747 [2024-07-24 23:17:50.057289] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:17.747 [2024-07-24 23:17:50.057408] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:17.747 [2024-07-24 23:17:50.057550] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.747 [2024-07-24 23:17:50.057561] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.747 [2024-07-24 23:17:50.057571] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.747 [2024-07-24 23:17:50.059290] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:17.747 [2024-07-24 23:17:50.068504] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.747 [2024-07-24 23:17:50.068908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.747 [2024-07-24 23:17:50.069201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.747 [2024-07-24 23:17:50.069215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:17.747 [2024-07-24 23:17:50.069225] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:17.747 [2024-07-24 23:17:50.069364] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:17.747 [2024-07-24 23:17:50.069489] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.747 [2024-07-24 23:17:50.069500] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.747 [2024-07-24 23:17:50.069510] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.747 [2024-07-24 23:17:50.071156] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:17.747 [2024-07-24 23:17:50.080504] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.747 [2024-07-24 23:17:50.080933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.747 [2024-07-24 23:17:50.081200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.747 [2024-07-24 23:17:50.081224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:17.747 [2024-07-24 23:17:50.081239] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:17.747 [2024-07-24 23:17:50.081490] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:17.747 [2024-07-24 23:17:50.081774] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.747 [2024-07-24 23:17:50.081847] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.747 [2024-07-24 23:17:50.081881] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.747 [2024-07-24 23:17:50.084707] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:17.747 [2024-07-24 23:17:50.092403] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.747 [2024-07-24 23:17:50.092741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.747 [2024-07-24 23:17:50.093061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.747 [2024-07-24 23:17:50.093074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:17.747 [2024-07-24 23:17:50.093085] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:17.747 [2024-07-24 23:17:50.093211] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:17.747 [2024-07-24 23:17:50.093370] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.747 [2024-07-24 23:17:50.093381] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.747 [2024-07-24 23:17:50.093390] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.747 [2024-07-24 23:17:50.095073] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:17.747 [2024-07-24 23:17:50.104278] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.747 [2024-07-24 23:17:50.104596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.747 [2024-07-24 23:17:50.104843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.747 [2024-07-24 23:17:50.104858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:17.747 [2024-07-24 23:17:50.104868] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:17.747 [2024-07-24 23:17:50.104968] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:17.747 [2024-07-24 23:17:50.105051] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.747 [2024-07-24 23:17:50.105061] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.747 [2024-07-24 23:17:50.105070] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.747 [2024-07-24 23:17:50.106637] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:17.747 [2024-07-24 23:17:50.116079] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.747 [2024-07-24 23:17:50.116553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.747 [2024-07-24 23:17:50.116856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.747 [2024-07-24 23:17:50.116900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:17.747 [2024-07-24 23:17:50.116932] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:17.747 [2024-07-24 23:17:50.117109] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:17.747 [2024-07-24 23:17:50.117189] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.747 [2024-07-24 23:17:50.117200] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.747 [2024-07-24 23:17:50.117209] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.747 [2024-07-24 23:17:50.119225] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:17.747 [2024-07-24 23:17:50.128315] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.747 [2024-07-24 23:17:50.128741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.747 [2024-07-24 23:17:50.129042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.747 [2024-07-24 23:17:50.129082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:17.747 [2024-07-24 23:17:50.129116] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:17.747 [2024-07-24 23:17:50.129555] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:17.747 [2024-07-24 23:17:50.129915] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.747 [2024-07-24 23:17:50.129959] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.747 [2024-07-24 23:17:50.129992] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.747 [2024-07-24 23:17:50.131731] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:17.747 [2024-07-24 23:17:50.140107] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.747 [2024-07-24 23:17:50.140532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.747 [2024-07-24 23:17:50.140783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.747 [2024-07-24 23:17:50.140826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:17.747 [2024-07-24 23:17:50.140860] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:17.747 [2024-07-24 23:17:50.141159] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:17.747 [2024-07-24 23:17:50.141239] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.747 [2024-07-24 23:17:50.141250] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.747 [2024-07-24 23:17:50.141259] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.747 [2024-07-24 23:17:50.142774] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:17.747 [2024-07-24 23:17:50.151940] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.747 [2024-07-24 23:17:50.152390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.748 [2024-07-24 23:17:50.152772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.748 [2024-07-24 23:17:50.152814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:17.748 [2024-07-24 23:17:50.152847] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:17.748 [2024-07-24 23:17:50.153236] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:17.748 [2024-07-24 23:17:50.153535] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.748 [2024-07-24 23:17:50.153547] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.748 [2024-07-24 23:17:50.153556] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.748 [2024-07-24 23:17:50.155059] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:17.748 [2024-07-24 23:17:50.163753] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.748 [2024-07-24 23:17:50.164196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.748 [2024-07-24 23:17:50.164503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.748 [2024-07-24 23:17:50.164543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:17.748 [2024-07-24 23:17:50.164576] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:17.748 [2024-07-24 23:17:50.164978] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:17.748 [2024-07-24 23:17:50.165214] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.748 [2024-07-24 23:17:50.165225] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.748 [2024-07-24 23:17:50.165238] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.748 [2024-07-24 23:17:50.166883] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:18.008 [2024-07-24 23:17:50.175540] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:18.008 [2024-07-24 23:17:50.175927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.008 [2024-07-24 23:17:50.176239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.008 [2024-07-24 23:17:50.176281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:18.008 [2024-07-24 23:17:50.176313] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:18.008 [2024-07-24 23:17:50.176535] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:18.008 [2024-07-24 23:17:50.176655] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:18.008 [2024-07-24 23:17:50.176666] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:18.008 [2024-07-24 23:17:50.176675] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:18.008 [2024-07-24 23:17:50.178286] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:18.008 [2024-07-24 23:17:50.187286] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:18.008 [2024-07-24 23:17:50.187792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.008 [2024-07-24 23:17:50.188140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.008 [2024-07-24 23:17:50.188181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:18.008 [2024-07-24 23:17:50.188208] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:18.008 [2024-07-24 23:17:50.188286] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:18.008 [2024-07-24 23:17:50.188366] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:18.008 [2024-07-24 23:17:50.188375] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:18.008 [2024-07-24 23:17:50.188384] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:18.008 [2024-07-24 23:17:50.190657] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:18.008 [2024-07-24 23:17:50.199489] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:18.008 [2024-07-24 23:17:50.199919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.008 [2024-07-24 23:17:50.200302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.008 [2024-07-24 23:17:50.200342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:18.008 [2024-07-24 23:17:50.200376] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:18.008 [2024-07-24 23:17:50.200555] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:18.008 [2024-07-24 23:17:50.200648] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:18.008 [2024-07-24 23:17:50.200659] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:18.008 [2024-07-24 23:17:50.200668] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:18.008 [2024-07-24 23:17:50.202212] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:18.008 [2024-07-24 23:17:50.211145] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:18.008 [2024-07-24 23:17:50.211590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.008 [2024-07-24 23:17:50.211859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.008 [2024-07-24 23:17:50.211903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:18.008 [2024-07-24 23:17:50.211936] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:18.008 [2024-07-24 23:17:50.212389] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:18.008 [2024-07-24 23:17:50.212496] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:18.008 [2024-07-24 23:17:50.212506] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:18.008 [2024-07-24 23:17:50.212515] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:18.008 [2024-07-24 23:17:50.214103] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:18.008 [2024-07-24 23:17:50.222962] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:18.008 [2024-07-24 23:17:50.223334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.008 [2024-07-24 23:17:50.223594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.008 [2024-07-24 23:17:50.223634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:18.008 [2024-07-24 23:17:50.223666] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:18.008 [2024-07-24 23:17:50.224071] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:18.008 [2024-07-24 23:17:50.224464] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:18.009 [2024-07-24 23:17:50.224499] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:18.009 [2024-07-24 23:17:50.224531] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:18.009 [2024-07-24 23:17:50.226232] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:18.009 [2024-07-24 23:17:50.234675] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:18.009 [2024-07-24 23:17:50.235124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.009 [2024-07-24 23:17:50.235451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.009 [2024-07-24 23:17:50.235491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:18.009 [2024-07-24 23:17:50.235523] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:18.009 [2024-07-24 23:17:50.235847] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:18.009 [2024-07-24 23:17:50.235959] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:18.009 [2024-07-24 23:17:50.235971] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:18.009 [2024-07-24 23:17:50.235981] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:18.009 [2024-07-24 23:17:50.237673] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:18.009 [2024-07-24 23:17:50.246446] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:18.009 [2024-07-24 23:17:50.246815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.009 [2024-07-24 23:17:50.247024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.009 [2024-07-24 23:17:50.247038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:18.009 [2024-07-24 23:17:50.247048] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:18.009 [2024-07-24 23:17:50.247177] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:18.009 [2024-07-24 23:17:50.247276] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:18.009 [2024-07-24 23:17:50.247286] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:18.009 [2024-07-24 23:17:50.247296] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:18.009 [2024-07-24 23:17:50.248985] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:18.009 [2024-07-24 23:17:50.258318] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:18.009 [2024-07-24 23:17:50.258792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.009 [2024-07-24 23:17:50.258992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.009 [2024-07-24 23:17:50.259005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:18.009 [2024-07-24 23:17:50.259015] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:18.009 [2024-07-24 23:17:50.259158] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:18.009 [2024-07-24 23:17:50.259258] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:18.009 [2024-07-24 23:17:50.259270] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:18.009 [2024-07-24 23:17:50.259280] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:18.009 [2024-07-24 23:17:50.260967] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:18.009 [2024-07-24 23:17:50.270297] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:18.009 [2024-07-24 23:17:50.270725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.009 [2024-07-24 23:17:50.270916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.009 [2024-07-24 23:17:50.270929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:18.009 [2024-07-24 23:17:50.270939] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:18.009 [2024-07-24 23:17:50.271095] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:18.009 [2024-07-24 23:17:50.271252] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:18.009 [2024-07-24 23:17:50.271263] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:18.009 [2024-07-24 23:17:50.271273] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:18.009 [2024-07-24 23:17:50.272988] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:18.009 [2024-07-24 23:17:50.282279] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:18.009 [2024-07-24 23:17:50.282745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.009 [2024-07-24 23:17:50.282987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.009 [2024-07-24 23:17:50.283000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:18.009 [2024-07-24 23:17:50.283011] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:18.009 [2024-07-24 23:17:50.283110] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:18.009 [2024-07-24 23:17:50.283223] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:18.009 [2024-07-24 23:17:50.283234] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:18.009 [2024-07-24 23:17:50.283243] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:18.009 [2024-07-24 23:17:50.284975] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:18.009 [2024-07-24 23:17:50.294197] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:18.009 [2024-07-24 23:17:50.294674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.009 [2024-07-24 23:17:50.294932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.009 [2024-07-24 23:17:50.294947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:18.009 [2024-07-24 23:17:50.294957] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:18.009 [2024-07-24 23:17:50.295084] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:18.009 [2024-07-24 23:17:50.295169] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:18.009 [2024-07-24 23:17:50.295180] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:18.009 [2024-07-24 23:17:50.295190] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:18.009 [2024-07-24 23:17:50.296870] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:18.009 [2024-07-24 23:17:50.306170] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:18.009 [2024-07-24 23:17:50.306557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.009 [2024-07-24 23:17:50.306809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.009 [2024-07-24 23:17:50.306857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:18.009 [2024-07-24 23:17:50.306889] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:18.009 [2024-07-24 23:17:50.307280] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:18.009 [2024-07-24 23:17:50.307610] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:18.009 [2024-07-24 23:17:50.307623] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:18.009 [2024-07-24 23:17:50.307632] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:18.009 [2024-07-24 23:17:50.309293] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:18.009 [2024-07-24 23:17:50.317929] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:18.009 [2024-07-24 23:17:50.318330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.009 [2024-07-24 23:17:50.318661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.009 [2024-07-24 23:17:50.318709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:18.009 [2024-07-24 23:17:50.318757] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:18.009 [2024-07-24 23:17:50.319098] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:18.009 [2024-07-24 23:17:50.319543] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:18.009 [2024-07-24 23:17:50.319579] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:18.009 [2024-07-24 23:17:50.319610] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:18.009 [2024-07-24 23:17:50.321750] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:18.009 [2024-07-24 23:17:50.330205] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:18.009 [2024-07-24 23:17:50.330657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.009 [2024-07-24 23:17:50.330975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.009 [2024-07-24 23:17:50.330990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:18.009 [2024-07-24 23:17:50.331000] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:18.009 [2024-07-24 23:17:50.331115] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:18.009 [2024-07-24 23:17:50.331229] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:18.009 [2024-07-24 23:17:50.331240] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:18.009 [2024-07-24 23:17:50.331249] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:18.010 [2024-07-24 23:17:50.332995] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:18.010 [2024-07-24 23:17:50.342026] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:18.010 [2024-07-24 23:17:50.342490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.010 [2024-07-24 23:17:50.342783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.010 [2024-07-24 23:17:50.342826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:18.010 [2024-07-24 23:17:50.342859] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:18.010 [2024-07-24 23:17:50.343248] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:18.010 [2024-07-24 23:17:50.343617] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:18.010 [2024-07-24 23:17:50.343628] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:18.010 [2024-07-24 23:17:50.343638] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:18.010 [2024-07-24 23:17:50.345239] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:18.010 [2024-07-24 23:17:50.353952] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:18.010 [2024-07-24 23:17:50.354289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.010 [2024-07-24 23:17:50.354535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.010 [2024-07-24 23:17:50.354548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:18.010 [2024-07-24 23:17:50.354560] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:18.010 [2024-07-24 23:17:50.354625] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:18.010 [2024-07-24 23:17:50.354748] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:18.010 [2024-07-24 23:17:50.354759] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:18.010 [2024-07-24 23:17:50.354768] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:18.010 [2024-07-24 23:17:50.356583] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:18.010 [2024-07-24 23:17:50.365581] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:18.010 [2024-07-24 23:17:50.366034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.010 [2024-07-24 23:17:50.366391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.010 [2024-07-24 23:17:50.366432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:18.010 [2024-07-24 23:17:50.366464] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:18.010 [2024-07-24 23:17:50.366694] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:18.010 [2024-07-24 23:17:50.366840] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:18.010 [2024-07-24 23:17:50.366853] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:18.010 [2024-07-24 23:17:50.366861] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:18.010 [2024-07-24 23:17:50.368478] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:18.010 [2024-07-24 23:17:50.377473] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:18.010 [2024-07-24 23:17:50.377928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.010 [2024-07-24 23:17:50.378174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.010 [2024-07-24 23:17:50.378206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:18.010 [2024-07-24 23:17:50.378239] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:18.010 [2024-07-24 23:17:50.378738] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:18.010 [2024-07-24 23:17:50.378838] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:18.010 [2024-07-24 23:17:50.378850] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:18.010 [2024-07-24 23:17:50.378860] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:18.010 [2024-07-24 23:17:50.380791] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:18.010 [2024-07-24 23:17:50.389096] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:18.010 [2024-07-24 23:17:50.389544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.010 [2024-07-24 23:17:50.389803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.010 [2024-07-24 23:17:50.389846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:18.010 [2024-07-24 23:17:50.389878] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:18.010 [2024-07-24 23:17:50.390172] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:18.010 [2024-07-24 23:17:50.390253] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:18.010 [2024-07-24 23:17:50.390264] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:18.010 [2024-07-24 23:17:50.390272] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:18.010 [2024-07-24 23:17:50.392445] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:18.010 [2024-07-24 23:17:50.401404] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:18.010 [2024-07-24 23:17:50.401915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.010 [2024-07-24 23:17:50.402115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.010 [2024-07-24 23:17:50.402127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:18.010 [2024-07-24 23:17:50.402137] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:18.010 [2024-07-24 23:17:50.402228] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:18.010 [2024-07-24 23:17:50.402334] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:18.010 [2024-07-24 23:17:50.402343] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:18.010 [2024-07-24 23:17:50.402352] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:18.010 [2024-07-24 23:17:50.403766] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:18.010 [2024-07-24 23:17:50.413135] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:18.010 [2024-07-24 23:17:50.413532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.010 [2024-07-24 23:17:50.413855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.010 [2024-07-24 23:17:50.413898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:18.010 [2024-07-24 23:17:50.413933] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:18.010 [2024-07-24 23:17:50.414243] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:18.010 [2024-07-24 23:17:50.414355] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:18.010 [2024-07-24 23:17:50.414367] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:18.010 [2024-07-24 23:17:50.414375] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:18.010 [2024-07-24 23:17:50.416009] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:18.010 [2024-07-24 23:17:50.424967] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:18.010 [2024-07-24 23:17:50.425310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.010 [2024-07-24 23:17:50.425566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.010 [2024-07-24 23:17:50.425608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:18.010 [2024-07-24 23:17:50.425640] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:18.010 [2024-07-24 23:17:50.425994] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:18.010 [2024-07-24 23:17:50.426104] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:18.010 [2024-07-24 23:17:50.426114] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:18.010 [2024-07-24 23:17:50.426124] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:18.010 [2024-07-24 23:17:50.427632] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:18.271 [2024-07-24 23:17:50.436947] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:18.271 [2024-07-24 23:17:50.437374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.271 [2024-07-24 23:17:50.437748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.271 [2024-07-24 23:17:50.437788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:18.271 [2024-07-24 23:17:50.437798] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:18.271 [2024-07-24 23:17:50.437883] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:18.271 [2024-07-24 23:17:50.438034] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:18.271 [2024-07-24 23:17:50.438046] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:18.271 [2024-07-24 23:17:50.438054] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:18.271 [2024-07-24 23:17:50.439667] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:18.271 [2024-07-24 23:17:50.448774] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:18.271 [2024-07-24 23:17:50.449141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.271 [2024-07-24 23:17:50.449398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.271 [2024-07-24 23:17:50.449438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:18.271 [2024-07-24 23:17:50.449470] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:18.271 [2024-07-24 23:17:50.449933] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:18.271 [2024-07-24 23:17:50.450014] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:18.271 [2024-07-24 23:17:50.450024] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:18.271 [2024-07-24 23:17:50.450034] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:18.271 [2024-07-24 23:17:50.451495] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:18.271 [2024-07-24 23:17:50.460481] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:18.271 [2024-07-24 23:17:50.460968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.271 [2024-07-24 23:17:50.461269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.271 [2024-07-24 23:17:50.461309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:18.271 [2024-07-24 23:17:50.461341] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:18.271 [2024-07-24 23:17:50.461681] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:18.271 [2024-07-24 23:17:50.462033] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:18.271 [2024-07-24 23:17:50.462076] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:18.271 [2024-07-24 23:17:50.462109] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:18.271 [2024-07-24 23:17:50.463740] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:18.271 [2024-07-24 23:17:50.472359] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:18.271 [2024-07-24 23:17:50.472842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.271 [2024-07-24 23:17:50.473142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.271 [2024-07-24 23:17:50.473155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:18.271 [2024-07-24 23:17:50.473164] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:18.271 [2024-07-24 23:17:50.473282] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:18.271 [2024-07-24 23:17:50.473401] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:18.271 [2024-07-24 23:17:50.473411] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:18.271 [2024-07-24 23:17:50.473419] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:18.271 [2024-07-24 23:17:50.474966] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:18.271 [2024-07-24 23:17:50.484264] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:18.271 [2024-07-24 23:17:50.484733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.271 [2024-07-24 23:17:50.484989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.271 [2024-07-24 23:17:50.485029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:18.271 [2024-07-24 23:17:50.485061] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:18.271 [2024-07-24 23:17:50.485266] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:18.271 [2024-07-24 23:17:50.485359] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:18.271 [2024-07-24 23:17:50.485370] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:18.271 [2024-07-24 23:17:50.485379] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:18.271 [2024-07-24 23:17:50.487180] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:18.271 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3405915 Killed "${NVMF_APP[@]}" "$@"
00:32:18.271 23:17:50 -- host/bdevperf.sh@36 -- # tgt_init
00:32:18.271 23:17:50 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:32:18.271 23:17:50 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:32:18.271 23:17:50 -- common/autotest_common.sh@712 -- # xtrace_disable
00:32:18.271 [2024-07-24 23:17:50.496213] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:18.271 23:17:50 -- common/autotest_common.sh@10 -- # set +x
00:32:18.271 [2024-07-24 23:17:50.496653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.271 [2024-07-24 23:17:50.496933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.271 [2024-07-24 23:17:50.496947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:18.271 [2024-07-24 23:17:50.496958] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:18.271 [2024-07-24 23:17:50.497060] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:18.271 [2024-07-24 23:17:50.497160] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:18.271 [2024-07-24 23:17:50.497171] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:18.271 [2024-07-24 23:17:50.497180] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:18.271 [2024-07-24 23:17:50.498802] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:18.271 23:17:50 -- nvmf/common.sh@469 -- # nvmfpid=3407390
23:17:50 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:32:18.272 23:17:50 -- nvmf/common.sh@470 -- # waitforlisten 3407390
00:32:18.272 23:17:50 -- common/autotest_common.sh@819 -- # '[' -z 3407390 ']'
00:32:18.272 23:17:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:32:18.272 23:17:50 -- common/autotest_common.sh@824 -- # local max_retries=100
00:32:18.272 23:17:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:32:18.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:32:18.272 23:17:50 -- common/autotest_common.sh@828 -- # xtrace_disable
00:32:18.272 23:17:50 -- common/autotest_common.sh@10 -- # set +x
00:32:18.272 [2024-07-24 23:17:50.508026] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:18.272 [2024-07-24 23:17:50.508368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.272 [2024-07-24 23:17:50.508648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.272 [2024-07-24 23:17:50.508662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:18.272 [2024-07-24 23:17:50.508672] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:18.272 [2024-07-24 23:17:50.508808] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:18.272 [2024-07-24 23:17:50.508967] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:18.272 [2024-07-24 23:17:50.508979] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:18.272 [2024-07-24 23:17:50.508989] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:18.272 [2024-07-24 23:17:50.510762] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:18.272 [2024-07-24 23:17:50.519939] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:18.272 [2024-07-24 23:17:50.520292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.272 [2024-07-24 23:17:50.520622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.272 [2024-07-24 23:17:50.520645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:18.272 [2024-07-24 23:17:50.520655] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:18.272 [2024-07-24 23:17:50.520791] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:18.272 [2024-07-24 23:17:50.520862] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:18.272 [2024-07-24 23:17:50.520873] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:18.272 [2024-07-24 23:17:50.520882] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:18.272 [2024-07-24 23:17:50.522522] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:18.272 [2024-07-24 23:17:50.531848] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:18.272 [2024-07-24 23:17:50.532310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.272 [2024-07-24 23:17:50.532571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.272 [2024-07-24 23:17:50.532584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:18.272 [2024-07-24 23:17:50.532594] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:18.272 [2024-07-24 23:17:50.532726] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:18.272 [2024-07-24 23:17:50.532870] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:18.272 [2024-07-24 23:17:50.532880] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:18.272 [2024-07-24 23:17:50.532890] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:18.272 [2024-07-24 23:17:50.534503] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:18.272 [2024-07-24 23:17:50.543847] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:18.272 [2024-07-24 23:17:50.544270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.272 [2024-07-24 23:17:50.544538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.272 [2024-07-24 23:17:50.544551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:18.272 [2024-07-24 23:17:50.544560] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:18.272 [2024-07-24 23:17:50.544698] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:18.272 [2024-07-24 23:17:50.544816] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:18.272 [2024-07-24 23:17:50.544827] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:18.272 [2024-07-24 23:17:50.544837] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:18.272 [2024-07-24 23:17:50.546338] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:18.272 [2024-07-24 23:17:50.552038] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:32:18.272 [2024-07-24 23:17:50.552084] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:32:18.272 [2024-07-24 23:17:50.555820] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:18.272 [2024-07-24 23:17:50.556205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.272 [2024-07-24 23:17:50.556450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.272 [2024-07-24 23:17:50.556464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:18.272 [2024-07-24 23:17:50.556474] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:18.272 [2024-07-24 23:17:50.556558] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:18.272 [2024-07-24 23:17:50.556629] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:18.272 [2024-07-24 23:17:50.556640] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:18.272 [2024-07-24 23:17:50.556654] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:18.272 [2024-07-24 23:17:50.558431] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:18.272 [2024-07-24 23:17:50.567745] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:18.272 [2024-07-24 23:17:50.568146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.272 [2024-07-24 23:17:50.568408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.272 [2024-07-24 23:17:50.568422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:18.272 [2024-07-24 23:17:50.568432] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:18.272 [2024-07-24 23:17:50.568560] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:18.272 [2024-07-24 23:17:50.568674] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:18.272 [2024-07-24 23:17:50.568686] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:18.272 [2024-07-24 23:17:50.568696] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:18.272 [2024-07-24 23:17:50.570400] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:18.272 [2024-07-24 23:17:50.579607] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:18.272 [2024-07-24 23:17:50.579932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.272 [2024-07-24 23:17:50.580270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.272 [2024-07-24 23:17:50.580284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:18.272 [2024-07-24 23:17:50.580294] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:18.272 [2024-07-24 23:17:50.580418] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:18.272 [2024-07-24 23:17:50.580515] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:18.272 [2024-07-24 23:17:50.580525] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:18.272 [2024-07-24 23:17:50.580535] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:18.272 [2024-07-24 23:17:50.582233] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:18.272 [2024-07-24 23:17:50.591492] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:18.272 [2024-07-24 23:17:50.591950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.272 EAL: No free 2048 kB hugepages reported on node 1
00:32:18.272 [2024-07-24 23:17:50.592248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.272 [2024-07-24 23:17:50.592261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:18.272 [2024-07-24 23:17:50.592271] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:18.272 [2024-07-24 23:17:50.592410] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:18.272 [2024-07-24 23:17:50.592535] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:18.272 [2024-07-24 23:17:50.592546] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:18.272 [2024-07-24 23:17:50.592555] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:18.272 [2024-07-24 23:17:50.594302] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:18.272 [2024-07-24 23:17:50.603437] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:18.272 [2024-07-24 23:17:50.603841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.272 [2024-07-24 23:17:50.604112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.273 [2024-07-24 23:17:50.604125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:18.273 [2024-07-24 23:17:50.604135] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:18.273 [2024-07-24 23:17:50.604264] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:18.273 [2024-07-24 23:17:50.604378] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:18.273 [2024-07-24 23:17:50.604390] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:18.273 [2024-07-24 23:17:50.604400] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:18.273 [2024-07-24 23:17:50.606147] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:18.273 [2024-07-24 23:17:50.615416] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:18.273 [2024-07-24 23:17:50.615924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.273 [2024-07-24 23:17:50.616194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.273 [2024-07-24 23:17:50.616207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:18.273 [2024-07-24 23:17:50.616217] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:18.273 [2024-07-24 23:17:50.616356] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:18.273 [2024-07-24 23:17:50.616453] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:18.273 [2024-07-24 23:17:50.616465] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:18.273 [2024-07-24 23:17:50.616474] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:18.273 [2024-07-24 23:17:50.618059] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:18.273 [2024-07-24 23:17:50.627244] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:18.273 [2024-07-24 23:17:50.627622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.273 [2024-07-24 23:17:50.627865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.273 [2024-07-24 23:17:50.627880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:18.273 [2024-07-24 23:17:50.627891] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:18.273 [2024-07-24 23:17:50.628031] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:18.273 [2024-07-24 23:17:50.628143] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:18.273 [2024-07-24 23:17:50.628155] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:18.273 [2024-07-24 23:17:50.628164] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:18.273 [2024-07-24 23:17:50.629796] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:18.273 [2024-07-24 23:17:50.631623] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3
00:32:18.273 [2024-07-24 23:17:50.639110] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:18.273 [2024-07-24 23:17:50.639586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.273 [2024-07-24 23:17:50.639849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.273 [2024-07-24 23:17:50.639864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:18.273 [2024-07-24 23:17:50.639874] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:18.273 [2024-07-24 23:17:50.639989] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:18.273 [2024-07-24 23:17:50.640074] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:18.273 [2024-07-24 23:17:50.640085] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:18.273 [2024-07-24 23:17:50.640094] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:18.273 [2024-07-24 23:17:50.641938] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:18.273 [2024-07-24 23:17:50.650917] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:18.273 [2024-07-24 23:17:50.651290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.273 [2024-07-24 23:17:50.651545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.273 [2024-07-24 23:17:50.651559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:18.273 [2024-07-24 23:17:50.651570] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:18.273 [2024-07-24 23:17:50.651710] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:18.273 [2024-07-24 23:17:50.651846] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:18.273 [2024-07-24 23:17:50.651857] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:18.273 [2024-07-24 23:17:50.651867] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:18.273 [2024-07-24 23:17:50.653515] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:18.273 [2024-07-24 23:17:50.662831] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:18.273 [2024-07-24 23:17:50.663236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.273 [2024-07-24 23:17:50.663512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.273 [2024-07-24 23:17:50.663526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:18.273 [2024-07-24 23:17:50.663537] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:18.273 [2024-07-24 23:17:50.663648] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:18.273 [2024-07-24 23:17:50.663751] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:18.273 [2024-07-24 23:17:50.663762] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:18.273 [2024-07-24 23:17:50.663772] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:18.273 [2024-07-24 23:17:50.665477] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:18.273 [2024-07-24 23:17:50.670233] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:32:18.273 [2024-07-24 23:17:50.670334] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:32:18.273 [2024-07-24 23:17:50.670345] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:32:18.273 [2024-07-24 23:17:50.670354] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:32:18.273 [2024-07-24 23:17:50.670402] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:32:18.273 [2024-07-24 23:17:50.670506] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:32:18.273 [2024-07-24 23:17:50.670507] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:32:18.273 [2024-07-24 23:17:50.674870] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:18.273 [2024-07-24 23:17:50.675245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.273 [2024-07-24 23:17:50.675498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.273 [2024-07-24 23:17:50.675512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:18.273 [2024-07-24 23:17:50.675523] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:18.273 [2024-07-24 23:17:50.675670] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:18.273 [2024-07-24 23:17:50.675804] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:18.273 [2024-07-24 23:17:50.675817] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:18.273 [2024-07-24 23:17:50.675828] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:18.273 [2024-07-24 23:17:50.677667] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:18.273 [2024-07-24 23:17:50.686706] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:18.274 [2024-07-24 23:17:50.687164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.274 [2024-07-24 23:17:50.687410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.274 [2024-07-24 23:17:50.687424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:18.274 [2024-07-24 23:17:50.687435] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:18.274 [2024-07-24 23:17:50.687565] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:18.274 [2024-07-24 23:17:50.687668] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:18.274 [2024-07-24 23:17:50.687679] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:18.274 [2024-07-24 23:17:50.687689] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:18.274 [2024-07-24 23:17:50.689282] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:18.274 [2024-07-24 23:17:50.698577] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:18.274 [2024-07-24 23:17:50.699030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.533 [2024-07-24 23:17:50.699329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.533 [2024-07-24 23:17:50.699344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:18.533 [2024-07-24 23:17:50.699355] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:18.533 [2024-07-24 23:17:50.699456] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:18.533 [2024-07-24 23:17:50.699620] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:18.533 [2024-07-24 23:17:50.699632] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:18.533 [2024-07-24 23:17:50.699642] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:18.533 [2024-07-24 23:17:50.701489] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:18.533 [2024-07-24 23:17:50.710432] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:18.533 [2024-07-24 23:17:50.710867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.533 [2024-07-24 23:17:50.711113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.533 [2024-07-24 23:17:50.711128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:18.533 [2024-07-24 23:17:50.711140] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:18.533 [2024-07-24 23:17:50.711285] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:18.533 [2024-07-24 23:17:50.711430] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:18.534 [2024-07-24 23:17:50.711442] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:18.534 [2024-07-24 23:17:50.711452] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:18.534 [2024-07-24 23:17:50.713142] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:18.534 [2024-07-24 23:17:50.722391] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:18.534 [2024-07-24 23:17:50.722838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.534 [2024-07-24 23:17:50.723034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.534 [2024-07-24 23:17:50.723048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:18.534 [2024-07-24 23:17:50.723059] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:18.534 [2024-07-24 23:17:50.723203] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:18.534 [2024-07-24 23:17:50.723347] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:18.534 [2024-07-24 23:17:50.723358] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:18.534 [2024-07-24 23:17:50.723367] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:18.534 [2024-07-24 23:17:50.725144] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:18.534 [2024-07-24 23:17:50.734288] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:18.534 [2024-07-24 23:17:50.734735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.534 [2024-07-24 23:17:50.735054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.534 [2024-07-24 23:17:50.735067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:18.534 [2024-07-24 23:17:50.735079] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:18.534 [2024-07-24 23:17:50.735179] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:18.534 [2024-07-24 23:17:50.735309] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:18.534 [2024-07-24 23:17:50.735325] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:18.534 [2024-07-24 23:17:50.735334] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:18.534 [2024-07-24 23:17:50.737052] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:18.534 [2024-07-24 23:17:50.746142] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:18.534 [2024-07-24 23:17:50.746647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.534 [2024-07-24 23:17:50.746902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.534 [2024-07-24 23:17:50.746917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:18.534 [2024-07-24 23:17:50.746927] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:18.534 [2024-07-24 23:17:50.747056] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:18.534 [2024-07-24 23:17:50.747184] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:18.534 [2024-07-24 23:17:50.747196] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:18.534 [2024-07-24 23:17:50.747206] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:18.534 [2024-07-24 23:17:50.749036] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:18.534 [2024-07-24 23:17:50.757997] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:18.534 [2024-07-24 23:17:50.758398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.534 [2024-07-24 23:17:50.758741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.534 [2024-07-24 23:17:50.758756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:18.534 [2024-07-24 23:17:50.758766] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:18.534 [2024-07-24 23:17:50.758851] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:18.534 [2024-07-24 23:17:50.758965] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:18.534 [2024-07-24 23:17:50.758976] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:18.534 [2024-07-24 23:17:50.758985] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:18.534 [2024-07-24 23:17:50.760758] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:18.534 [2024-07-24 23:17:50.769819] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:18.534 [2024-07-24 23:17:50.770201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.534 [2024-07-24 23:17:50.770400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.534 [2024-07-24 23:17:50.770414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:18.534 [2024-07-24 23:17:50.770424] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:18.534 [2024-07-24 23:17:50.770524] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:18.534 [2024-07-24 23:17:50.770666] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:18.534 [2024-07-24 23:17:50.770677] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:18.534 [2024-07-24 23:17:50.770690] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:18.534 [2024-07-24 23:17:50.772351] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:18.534 [2024-07-24 23:17:50.781666] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:18.534 [2024-07-24 23:17:50.782096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.534 [2024-07-24 23:17:50.782359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.534 [2024-07-24 23:17:50.782372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:18.534 [2024-07-24 23:17:50.782382] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:18.534 [2024-07-24 23:17:50.782496] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:18.534 [2024-07-24 23:17:50.782595] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:18.534 [2024-07-24 23:17:50.782605] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:18.534 [2024-07-24 23:17:50.782614] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:18.534 [2024-07-24 23:17:50.784331] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:18.534 [2024-07-24 23:17:50.793530] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:18.534 [2024-07-24 23:17:50.793967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.534 [2024-07-24 23:17:50.794233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.534 [2024-07-24 23:17:50.794246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:18.534 [2024-07-24 23:17:50.794256] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:18.534 [2024-07-24 23:17:50.794367] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:18.534 [2024-07-24 23:17:50.794463] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:18.534 [2024-07-24 23:17:50.794474] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:18.534 [2024-07-24 23:17:50.794483] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:18.534 [2024-07-24 23:17:50.796180] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:18.534 [2024-07-24 23:17:50.805500] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:18.534 [2024-07-24 23:17:50.805986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.534 [2024-07-24 23:17:50.806233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.534 [2024-07-24 23:17:50.806247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:18.534 [2024-07-24 23:17:50.806257] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:18.534 [2024-07-24 23:17:50.806357] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:18.534 [2024-07-24 23:17:50.806500] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:18.534 [2024-07-24 23:17:50.806511] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:18.534 [2024-07-24 23:17:50.806521] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:18.534 [2024-07-24 23:17:50.808235] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:18.534 [2024-07-24 23:17:50.817364] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:18.534 [2024-07-24 23:17:50.817736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.534 [2024-07-24 23:17:50.817934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.534 [2024-07-24 23:17:50.817948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:18.534 [2024-07-24 23:17:50.817958] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:18.534 [2024-07-24 23:17:50.818044] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:18.534 [2024-07-24 23:17:50.818187] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:18.534 [2024-07-24 23:17:50.818198] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:18.534 [2024-07-24 23:17:50.818208] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:18.534 [2024-07-24 23:17:50.819967] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:18.535 [2024-07-24 23:17:50.829313] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:18.535 [2024-07-24 23:17:50.829795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.535 [2024-07-24 23:17:50.830044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.535 [2024-07-24 23:17:50.830058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:18.535 [2024-07-24 23:17:50.830068] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:18.535 [2024-07-24 23:17:50.830181] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:18.535 [2024-07-24 23:17:50.830281] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:18.535 [2024-07-24 23:17:50.830291] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:18.535 [2024-07-24 23:17:50.830300] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:18.535 [2024-07-24 23:17:50.831904] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:18.535 [2024-07-24 23:17:50.841260] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:18.535 [2024-07-24 23:17:50.841683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.535 [2024-07-24 23:17:50.842002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.535 [2024-07-24 23:17:50.842017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:18.535 [2024-07-24 23:17:50.842026] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:18.535 [2024-07-24 23:17:50.842153] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:18.535 [2024-07-24 23:17:50.842310] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:18.535 [2024-07-24 23:17:50.842322] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:18.535 [2024-07-24 23:17:50.842331] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:18.535 [2024-07-24 23:17:50.844064] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:18.535 [2024-07-24 23:17:50.853270] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:18.535 [2024-07-24 23:17:50.853749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.535 [2024-07-24 23:17:50.853988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.535 [2024-07-24 23:17:50.854002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:18.535 [2024-07-24 23:17:50.854012] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:18.535 [2024-07-24 23:17:50.854154] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:18.535 [2024-07-24 23:17:50.854253] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:18.535 [2024-07-24 23:17:50.854265] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:18.535 [2024-07-24 23:17:50.854275] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:18.535 [2024-07-24 23:17:50.855907] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:18.535 [2024-07-24 23:17:50.865302] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:18.535 [2024-07-24 23:17:50.865757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.535 [2024-07-24 23:17:50.866054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.535 [2024-07-24 23:17:50.866068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:18.535 [2024-07-24 23:17:50.866078] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:18.535 [2024-07-24 23:17:50.866191] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:18.535 [2024-07-24 23:17:50.866306] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:18.535 [2024-07-24 23:17:50.866317] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:18.535 [2024-07-24 23:17:50.866326] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:18.535 [2024-07-24 23:17:50.868157] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:18.535 [2024-07-24 23:17:50.877259] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:18.535 [2024-07-24 23:17:50.877622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.535 [2024-07-24 23:17:50.877938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.535 [2024-07-24 23:17:50.877952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:18.535 [2024-07-24 23:17:50.877962] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:18.535 [2024-07-24 23:17:50.878061] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:18.535 [2024-07-24 23:17:50.878175] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:18.535 [2024-07-24 23:17:50.878185] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:18.535 [2024-07-24 23:17:50.878194] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:18.535 [2024-07-24 23:17:50.879809] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:18.535 [2024-07-24 23:17:50.889070] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:18.535 [2024-07-24 23:17:50.889511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.535 [2024-07-24 23:17:50.889828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.535 [2024-07-24 23:17:50.889842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:18.535 [2024-07-24 23:17:50.889852] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:18.535 [2024-07-24 23:17:50.889966] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:18.535 [2024-07-24 23:17:50.890108] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:18.535 [2024-07-24 23:17:50.890120] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:18.535 [2024-07-24 23:17:50.890129] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:18.535 [2024-07-24 23:17:50.891812] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:18.535 [2024-07-24 23:17:50.900916] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:18.535 [2024-07-24 23:17:50.901358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.535 [2024-07-24 23:17:50.901641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.535 [2024-07-24 23:17:50.901655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:18.535 [2024-07-24 23:17:50.901665] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:18.535 [2024-07-24 23:17:50.901812] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:18.535 [2024-07-24 23:17:50.901941] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:18.535 [2024-07-24 23:17:50.901952] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:18.535 [2024-07-24 23:17:50.901963] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:18.535 [2024-07-24 23:17:50.903786] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:18.535 [2024-07-24 23:17:50.912879] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:18.535 [2024-07-24 23:17:50.913279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.535 [2024-07-24 23:17:50.913597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.535 [2024-07-24 23:17:50.913610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:18.535 [2024-07-24 23:17:50.913620] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:18.535 [2024-07-24 23:17:50.913725] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:18.535 [2024-07-24 23:17:50.913825] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:18.535 [2024-07-24 23:17:50.913836] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:18.535 [2024-07-24 23:17:50.913845] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:18.535 [2024-07-24 23:17:50.915657] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:18.535 [2024-07-24 23:17:50.924815] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:18.535 [2024-07-24 23:17:50.925278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.535 [2024-07-24 23:17:50.925610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.535 [2024-07-24 23:17:50.925623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:18.535 [2024-07-24 23:17:50.925635] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:18.535 [2024-07-24 23:17:50.925784] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:18.535 [2024-07-24 23:17:50.925912] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:18.535 [2024-07-24 23:17:50.925923] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:18.535 [2024-07-24 23:17:50.925933] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:18.535 [2024-07-24 23:17:50.927589] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:18.536 [2024-07-24 23:17:50.936726] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:18.536 [2024-07-24 23:17:50.937119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.536 [2024-07-24 23:17:50.937364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.536 [2024-07-24 23:17:50.937378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:18.536 [2024-07-24 23:17:50.937388] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:18.536 [2024-07-24 23:17:50.937530] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:18.536 [2024-07-24 23:17:50.937616] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:18.536 [2024-07-24 23:17:50.937628] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:18.536 [2024-07-24 23:17:50.937638] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:18.536 [2024-07-24 23:17:50.939395] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:18.536 [2024-07-24 23:17:50.948565] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:18.536 [2024-07-24 23:17:50.949016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.536 [2024-07-24 23:17:50.949340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.536 [2024-07-24 23:17:50.949354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:18.536 [2024-07-24 23:17:50.949364] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:18.536 [2024-07-24 23:17:50.949478] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:18.536 [2024-07-24 23:17:50.949606] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:18.536 [2024-07-24 23:17:50.949618] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:18.536 [2024-07-24 23:17:50.949627] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:18.536 [2024-07-24 23:17:50.951244] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:18.536 [2024-07-24 23:17:50.960416] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:18.536 [2024-07-24 23:17:50.960846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.536 [2024-07-24 23:17:50.961162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.536 [2024-07-24 23:17:50.961176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:18.536 [2024-07-24 23:17:50.961185] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:18.536 [2024-07-24 23:17:50.961303] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:18.536 [2024-07-24 23:17:50.961402] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:18.536 [2024-07-24 23:17:50.961413] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:18.536 [2024-07-24 23:17:50.961422] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:18.795 [2024-07-24 23:17:50.963180] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:18.795 [2024-07-24 23:17:50.972363] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:18.795 [2024-07-24 23:17:50.972784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.795 [2024-07-24 23:17:50.973102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.795 [2024-07-24 23:17:50.973116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:18.795 [2024-07-24 23:17:50.973126] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:18.795 [2024-07-24 23:17:50.973255] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:18.795 [2024-07-24 23:17:50.973340] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:18.795 [2024-07-24 23:17:50.973350] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:18.795 [2024-07-24 23:17:50.973359] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:18.795 [2024-07-24 23:17:50.975074] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:18.795 [2024-07-24 23:17:50.984240] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:18.795 [2024-07-24 23:17:50.984683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.795 [2024-07-24 23:17:50.984996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.795 [2024-07-24 23:17:50.985010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:18.795 [2024-07-24 23:17:50.985020] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:18.795 [2024-07-24 23:17:50.985148] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:18.795 [2024-07-24 23:17:50.985232] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:18.795 [2024-07-24 23:17:50.985243] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:18.795 [2024-07-24 23:17:50.985252] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:18.795 [2024-07-24 23:17:50.986866] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:18.795 [2024-07-24 23:17:50.996166] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:18.795 [2024-07-24 23:17:50.996626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.795 [2024-07-24 23:17:50.996946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.795 [2024-07-24 23:17:50.996961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:18.795 [2024-07-24 23:17:50.996971] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:18.795 [2024-07-24 23:17:50.997086] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:18.795 [2024-07-24 23:17:50.997218] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:18.795 [2024-07-24 23:17:50.997230] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:18.795 [2024-07-24 23:17:50.997239] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:18.795 [2024-07-24 23:17:50.998810] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:18.795 [2024-07-24 23:17:51.008207] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:18.795 [2024-07-24 23:17:51.008649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.795 [2024-07-24 23:17:51.008930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.795 [2024-07-24 23:17:51.008944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:18.796 [2024-07-24 23:17:51.008954] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:18.796 [2024-07-24 23:17:51.009068] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:18.796 [2024-07-24 23:17:51.009182] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:18.796 [2024-07-24 23:17:51.009192] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:18.796 [2024-07-24 23:17:51.009201] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:18.796 [2024-07-24 23:17:51.010758] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:18.796 [2024-07-24 23:17:51.019962] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:18.796 [2024-07-24 23:17:51.020429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.796 [2024-07-24 23:17:51.020764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.796 [2024-07-24 23:17:51.020778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:18.796 [2024-07-24 23:17:51.020789] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:18.796 [2024-07-24 23:17:51.020888] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:18.796 [2024-07-24 23:17:51.021017] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:18.796 [2024-07-24 23:17:51.021027] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:18.796 [2024-07-24 23:17:51.021036] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:18.796 [2024-07-24 23:17:51.022690] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:18.796 [2024-07-24 23:17:51.031783] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:18.796 [2024-07-24 23:17:51.032235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.796 [2024-07-24 23:17:51.032565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.796 [2024-07-24 23:17:51.032579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:18.796 [2024-07-24 23:17:51.032589] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:18.796 [2024-07-24 23:17:51.032702] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:18.796 [2024-07-24 23:17:51.032792] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:18.796 [2024-07-24 23:17:51.032806] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:18.796 [2024-07-24 23:17:51.032816] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:18.796 [2024-07-24 23:17:51.034427] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:18.796 [2024-07-24 23:17:51.043723] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:18.796 [2024-07-24 23:17:51.044201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.796 [2024-07-24 23:17:51.044517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.796 [2024-07-24 23:17:51.044530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:18.796 [2024-07-24 23:17:51.044540] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:18.796 [2024-07-24 23:17:51.044611] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:18.796 [2024-07-24 23:17:51.044731] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:18.796 [2024-07-24 23:17:51.044742] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:18.796 [2024-07-24 23:17:51.044753] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:18.796 [2024-07-24 23:17:51.046492] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:18.796 [2024-07-24 23:17:51.055737] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:18.796 [2024-07-24 23:17:51.056128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.796 [2024-07-24 23:17:51.056440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.796 [2024-07-24 23:17:51.056453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:18.796 [2024-07-24 23:17:51.056463] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:18.796 [2024-07-24 23:17:51.056577] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:18.796 [2024-07-24 23:17:51.056705] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:18.796 [2024-07-24 23:17:51.056722] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:18.796 [2024-07-24 23:17:51.056732] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:18.796 [2024-07-24 23:17:51.058386] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:18.796 [2024-07-24 23:17:51.067551] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:18.796 [2024-07-24 23:17:51.068008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.796 [2024-07-24 23:17:51.068269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.796 [2024-07-24 23:17:51.068283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:18.796 [2024-07-24 23:17:51.068293] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:18.796 [2024-07-24 23:17:51.068421] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:18.796 [2024-07-24 23:17:51.068520] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:18.796 [2024-07-24 23:17:51.068530] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:18.796 [2024-07-24 23:17:51.068543] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:18.796 [2024-07-24 23:17:51.070240] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:18.796 [2024-07-24 23:17:51.079376] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:18.796 [2024-07-24 23:17:51.079794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.796 [2024-07-24 23:17:51.080038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.796 [2024-07-24 23:17:51.080052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:18.796 [2024-07-24 23:17:51.080062] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:18.796 [2024-07-24 23:17:51.080176] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:18.796 [2024-07-24 23:17:51.080289] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:18.796 [2024-07-24 23:17:51.080300] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:18.796 [2024-07-24 23:17:51.080309] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:18.796 [2024-07-24 23:17:51.081909] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:18.796 [2024-07-24 23:17:51.091224] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:18.796 [2024-07-24 23:17:51.091582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.796 [2024-07-24 23:17:51.091826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.796 [2024-07-24 23:17:51.091840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:18.796 [2024-07-24 23:17:51.091850] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:18.796 [2024-07-24 23:17:51.091979] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:18.796 [2024-07-24 23:17:51.092108] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:18.796 [2024-07-24 23:17:51.092120] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:18.796 [2024-07-24 23:17:51.092129] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:18.796 [2024-07-24 23:17:51.093925] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:18.796 [2024-07-24 23:17:51.103157] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:18.796 [2024-07-24 23:17:51.103598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.796 [2024-07-24 23:17:51.103917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.796 [2024-07-24 23:17:51.103931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:18.796 [2024-07-24 23:17:51.103941] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:18.796 [2024-07-24 23:17:51.104055] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:18.796 [2024-07-24 23:17:51.104168] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:18.796 [2024-07-24 23:17:51.104180] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:18.796 [2024-07-24 23:17:51.104190] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:18.796 [2024-07-24 23:17:51.106172] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:18.796 [2024-07-24 23:17:51.115043] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:18.796 [2024-07-24 23:17:51.115454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.796 [2024-07-24 23:17:51.115746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.796 [2024-07-24 23:17:51.115760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:18.796 [2024-07-24 23:17:51.115771] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:18.796 [2024-07-24 23:17:51.115900] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:18.796 [2024-07-24 23:17:51.116043] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:18.797 [2024-07-24 23:17:51.116055] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:18.797 [2024-07-24 23:17:51.116064] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:18.797 [2024-07-24 23:17:51.117647] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:18.797 [2024-07-24 23:17:51.126809] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:18.797 [2024-07-24 23:17:51.127269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.797 [2024-07-24 23:17:51.127618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.797 [2024-07-24 23:17:51.127631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:18.797 [2024-07-24 23:17:51.127641] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:18.797 [2024-07-24 23:17:51.127760] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:18.797 [2024-07-24 23:17:51.127889] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:18.797 [2024-07-24 23:17:51.127900] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:18.797 [2024-07-24 23:17:51.127909] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:18.797 [2024-07-24 23:17:51.129778] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:18.797 [2024-07-24 23:17:51.138635] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:18.797 [2024-07-24 23:17:51.139110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.797 [2024-07-24 23:17:51.139373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.797 [2024-07-24 23:17:51.139386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:18.797 [2024-07-24 23:17:51.139396] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:18.797 [2024-07-24 23:17:51.139510] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:18.797 [2024-07-24 23:17:51.139652] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:18.797 [2024-07-24 23:17:51.139664] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:18.797 [2024-07-24 23:17:51.139673] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:18.797 [2024-07-24 23:17:51.141260] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:18.797 [2024-07-24 23:17:51.150477] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:18.797 [2024-07-24 23:17:51.150912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.797 [2024-07-24 23:17:51.151166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.797 [2024-07-24 23:17:51.151180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:18.797 [2024-07-24 23:17:51.151190] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:18.797 [2024-07-24 23:17:51.151289] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:18.797 [2024-07-24 23:17:51.151403] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:18.797 [2024-07-24 23:17:51.151413] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:18.797 [2024-07-24 23:17:51.151423] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:18.797 [2024-07-24 23:17:51.153097] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:18.797 [2024-07-24 23:17:51.162398] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:18.797 [2024-07-24 23:17:51.162879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.797 [2024-07-24 23:17:51.163164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.797 [2024-07-24 23:17:51.163178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:18.797 [2024-07-24 23:17:51.163188] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:18.797 [2024-07-24 23:17:51.163287] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:18.797 [2024-07-24 23:17:51.163400] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:18.797 [2024-07-24 23:17:51.163411] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:18.797 [2024-07-24 23:17:51.163420] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:18.797 [2024-07-24 23:17:51.165306] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:18.797 [2024-07-24 23:17:51.174205] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:18.797 [2024-07-24 23:17:51.174624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.797 [2024-07-24 23:17:51.174915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.797 [2024-07-24 23:17:51.174930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:18.797 [2024-07-24 23:17:51.174940] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:18.797 [2024-07-24 23:17:51.175071] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:18.797 [2024-07-24 23:17:51.175200] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:18.797 [2024-07-24 23:17:51.175211] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:18.797 [2024-07-24 23:17:51.175221] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:18.797 [2024-07-24 23:17:51.176966] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:18.797 [2024-07-24 23:17:51.185998] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:18.797 [2024-07-24 23:17:51.186380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.797 [2024-07-24 23:17:51.186691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.797 [2024-07-24 23:17:51.186705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:18.797 [2024-07-24 23:17:51.186719] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:18.797 [2024-07-24 23:17:51.186833] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:18.797 [2024-07-24 23:17:51.186932] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:18.797 [2024-07-24 23:17:51.186942] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:18.797 [2024-07-24 23:17:51.186951] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:18.797 [2024-07-24 23:17:51.188562] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:18.797 [2024-07-24 23:17:51.197846] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:18.797 [2024-07-24 23:17:51.198210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.797 [2024-07-24 23:17:51.198442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.797 [2024-07-24 23:17:51.198455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:18.797 [2024-07-24 23:17:51.198465] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:18.797 [2024-07-24 23:17:51.198550] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:18.797 [2024-07-24 23:17:51.198692] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:18.797 [2024-07-24 23:17:51.198703] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:18.797 [2024-07-24 23:17:51.198712] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:18.797 [2024-07-24 23:17:51.200243] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:18.797 [2024-07-24 23:17:51.209826] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:18.797 [2024-07-24 23:17:51.210263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.797 [2024-07-24 23:17:51.210586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.797 [2024-07-24 23:17:51.210601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:18.797 [2024-07-24 23:17:51.210611] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:18.797 [2024-07-24 23:17:51.210758] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:18.797 [2024-07-24 23:17:51.210887] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:18.797 [2024-07-24 23:17:51.210899] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:18.797 [2024-07-24 23:17:51.210909] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:18.797 [2024-07-24 23:17:51.212506] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:18.797 [2024-07-24 23:17:51.221676] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:18.797 [2024-07-24 23:17:51.221994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.797 [2024-07-24 23:17:51.222234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.797 [2024-07-24 23:17:51.222251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:18.797 [2024-07-24 23:17:51.222262] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:18.797 [2024-07-24 23:17:51.222362] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:18.797 [2024-07-24 23:17:51.222505] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:18.797 [2024-07-24 23:17:51.222517] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:18.797 [2024-07-24 23:17:51.222526] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:19.057 [2024-07-24 23:17:51.224146] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:19.057 [2024-07-24 23:17:51.233625] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:19.057 [2024-07-24 23:17:51.233989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.057 [2024-07-24 23:17:51.234130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.057 [2024-07-24 23:17:51.234143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:19.057 [2024-07-24 23:17:51.234155] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:19.057 [2024-07-24 23:17:51.234269] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:19.057 [2024-07-24 23:17:51.234383] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:19.057 [2024-07-24 23:17:51.234395] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:19.057 [2024-07-24 23:17:51.234404] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:19.057 [2024-07-24 23:17:51.236020] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:19.057 [2024-07-24 23:17:51.245471] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:19.057 [2024-07-24 23:17:51.245847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.057 [2024-07-24 23:17:51.246059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.057 [2024-07-24 23:17:51.246073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:19.057 [2024-07-24 23:17:51.246083] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:19.057 [2024-07-24 23:17:51.246169] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:19.057 [2024-07-24 23:17:51.246297] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:19.058 [2024-07-24 23:17:51.246308] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:19.058 [2024-07-24 23:17:51.246318] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:19.058 [2024-07-24 23:17:51.247989] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:19.058 [2024-07-24 23:17:51.257333] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:19.058 [2024-07-24 23:17:51.257787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.058 [2024-07-24 23:17:51.258108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.058 [2024-07-24 23:17:51.258122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:19.058 [2024-07-24 23:17:51.258135] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:19.058 [2024-07-24 23:17:51.258249] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:19.058 [2024-07-24 23:17:51.258350] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:19.058 [2024-07-24 23:17:51.258361] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:19.058 [2024-07-24 23:17:51.258370] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:19.058 [2024-07-24 23:17:51.260141] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:19.058 [2024-07-24 23:17:51.269313] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:19.058 [2024-07-24 23:17:51.269737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.058 [2024-07-24 23:17:51.270014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.058 [2024-07-24 23:17:51.270027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:19.058 [2024-07-24 23:17:51.270039] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:19.058 [2024-07-24 23:17:51.270167] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:19.058 [2024-07-24 23:17:51.270282] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:19.058 [2024-07-24 23:17:51.270294] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:19.058 [2024-07-24 23:17:51.270303] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:19.058 [2024-07-24 23:17:51.272220] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:19.058 [2024-07-24 23:17:51.281102] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:19.058 [2024-07-24 23:17:51.281448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.058 [2024-07-24 23:17:51.281721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.058 [2024-07-24 23:17:51.281736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:19.058 [2024-07-24 23:17:51.281749] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:19.058 [2024-07-24 23:17:51.281851] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:19.058 [2024-07-24 23:17:51.281994] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:19.058 [2024-07-24 23:17:51.282005] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:19.058 [2024-07-24 23:17:51.282015] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:19.058 [2024-07-24 23:17:51.283874] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:19.058 [2024-07-24 23:17:51.293025] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:19.058 [2024-07-24 23:17:51.293488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.058 [2024-07-24 23:17:51.293783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.058 [2024-07-24 23:17:51.293797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:19.058 [2024-07-24 23:17:51.293807] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:19.058 [2024-07-24 23:17:51.293897] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:19.058 [2024-07-24 23:17:51.294041] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:19.058 [2024-07-24 23:17:51.294052] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:19.058 [2024-07-24 23:17:51.294061] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:19.058 [2024-07-24 23:17:51.295807] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:19.058 [2024-07-24 23:17:51.304855] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:19.058 [2024-07-24 23:17:51.305246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.058 [2024-07-24 23:17:51.305495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.058 [2024-07-24 23:17:51.305509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:19.058 [2024-07-24 23:17:51.305518] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:19.058 [2024-07-24 23:17:51.305619] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:19.058 [2024-07-24 23:17:51.305751] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:19.058 [2024-07-24 23:17:51.305767] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:19.058 [2024-07-24 23:17:51.305778] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:19.058 [2024-07-24 23:17:51.307433] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:19.058 [2024-07-24 23:17:51.316816] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:19.058 [2024-07-24 23:17:51.317207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.058 [2024-07-24 23:17:51.317436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.058 [2024-07-24 23:17:51.317450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:19.058 [2024-07-24 23:17:51.317460] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:19.058 [2024-07-24 23:17:51.317531] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:19.058 [2024-07-24 23:17:51.317660] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:19.058 [2024-07-24 23:17:51.317670] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:19.058 [2024-07-24 23:17:51.317679] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:19.058 [2024-07-24 23:17:51.319241] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:19.058 [2024-07-24 23:17:51.328585] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:19.058 [2024-07-24 23:17:51.329074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.058 [2024-07-24 23:17:51.329322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.058 [2024-07-24 23:17:51.329336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420
00:32:19.058 [2024-07-24 23:17:51.329345] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set
00:32:19.058 [2024-07-24 23:17:51.329459] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor
00:32:19.058 [2024-07-24 23:17:51.329548] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:19.058 [2024-07-24 23:17:51.329558] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:19.058 [2024-07-24 23:17:51.329567] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:19.058 [2024-07-24 23:17:51.331158] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:19.058 [2024-07-24 23:17:51.340474] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:19.058 [2024-07-24 23:17:51.340924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.058 [2024-07-24 23:17:51.341222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.058 [2024-07-24 23:17:51.341235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:19.058 [2024-07-24 23:17:51.341245] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:19.058 [2024-07-24 23:17:51.341358] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:19.058 [2024-07-24 23:17:51.341429] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:19.058 [2024-07-24 23:17:51.341440] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:19.058 [2024-07-24 23:17:51.341450] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:19.058 [2024-07-24 23:17:51.343136] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:19.058 [2024-07-24 23:17:51.352346] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:19.058 [2024-07-24 23:17:51.352730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.058 [2024-07-24 23:17:51.352966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.058 [2024-07-24 23:17:51.352979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:19.058 [2024-07-24 23:17:51.352989] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:19.058 [2024-07-24 23:17:51.353075] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:19.058 [2024-07-24 23:17:51.353188] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:19.058 [2024-07-24 23:17:51.353199] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:19.058 [2024-07-24 23:17:51.353208] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:19.058 23:17:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:19.058 23:17:51 -- common/autotest_common.sh@852 -- # return 0 00:32:19.059 23:17:51 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:32:19.059 23:17:51 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:19.059 23:17:51 -- common/autotest_common.sh@10 -- # set +x 00:32:19.059 [2024-07-24 23:17:51.354782] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:19.059 [2024-07-24 23:17:51.364274] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:19.059 [2024-07-24 23:17:51.364607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.059 [2024-07-24 23:17:51.364927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.059 [2024-07-24 23:17:51.364941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:19.059 [2024-07-24 23:17:51.364950] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:19.059 [2024-07-24 23:17:51.365067] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:19.059 [2024-07-24 23:17:51.365210] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:19.059 [2024-07-24 23:17:51.365222] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:19.059 [2024-07-24 23:17:51.365232] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:19.059 [2024-07-24 23:17:51.367003] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:19.059 [2024-07-24 23:17:51.376200] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:19.059 [2024-07-24 23:17:51.376568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.059 [2024-07-24 23:17:51.376815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.059 [2024-07-24 23:17:51.376830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:19.059 [2024-07-24 23:17:51.376840] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:19.059 [2024-07-24 23:17:51.376983] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:19.059 [2024-07-24 23:17:51.377112] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:19.059 [2024-07-24 23:17:51.377123] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:19.059 [2024-07-24 23:17:51.377133] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:19.059 [2024-07-24 23:17:51.378877] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:19.059 [2024-07-24 23:17:51.388159] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:19.059 [2024-07-24 23:17:51.388617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.059 [2024-07-24 23:17:51.388875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.059 [2024-07-24 23:17:51.388889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:19.059 [2024-07-24 23:17:51.388899] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:19.059 [2024-07-24 23:17:51.389026] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:19.059 [2024-07-24 23:17:51.389141] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:19.059 [2024-07-24 23:17:51.389152] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:19.059 [2024-07-24 23:17:51.389161] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:19.059 [2024-07-24 23:17:51.390751] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:19.059 23:17:51 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:19.059 23:17:51 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:19.059 23:17:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:19.059 23:17:51 -- common/autotest_common.sh@10 -- # set +x 00:32:19.059 [2024-07-24 23:17:51.400175] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:19.059 [2024-07-24 23:17:51.400516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.059 [2024-07-24 23:17:51.400719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.059 [2024-07-24 23:17:51.400733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:19.059 [2024-07-24 23:17:51.400745] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:19.059 [2024-07-24 23:17:51.400846] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:19.059 [2024-07-24 23:17:51.400959] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:19.059 [2024-07-24 23:17:51.400970] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:19.059 [2024-07-24 23:17:51.400979] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:19.059 [2024-07-24 23:17:51.402161] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:19.059 [2024-07-24 23:17:51.402763] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:19.059 23:17:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:19.059 23:17:51 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:19.059 23:17:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:19.059 23:17:51 -- common/autotest_common.sh@10 -- # set +x 00:32:19.059 [2024-07-24 23:17:51.411900] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:19.059 [2024-07-24 23:17:51.412211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.059 [2024-07-24 23:17:51.412475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.059 [2024-07-24 23:17:51.412489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:19.059 [2024-07-24 23:17:51.412498] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:19.059 [2024-07-24 23:17:51.412626] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:19.059 [2024-07-24 23:17:51.412744] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:19.059 [2024-07-24 23:17:51.412756] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:19.059 [2024-07-24 23:17:51.412765] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:19.059 [2024-07-24 23:17:51.414444] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:19.059 [2024-07-24 23:17:51.423838] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:19.059 [2024-07-24 23:17:51.424176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.059 [2024-07-24 23:17:51.424422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.059 [2024-07-24 23:17:51.424435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:19.059 [2024-07-24 23:17:51.424445] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:19.059 [2024-07-24 23:17:51.424601] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:19.059 [2024-07-24 23:17:51.424734] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:19.059 [2024-07-24 23:17:51.424746] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:19.059 [2024-07-24 23:17:51.424755] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:19.059 [2024-07-24 23:17:51.426651] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:19.059 [2024-07-24 23:17:51.435831] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:19.059 [2024-07-24 23:17:51.436287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.059 [2024-07-24 23:17:51.436513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.059 [2024-07-24 23:17:51.436526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:19.059 [2024-07-24 23:17:51.436536] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:19.059 [2024-07-24 23:17:51.436650] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:19.059 [2024-07-24 23:17:51.436769] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:19.059 [2024-07-24 23:17:51.436785] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:19.059 [2024-07-24 23:17:51.436794] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:19.059 [2024-07-24 23:17:51.438550] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:19.059 Malloc0 00:32:19.059 23:17:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:19.059 23:17:51 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:19.059 23:17:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:19.059 23:17:51 -- common/autotest_common.sh@10 -- # set +x 00:32:19.059 [2024-07-24 23:17:51.447659] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:19.059 [2024-07-24 23:17:51.448122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.059 [2024-07-24 23:17:51.448442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.059 [2024-07-24 23:17:51.448456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:19.059 [2024-07-24 23:17:51.448466] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:19.059 [2024-07-24 23:17:51.448609] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:19.059 [2024-07-24 23:17:51.448742] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:19.059 [2024-07-24 23:17:51.448754] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:19.059 [2024-07-24 23:17:51.448765] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:19.059 [2024-07-24 23:17:51.450600] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:19.059 23:17:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:19.059 23:17:51 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:19.059 23:17:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:19.060 23:17:51 -- common/autotest_common.sh@10 -- # set +x 00:32:19.060 [2024-07-24 23:17:51.459577] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:19.060 [2024-07-24 23:17:51.460040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.060 [2024-07-24 23:17:51.460365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.060 [2024-07-24 23:17:51.460378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb05090 with addr=10.0.0.2, port=4420 00:32:19.060 [2024-07-24 23:17:51.460388] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05090 is same with the state(5) to be set 00:32:19.060 [2024-07-24 23:17:51.460545] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05090 (9): Bad file descriptor 00:32:19.060 [2024-07-24 23:17:51.460645] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:19.060 [2024-07-24 23:17:51.460656] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:19.060 [2024-07-24 23:17:51.460665] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:19.060 [2024-07-24 23:17:51.462413] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:19.060 23:17:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:19.060 23:17:51 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:19.060 23:17:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:19.060 23:17:51 -- common/autotest_common.sh@10 -- # set +x 00:32:19.060 [2024-07-24 23:17:51.469220] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:19.060 [2024-07-24 23:17:51.471477] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:19.060 23:17:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:19.060 23:17:51 -- host/bdevperf.sh@38 -- # wait 3406471 00:32:19.318 [2024-07-24 23:17:51.619192] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:32:29.287 00:32:29.287 Latency(us) 00:32:29.287 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:29.287 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:29.287 Verification LBA range: start 0x0 length 0x4000 00:32:29.287 Nvme1n1 : 15.01 13058.79 51.01 20562.20 0.00 3795.99 1048.58 17301.50 00:32:29.287 =================================================================================================================== 00:32:29.287 Total : 13058.79 51.01 20562.20 0.00 3795.99 1048.58 17301.50 00:32:29.287 23:18:00 -- host/bdevperf.sh@39 -- # sync 00:32:29.287 23:18:00 -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:29.287 23:18:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:29.287 23:18:00 -- common/autotest_common.sh@10 -- # set +x 00:32:29.287 23:18:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:29.287 23:18:00 -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:32:29.287 23:18:00 -- host/bdevperf.sh@44 -- # nvmftestfini 00:32:29.287 23:18:00 
-- nvmf/common.sh@476 -- # nvmfcleanup 00:32:29.287 23:18:00 -- nvmf/common.sh@116 -- # sync 00:32:29.287 23:18:00 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:32:29.287 23:18:00 -- nvmf/common.sh@119 -- # set +e 00:32:29.287 23:18:00 -- nvmf/common.sh@120 -- # for i in {1..20} 00:32:29.287 23:18:00 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:32:29.287 rmmod nvme_tcp 00:32:29.287 rmmod nvme_fabrics 00:32:29.287 rmmod nvme_keyring 00:32:29.287 23:18:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:32:29.287 23:18:00 -- nvmf/common.sh@123 -- # set -e 00:32:29.287 23:18:00 -- nvmf/common.sh@124 -- # return 0 00:32:29.287 23:18:00 -- nvmf/common.sh@477 -- # '[' -n 3407390 ']' 00:32:29.287 23:18:00 -- nvmf/common.sh@478 -- # killprocess 3407390 00:32:29.287 23:18:00 -- common/autotest_common.sh@926 -- # '[' -z 3407390 ']' 00:32:29.287 23:18:00 -- common/autotest_common.sh@930 -- # kill -0 3407390 00:32:29.287 23:18:00 -- common/autotest_common.sh@931 -- # uname 00:32:29.287 23:18:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:29.287 23:18:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3407390 00:32:29.287 23:18:00 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:32:29.287 23:18:00 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:32:29.287 23:18:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3407390' 00:32:29.287 killing process with pid 3407390 00:32:29.287 23:18:00 -- common/autotest_common.sh@945 -- # kill 3407390 00:32:29.287 23:18:00 -- common/autotest_common.sh@950 -- # wait 3407390 00:32:29.287 23:18:00 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:32:29.287 23:18:00 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:32:29.287 23:18:00 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:32:29.287 23:18:00 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:29.287 23:18:00 -- nvmf/common.sh@277 -- # remove_spdk_ns 
00:32:29.287 23:18:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:29.287 23:18:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:29.287 23:18:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:30.219 23:18:02 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:32:30.219 00:32:30.219 real 0m27.272s 00:32:30.219 user 1m2.028s 00:32:30.219 sys 0m8.051s 00:32:30.219 23:18:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:30.219 23:18:02 -- common/autotest_common.sh@10 -- # set +x 00:32:30.219 ************************************ 00:32:30.219 END TEST nvmf_bdevperf 00:32:30.219 ************************************ 00:32:30.219 23:18:02 -- nvmf/nvmf.sh@124 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:32:30.219 23:18:02 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:32:30.219 23:18:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:30.219 23:18:02 -- common/autotest_common.sh@10 -- # set +x 00:32:30.219 ************************************ 00:32:30.219 START TEST nvmf_target_disconnect 00:32:30.219 ************************************ 00:32:30.219 23:18:02 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:32:30.219 * Looking for test storage... 
00:32:30.219 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:30.219 23:18:02 -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:30.219 23:18:02 -- nvmf/common.sh@7 -- # uname -s 00:32:30.219 23:18:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:30.219 23:18:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:30.219 23:18:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:30.219 23:18:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:30.219 23:18:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:30.219 23:18:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:30.219 23:18:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:30.219 23:18:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:30.219 23:18:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:30.219 23:18:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:30.219 23:18:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:32:30.219 23:18:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:32:30.219 23:18:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:30.219 23:18:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:30.219 23:18:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:30.219 23:18:02 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:30.219 23:18:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:30.219 23:18:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:30.219 23:18:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:30.219 23:18:02 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:30.219 23:18:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:30.219 23:18:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:30.220 23:18:02 -- paths/export.sh@5 -- # export PATH 00:32:30.220 23:18:02 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:30.220 23:18:02 -- nvmf/common.sh@46 -- # : 0 00:32:30.220 23:18:02 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:32:30.220 23:18:02 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:32:30.220 23:18:02 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:32:30.220 23:18:02 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:30.220 23:18:02 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:30.220 23:18:02 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:32:30.220 23:18:02 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:32:30.220 23:18:02 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:32:30.220 23:18:02 -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:32:30.220 23:18:02 -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:32:30.220 23:18:02 -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:32:30.220 23:18:02 -- host/target_disconnect.sh@77 -- # nvmftestinit 00:32:30.220 23:18:02 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:32:30.220 23:18:02 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:30.220 23:18:02 -- nvmf/common.sh@436 -- # prepare_net_devs 00:32:30.220 23:18:02 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:32:30.220 23:18:02 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:32:30.220 23:18:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:30.220 23:18:02 -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:30.220 23:18:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:30.477 23:18:02 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:32:30.477 23:18:02 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:32:30.477 23:18:02 -- nvmf/common.sh@284 -- # xtrace_disable 00:32:30.477 23:18:02 -- common/autotest_common.sh@10 -- # set +x 00:32:37.037 23:18:09 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:32:37.037 23:18:09 -- nvmf/common.sh@290 -- # pci_devs=() 00:32:37.037 23:18:09 -- nvmf/common.sh@290 -- # local -a pci_devs 00:32:37.037 23:18:09 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:32:37.037 23:18:09 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:32:37.037 23:18:09 -- nvmf/common.sh@292 -- # pci_drivers=() 00:32:37.037 23:18:09 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:32:37.037 23:18:09 -- nvmf/common.sh@294 -- # net_devs=() 00:32:37.037 23:18:09 -- nvmf/common.sh@294 -- # local -ga net_devs 00:32:37.037 23:18:09 -- nvmf/common.sh@295 -- # e810=() 00:32:37.037 23:18:09 -- nvmf/common.sh@295 -- # local -ga e810 00:32:37.037 23:18:09 -- nvmf/common.sh@296 -- # x722=() 00:32:37.037 23:18:09 -- nvmf/common.sh@296 -- # local -ga x722 00:32:37.037 23:18:09 -- nvmf/common.sh@297 -- # mlx=() 00:32:37.037 23:18:09 -- nvmf/common.sh@297 -- # local -ga mlx 00:32:37.037 23:18:09 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:37.037 23:18:09 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:37.037 23:18:09 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:37.037 23:18:09 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:37.037 23:18:09 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:37.037 23:18:09 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:37.037 23:18:09 -- nvmf/common.sh@311 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:37.037 23:18:09 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:37.037 23:18:09 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:37.037 23:18:09 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:37.037 23:18:09 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:37.037 23:18:09 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:32:37.037 23:18:09 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:32:37.037 23:18:09 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:32:37.037 23:18:09 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:32:37.037 23:18:09 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:32:37.037 23:18:09 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:32:37.037 23:18:09 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:32:37.037 23:18:09 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:37.037 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:37.037 23:18:09 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:32:37.037 23:18:09 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:32:37.037 23:18:09 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:37.037 23:18:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:37.037 23:18:09 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:32:37.037 23:18:09 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:32:37.037 23:18:09 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:37.037 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:37.037 23:18:09 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:32:37.038 23:18:09 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:32:37.038 23:18:09 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:37.038 23:18:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:37.038 23:18:09 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 
00:32:37.038 23:18:09 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:32:37.038 23:18:09 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:32:37.038 23:18:09 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:32:37.038 23:18:09 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:32:37.038 23:18:09 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:37.038 23:18:09 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:32:37.038 23:18:09 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:37.038 23:18:09 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:37.038 Found net devices under 0000:af:00.0: cvl_0_0 00:32:37.038 23:18:09 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:32:37.038 23:18:09 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:32:37.038 23:18:09 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:37.038 23:18:09 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:32:37.038 23:18:09 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:37.038 23:18:09 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:37.038 Found net devices under 0000:af:00.1: cvl_0_1 00:32:37.038 23:18:09 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:32:37.038 23:18:09 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:32:37.038 23:18:09 -- nvmf/common.sh@402 -- # is_hw=yes 00:32:37.038 23:18:09 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:32:37.038 23:18:09 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:32:37.038 23:18:09 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:32:37.038 23:18:09 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:37.038 23:18:09 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:37.038 23:18:09 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:37.038 23:18:09 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:32:37.038 23:18:09 -- nvmf/common.sh@235 -- 
# NVMF_TARGET_INTERFACE=cvl_0_0 00:32:37.038 23:18:09 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:37.038 23:18:09 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:32:37.038 23:18:09 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:37.038 23:18:09 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:37.038 23:18:09 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:32:37.038 23:18:09 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:32:37.038 23:18:09 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:32:37.038 23:18:09 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:37.038 23:18:09 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:37.038 23:18:09 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:37.038 23:18:09 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:32:37.038 23:18:09 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:37.038 23:18:09 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:37.038 23:18:09 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:37.038 23:18:09 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:32:37.038 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:37.038 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:32:37.038 00:32:37.038 --- 10.0.0.2 ping statistics --- 00:32:37.038 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:37.038 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:32:37.297 23:18:09 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:37.297 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:37.297 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:32:37.297 00:32:37.297 --- 10.0.0.1 ping statistics --- 00:32:37.297 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:37.297 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:32:37.297 23:18:09 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:37.297 23:18:09 -- nvmf/common.sh@410 -- # return 0 00:32:37.297 23:18:09 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:32:37.297 23:18:09 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:37.297 23:18:09 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:32:37.297 23:18:09 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:32:37.297 23:18:09 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:37.297 23:18:09 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:32:37.297 23:18:09 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:32:37.297 23:18:09 -- host/target_disconnect.sh@78 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:32:37.297 23:18:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:32:37.297 23:18:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:37.297 23:18:09 -- common/autotest_common.sh@10 -- # set +x 00:32:37.297 ************************************ 00:32:37.297 START TEST nvmf_target_disconnect_tc1 00:32:37.297 ************************************ 00:32:37.297 23:18:09 -- common/autotest_common.sh@1104 -- # nvmf_target_disconnect_tc1 00:32:37.297 23:18:09 -- host/target_disconnect.sh@32 -- # set +e 00:32:37.297 23:18:09 -- host/target_disconnect.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:37.297 EAL: No free 2048 kB hugepages reported on node 1 00:32:37.297 [2024-07-24 23:18:09.611520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:37.297 
[2024-07-24 23:18:09.611879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:37.297 [2024-07-24 23:18:09.611894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ce40 with addr=10.0.0.2, port=4420 00:32:37.297 [2024-07-24 23:18:09.611921] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:32:37.297 [2024-07-24 23:18:09.611938] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:37.297 [2024-07-24 23:18:09.611947] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:32:37.297 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:32:37.297 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:32:37.297 Initializing NVMe Controllers 00:32:37.297 23:18:09 -- host/target_disconnect.sh@33 -- # trap - ERR 00:32:37.297 23:18:09 -- host/target_disconnect.sh@33 -- # print_backtrace 00:32:37.297 23:18:09 -- common/autotest_common.sh@1132 -- # [[ hxBET =~ e ]] 00:32:37.297 23:18:09 -- common/autotest_common.sh@1132 -- # return 0 00:32:37.297 23:18:09 -- host/target_disconnect.sh@37 -- # '[' 1 '!=' 1 ']' 00:32:37.297 23:18:09 -- host/target_disconnect.sh@41 -- # set -e 00:32:37.297 00:32:37.297 real 0m0.107s 00:32:37.297 user 0m0.035s 00:32:37.297 sys 0m0.072s 00:32:37.297 23:18:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:37.298 23:18:09 -- common/autotest_common.sh@10 -- # set +x 00:32:37.298 ************************************ 00:32:37.298 END TEST nvmf_target_disconnect_tc1 00:32:37.298 ************************************ 00:32:37.298 23:18:09 -- host/target_disconnect.sh@79 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:32:37.298 23:18:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:32:37.298 23:18:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:37.298 23:18:09 -- common/autotest_common.sh@10 -- # set +x 00:32:37.298 
************************************ 00:32:37.298 START TEST nvmf_target_disconnect_tc2 00:32:37.298 ************************************ 00:32:37.298 23:18:09 -- common/autotest_common.sh@1104 -- # nvmf_target_disconnect_tc2 00:32:37.298 23:18:09 -- host/target_disconnect.sh@45 -- # disconnect_init 10.0.0.2 00:32:37.298 23:18:09 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:32:37.298 23:18:09 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:32:37.298 23:18:09 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:37.298 23:18:09 -- common/autotest_common.sh@10 -- # set +x 00:32:37.298 23:18:09 -- nvmf/common.sh@469 -- # nvmfpid=3413159 00:32:37.298 23:18:09 -- nvmf/common.sh@470 -- # waitforlisten 3413159 00:32:37.298 23:18:09 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:32:37.298 23:18:09 -- common/autotest_common.sh@819 -- # '[' -z 3413159 ']' 00:32:37.298 23:18:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:37.298 23:18:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:37.298 23:18:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:37.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:37.298 23:18:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:37.298 23:18:09 -- common/autotest_common.sh@10 -- # set +x 00:32:37.557 [2024-07-24 23:18:09.730314] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:32:37.557 [2024-07-24 23:18:09.730362] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:37.557 EAL: No free 2048 kB hugepages reported on node 1 00:32:37.557 [2024-07-24 23:18:09.822071] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:37.557 [2024-07-24 23:18:09.860535] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:32:37.557 [2024-07-24 23:18:09.860651] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:37.557 [2024-07-24 23:18:09.860662] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:37.557 [2024-07-24 23:18:09.860671] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:37.557 [2024-07-24 23:18:09.860788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:32:37.557 [2024-07-24 23:18:09.860830] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:32:37.557 [2024-07-24 23:18:09.860939] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:32:37.557 [2024-07-24 23:18:09.860941] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:32:38.124 23:18:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:38.124 23:18:10 -- common/autotest_common.sh@852 -- # return 0 00:32:38.124 23:18:10 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:32:38.124 23:18:10 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:38.124 23:18:10 -- common/autotest_common.sh@10 -- # set +x 00:32:38.383 23:18:10 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:38.383 23:18:10 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:32:38.383 23:18:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:38.383 23:18:10 -- common/autotest_common.sh@10 -- # set +x 00:32:38.383 Malloc0 00:32:38.383 23:18:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:38.383 23:18:10 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:32:38.383 23:18:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:38.383 23:18:10 -- common/autotest_common.sh@10 -- # set +x 00:32:38.383 [2024-07-24 23:18:10.581202] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:38.383 23:18:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:38.383 23:18:10 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:38.383 23:18:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:38.383 23:18:10 -- common/autotest_common.sh@10 -- # set +x 00:32:38.383 23:18:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:38.383 23:18:10 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:38.383 23:18:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:38.383 23:18:10 -- common/autotest_common.sh@10 -- # set +x 00:32:38.383 23:18:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:38.383 23:18:10 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:38.383 23:18:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:38.383 23:18:10 -- common/autotest_common.sh@10 -- # set +x 00:32:38.383 [2024-07-24 23:18:10.609429] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:38.383 23:18:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:38.383 23:18:10 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:38.383 
23:18:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:38.383 23:18:10 -- common/autotest_common.sh@10 -- # set +x 00:32:38.383 23:18:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:38.383 23:18:10 -- host/target_disconnect.sh@50 -- # reconnectpid=3413441 00:32:38.383 23:18:10 -- host/target_disconnect.sh@52 -- # sleep 2 00:32:38.383 23:18:10 -- host/target_disconnect.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:38.383 EAL: No free 2048 kB hugepages reported on node 1 00:32:40.288 23:18:12 -- host/target_disconnect.sh@53 -- # kill -9 3413159 00:32:40.288 23:18:12 -- host/target_disconnect.sh@55 -- # sleep 2 00:32:40.288 Read completed with error (sct=0, sc=8) 00:32:40.288 starting I/O failed 00:32:40.288 Read completed with error (sct=0, sc=8) 00:32:40.288 starting I/O failed 00:32:40.288 Read completed with error (sct=0, sc=8) 00:32:40.288 starting I/O failed 00:32:40.288 Read completed with error (sct=0, sc=8) 00:32:40.288 starting I/O failed 00:32:40.288 Read completed with error (sct=0, sc=8) 00:32:40.288 starting I/O failed 00:32:40.288 Read completed with error (sct=0, sc=8) 00:32:40.288 starting I/O failed 00:32:40.288 Read completed with error (sct=0, sc=8) 00:32:40.288 starting I/O failed 00:32:40.288 Read completed with error (sct=0, sc=8) 00:32:40.288 starting I/O failed 00:32:40.288 Read completed with error (sct=0, sc=8) 00:32:40.288 starting I/O failed 00:32:40.289 Read completed with error (sct=0, sc=8) 00:32:40.289 starting I/O failed 00:32:40.289 Read completed with error (sct=0, sc=8) 00:32:40.289 starting I/O failed 00:32:40.289 Read completed with error (sct=0, sc=8) 00:32:40.289 starting I/O failed 00:32:40.289 Read completed with error (sct=0, sc=8) 00:32:40.289 starting I/O failed 00:32:40.289 Read completed with error (sct=0, sc=8) 00:32:40.289 starting I/O failed 
00:32:40.289 Read completed with error (sct=0, sc=8) 00:32:40.289 starting I/O failed 00:32:40.289 Read completed with error (sct=0, sc=8) 00:32:40.289 starting I/O failed 00:32:40.289 Read completed with error (sct=0, sc=8) 00:32:40.289 starting I/O failed 00:32:40.289 Write completed with error (sct=0, sc=8) 00:32:40.289 starting I/O failed 00:32:40.289 Read completed with error (sct=0, sc=8) 00:32:40.289 starting I/O failed 00:32:40.289 Read completed with error (sct=0, sc=8) 00:32:40.289 starting I/O failed 00:32:40.289 Read completed with error (sct=0, sc=8) 00:32:40.289 starting I/O failed 00:32:40.289 Write completed with error (sct=0, sc=8) 00:32:40.289 starting I/O failed 00:32:40.289 Write completed with error (sct=0, sc=8) 00:32:40.289 starting I/O failed 00:32:40.289 Read completed with error (sct=0, sc=8) 00:32:40.289 starting I/O failed 00:32:40.289 Read completed with error (sct=0, sc=8) 00:32:40.289 starting I/O failed 00:32:40.289 Read completed with error (sct=0, sc=8) 00:32:40.289 starting I/O failed 00:32:40.289 Read completed with error (sct=0, sc=8) 00:32:40.289 starting I/O failed 00:32:40.289 Write completed with error (sct=0, sc=8) 00:32:40.289 starting I/O failed 00:32:40.289 Read completed with error (sct=0, sc=8) 00:32:40.289 starting I/O failed 00:32:40.289 Read completed with error (sct=0, sc=8) 00:32:40.289 starting I/O failed 00:32:40.289 Write completed with error (sct=0, sc=8) 00:32:40.289 starting I/O failed 00:32:40.289 Read completed with error (sct=0, sc=8) 00:32:40.289 starting I/O failed 00:32:40.289 Read completed with error (sct=0, sc=8) 00:32:40.289 starting I/O failed 00:32:40.289 Read completed with error (sct=0, sc=8) 00:32:40.289 starting I/O failed 00:32:40.289 Read completed with error (sct=0, sc=8) 00:32:40.289 starting I/O failed 00:32:40.289 Read completed with error (sct=0, sc=8) 00:32:40.289 [2024-07-24 23:18:12.635697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No 
such device or address) on qpair id 4 00:32:40.289 starting I/O failed 00:32:40.289 Read completed with error (sct=0, sc=8) 00:32:40.289 starting I/O failed 00:32:40.289 Read completed with error (sct=0, sc=8) 00:32:40.289 starting I/O failed 00:32:40.289 Write completed with error (sct=0, sc=8) 00:32:40.289 starting I/O failed 00:32:40.289 Write completed with error (sct=0, sc=8) 00:32:40.289 starting I/O failed 00:32:40.289 Write completed with error (sct=0, sc=8) 00:32:40.289 starting I/O failed 00:32:40.289 Write completed with error (sct=0, sc=8) 00:32:40.289 starting I/O failed 00:32:40.289 Write completed with error (sct=0, sc=8) 00:32:40.289 starting I/O failed 00:32:40.289 Read completed with error (sct=0, sc=8) 00:32:40.289 starting I/O failed 00:32:40.289 Read completed with error (sct=0, sc=8) 00:32:40.289 starting I/O failed 00:32:40.289 Write completed with error (sct=0, sc=8) 00:32:40.289 starting I/O failed 00:32:40.289 Read completed with error (sct=0, sc=8) 00:32:40.289 starting I/O failed 00:32:40.289 Write completed with error (sct=0, sc=8) 00:32:40.289 starting I/O failed 00:32:40.289 Write completed with error (sct=0, sc=8) 00:32:40.289 starting I/O failed 00:32:40.289 Read completed with error (sct=0, sc=8) 00:32:40.289 starting I/O failed 00:32:40.289 Read completed with error (sct=0, sc=8) 00:32:40.289 starting I/O failed 00:32:40.289 Read completed with error (sct=0, sc=8) 00:32:40.289 starting I/O failed 00:32:40.289 Write completed with error (sct=0, sc=8) 00:32:40.289 starting I/O failed 00:32:40.289 Write completed with error (sct=0, sc=8) 00:32:40.289 starting I/O failed 00:32:40.289 Write completed with error (sct=0, sc=8) 00:32:40.289 starting I/O failed 00:32:40.289 Read completed with error (sct=0, sc=8) 00:32:40.289 starting I/O failed 00:32:40.289 Read completed with error (sct=0, sc=8) 00:32:40.289 starting I/O failed 00:32:40.289 Write completed with error (sct=0, sc=8) 00:32:40.289 starting I/O failed 00:32:40.289 Read 
completed with error (sct=0, sc=8) 00:32:40.289 starting I/O failed 00:32:40.289 Write completed with error (sct=0, sc=8) 00:32:40.289 starting I/O failed 00:32:40.289 Read completed with error (sct=0, sc=8) 00:32:40.289 starting I/O failed 00:32:40.289 Write completed with error (sct=0, sc=8) 00:32:40.289 starting I/O failed 00:32:40.289 Read completed with error (sct=0, sc=8) 00:32:40.289 starting I/O failed 00:32:40.289 Read completed with error (sct=0, sc=8) 00:32:40.289 starting I/O failed 00:32:40.289 [2024-07-24 23:18:12.635937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:40.289 Read completed with error (sct=0, sc=8) 00:32:40.289 starting I/O failed 00:32:40.289 Read completed with error (sct=0, sc=8) 00:32:40.289 starting I/O failed 00:32:40.289 Read completed with error (sct=0, sc=8) 00:32:40.289 starting I/O failed 00:32:40.289 Read completed with error (sct=0, sc=8) 00:32:40.289 starting I/O failed 00:32:40.289 Read completed with error (sct=0, sc=8) 00:32:40.289 starting I/O failed 00:32:40.289 Read completed with error (sct=0, sc=8) 00:32:40.289 starting I/O failed 00:32:40.289 Write completed with error (sct=0, sc=8) 00:32:40.289 starting I/O failed 00:32:40.289 Read completed with error (sct=0, sc=8) 00:32:40.289 starting I/O failed 00:32:40.289 Read completed with error (sct=0, sc=8) 00:32:40.289 starting I/O failed 00:32:40.289 Read completed with error (sct=0, sc=8) 00:32:40.289 starting I/O failed 00:32:40.289 Write completed with error (sct=0, sc=8) 00:32:40.289 starting I/O failed 00:32:40.289 Write completed with error (sct=0, sc=8) 00:32:40.289 starting I/O failed 00:32:40.289 Read completed with error (sct=0, sc=8) 00:32:40.289 starting I/O failed 00:32:40.289 Read completed with error (sct=0, sc=8) 00:32:40.289 starting I/O failed 00:32:40.289 Read completed with error (sct=0, sc=8) 00:32:40.289 starting I/O failed 00:32:40.289 Read completed with 
error (sct=0, sc=8) 00:32:40.289 starting I/O failed 00:32:40.289 Write completed with error (sct=0, sc=8) 00:32:40.289 starting I/O failed 00:32:40.289 Write completed with error (sct=0, sc=8) 00:32:40.289 starting I/O failed 00:32:40.289 Write completed with error (sct=0, sc=8) 00:32:40.289 starting I/O failed 00:32:40.289 Read completed with error (sct=0, sc=8) 00:32:40.289 starting I/O failed 00:32:40.289 Read completed with error (sct=0, sc=8) 00:32:40.289 starting I/O failed 00:32:40.289 Write completed with error (sct=0, sc=8) 00:32:40.289 starting I/O failed 00:32:40.289 Read completed with error (sct=0, sc=8) 00:32:40.289 starting I/O failed 00:32:40.289 Read completed with error (sct=0, sc=8) 00:32:40.289 starting I/O failed 00:32:40.289 Read completed with error (sct=0, sc=8) 00:32:40.289 starting I/O failed 00:32:40.289 Read completed with error (sct=0, sc=8) 00:32:40.289 starting I/O failed 00:32:40.289 Read completed with error (sct=0, sc=8) 00:32:40.289 starting I/O failed 00:32:40.289 Write completed with error (sct=0, sc=8) 00:32:40.289 starting I/O failed 00:32:40.289 Write completed with error (sct=0, sc=8) 00:32:40.289 starting I/O failed 00:32:40.289 Read completed with error (sct=0, sc=8) 00:32:40.289 starting I/O failed 00:32:40.289 Read completed with error (sct=0, sc=8) 00:32:40.289 starting I/O failed 00:32:40.289 Write completed with error (sct=0, sc=8) 00:32:40.289 starting I/O failed 00:32:40.289 [2024-07-24 23:18:12.636150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:40.289 Read completed with error (sct=0, sc=8) 00:32:40.289 starting I/O failed 00:32:40.289 Read completed with error (sct=0, sc=8) 00:32:40.289 starting I/O failed 00:32:40.289 Read completed with error (sct=0, sc=8) 00:32:40.289 starting I/O failed 00:32:40.289 Read completed with error (sct=0, sc=8) 00:32:40.289 starting I/O failed 00:32:40.289 Write completed with error (sct=0, 
sc=8) 00:32:40.289 starting I/O failed 00:32:40.289 Write completed with error (sct=0, sc=8) 00:32:40.289 starting I/O failed 00:32:40.289 Write completed with error (sct=0, sc=8) 00:32:40.290 starting I/O failed 00:32:40.290 Write completed with error (sct=0, sc=8) 00:32:40.290 starting I/O failed 00:32:40.290 Write completed with error (sct=0, sc=8) 00:32:40.290 starting I/O failed 00:32:40.290 Write completed with error (sct=0, sc=8) 00:32:40.290 starting I/O failed 00:32:40.290 Read completed with error (sct=0, sc=8) 00:32:40.290 starting I/O failed 00:32:40.290 Write completed with error (sct=0, sc=8) 00:32:40.290 starting I/O failed 00:32:40.290 Read completed with error (sct=0, sc=8) 00:32:40.290 starting I/O failed 00:32:40.290 Write completed with error (sct=0, sc=8) 00:32:40.290 starting I/O failed 00:32:40.290 Write completed with error (sct=0, sc=8) 00:32:40.290 starting I/O failed 00:32:40.290 Read completed with error (sct=0, sc=8) 00:32:40.290 starting I/O failed 00:32:40.290 Write completed with error (sct=0, sc=8) 00:32:40.290 starting I/O failed 00:32:40.290 Write completed with error (sct=0, sc=8) 00:32:40.290 starting I/O failed 00:32:40.290 Write completed with error (sct=0, sc=8) 00:32:40.290 starting I/O failed 00:32:40.290 Write completed with error (sct=0, sc=8) 00:32:40.290 starting I/O failed 00:32:40.290 Read completed with error (sct=0, sc=8) 00:32:40.290 starting I/O failed 00:32:40.290 Write completed with error (sct=0, sc=8) 00:32:40.290 starting I/O failed 00:32:40.290 Read completed with error (sct=0, sc=8) 00:32:40.290 starting I/O failed 00:32:40.290 Read completed with error (sct=0, sc=8) 00:32:40.290 starting I/O failed 00:32:40.290 Read completed with error (sct=0, sc=8) 00:32:40.290 starting I/O failed 00:32:40.290 Write completed with error (sct=0, sc=8) 00:32:40.290 starting I/O failed 00:32:40.290 Read completed with error (sct=0, sc=8) 00:32:40.290 starting I/O failed 00:32:40.290 Write completed with error (sct=0, sc=8) 
00:32:40.290 starting I/O failed 00:32:40.290 Write completed with error (sct=0, sc=8) 00:32:40.290 starting I/O failed 00:32:40.290 Read completed with error (sct=0, sc=8) 00:32:40.290 starting I/O failed 00:32:40.290 Write completed with error (sct=0, sc=8) 00:32:40.290 starting I/O failed 00:32:40.290 Read completed with error (sct=0, sc=8) 00:32:40.290 starting I/O failed 00:32:40.290 [2024-07-24 23:18:12.636366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:40.290 [2024-07-24 23:18:12.636739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.290 [2024-07-24 23:18:12.637034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.290 [2024-07-24 23:18:12.637079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.290 qpair failed and we were unable to recover it. 00:32:40.290 [2024-07-24 23:18:12.637463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.290 [2024-07-24 23:18:12.637856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.290 [2024-07-24 23:18:12.637897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.290 qpair failed and we were unable to recover it. 00:32:40.290 [2024-07-24 23:18:12.638269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.290 [2024-07-24 23:18:12.638629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.290 [2024-07-24 23:18:12.638668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.290 qpair failed and we were unable to recover it. 
00:32:40.290 [2024-07-24 23:18:12.639095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.290 [2024-07-24 23:18:12.639349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.290 [2024-07-24 23:18:12.639388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.290 qpair failed and we were unable to recover it. 00:32:40.290 [2024-07-24 23:18:12.639703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.290 [2024-07-24 23:18:12.639934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.290 [2024-07-24 23:18:12.639974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.290 qpair failed and we were unable to recover it. 00:32:40.290 [2024-07-24 23:18:12.640273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.290 [2024-07-24 23:18:12.640638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.290 [2024-07-24 23:18:12.640678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.290 qpair failed and we were unable to recover it. 00:32:40.290 [2024-07-24 23:18:12.640900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.290 [2024-07-24 23:18:12.641130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.290 [2024-07-24 23:18:12.641169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.290 qpair failed and we were unable to recover it. 
00:32:40.290 [2024-07-24 23:18:12.641505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.290 [2024-07-24 23:18:12.641859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.290 [2024-07-24 23:18:12.641900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.290 qpair failed and we were unable to recover it. 00:32:40.290 [2024-07-24 23:18:12.642310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.290 [2024-07-24 23:18:12.642533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.290 [2024-07-24 23:18:12.642572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.290 qpair failed and we were unable to recover it. 00:32:40.290 [2024-07-24 23:18:12.642873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.290 [2024-07-24 23:18:12.643238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.290 [2024-07-24 23:18:12.643277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.290 qpair failed and we were unable to recover it. 00:32:40.290 [2024-07-24 23:18:12.643665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.290 [2024-07-24 23:18:12.643973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.290 [2024-07-24 23:18:12.644014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.290 qpair failed and we were unable to recover it. 
00:32:40.290 [2024-07-24 23:18:12.644395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.290 [2024-07-24 23:18:12.644694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.290 [2024-07-24 23:18:12.644744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.290 qpair failed and we were unable to recover it. 00:32:40.290 [2024-07-24 23:18:12.644985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.290 [2024-07-24 23:18:12.645232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.290 [2024-07-24 23:18:12.645249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.290 qpair failed and we were unable to recover it. 00:32:40.290 [2024-07-24 23:18:12.645589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.290 [2024-07-24 23:18:12.645865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.290 [2024-07-24 23:18:12.645905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.290 qpair failed and we were unable to recover it. 00:32:40.290 [2024-07-24 23:18:12.646208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.290 [2024-07-24 23:18:12.646520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.290 [2024-07-24 23:18:12.646560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.290 qpair failed and we were unable to recover it. 
00:32:40.290 [2024-07-24 23:18:12.646926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.290 [2024-07-24 23:18:12.647255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.290 [2024-07-24 23:18:12.647294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.290 qpair failed and we were unable to recover it. 00:32:40.290 [2024-07-24 23:18:12.647608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.290 [2024-07-24 23:18:12.647976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.290 [2024-07-24 23:18:12.648016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.290 qpair failed and we were unable to recover it. 00:32:40.290 [2024-07-24 23:18:12.648262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.290 [2024-07-24 23:18:12.648564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.290 [2024-07-24 23:18:12.648604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.290 qpair failed and we were unable to recover it. 00:32:40.290 [2024-07-24 23:18:12.648968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.290 [2024-07-24 23:18:12.649201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.290 [2024-07-24 23:18:12.649218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.290 qpair failed and we were unable to recover it. 
00:32:40.290 [2024-07-24 23:18:12.649571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.290 [2024-07-24 23:18:12.649954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.290 [2024-07-24 23:18:12.649994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.290 qpair failed and we were unable to recover it. 00:32:40.290 [2024-07-24 23:18:12.650299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.290 [2024-07-24 23:18:12.650640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.290 [2024-07-24 23:18:12.650680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.290 qpair failed and we were unable to recover it. 00:32:40.290 [2024-07-24 23:18:12.650983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.291 [2024-07-24 23:18:12.651217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.291 [2024-07-24 23:18:12.651256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.291 qpair failed and we were unable to recover it. 00:32:40.291 [2024-07-24 23:18:12.651575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.291 [2024-07-24 23:18:12.651943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.291 [2024-07-24 23:18:12.651983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.291 qpair failed and we were unable to recover it. 
00:32:40.291 [2024-07-24 23:18:12.652346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.291 [2024-07-24 23:18:12.652663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.291 [2024-07-24 23:18:12.652703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.291 qpair failed and we were unable to recover it. 00:32:40.291 [2024-07-24 23:18:12.653024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.291 [2024-07-24 23:18:12.653313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.291 [2024-07-24 23:18:12.653330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.291 qpair failed and we were unable to recover it. 00:32:40.291 [2024-07-24 23:18:12.653644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.291 [2024-07-24 23:18:12.653937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.291 [2024-07-24 23:18:12.653955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.291 qpair failed and we were unable to recover it. 00:32:40.291 [2024-07-24 23:18:12.654278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.291 [2024-07-24 23:18:12.654597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.291 [2024-07-24 23:18:12.654614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.291 qpair failed and we were unable to recover it. 
00:32:40.291 [2024-07-24 23:18:12.654938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.291 [2024-07-24 23:18:12.655190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.291 [2024-07-24 23:18:12.655207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.291 qpair failed and we were unable to recover it. 00:32:40.291 [2024-07-24 23:18:12.655539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.291 [2024-07-24 23:18:12.655800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.291 [2024-07-24 23:18:12.655817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.291 qpair failed and we were unable to recover it. 00:32:40.291 [2024-07-24 23:18:12.656117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.291 [2024-07-24 23:18:12.656360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.291 [2024-07-24 23:18:12.656377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.291 qpair failed and we were unable to recover it. 00:32:40.291 [2024-07-24 23:18:12.656588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.291 [2024-07-24 23:18:12.656907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.291 [2024-07-24 23:18:12.656924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.291 qpair failed and we were unable to recover it. 
00:32:40.291 [2024-07-24 23:18:12.657118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.291 [2024-07-24 23:18:12.657360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.291 [2024-07-24 23:18:12.657377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.291 qpair failed and we were unable to recover it. 00:32:40.291 [2024-07-24 23:18:12.657707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.291 [2024-07-24 23:18:12.658021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.291 [2024-07-24 23:18:12.658039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.291 qpair failed and we were unable to recover it. 00:32:40.291 [2024-07-24 23:18:12.658392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.291 [2024-07-24 23:18:12.658746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.291 [2024-07-24 23:18:12.658786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.291 qpair failed and we were unable to recover it. 00:32:40.291 [2024-07-24 23:18:12.659164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.291 [2024-07-24 23:18:12.659504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.291 [2024-07-24 23:18:12.659543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.291 qpair failed and we were unable to recover it. 
00:32:40.291 [2024-07-24 23:18:12.659867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.291 [2024-07-24 23:18:12.660222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.291 [2024-07-24 23:18:12.660261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.291 qpair failed and we were unable to recover it. 00:32:40.291 [2024-07-24 23:18:12.660646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.291 [2024-07-24 23:18:12.660968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.291 [2024-07-24 23:18:12.660985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.291 qpair failed and we were unable to recover it. 00:32:40.291 [2024-07-24 23:18:12.661235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.291 [2024-07-24 23:18:12.661508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.291 [2024-07-24 23:18:12.661554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.291 qpair failed and we were unable to recover it. 00:32:40.291 [2024-07-24 23:18:12.661954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.291 [2024-07-24 23:18:12.662322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.291 [2024-07-24 23:18:12.662363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.291 qpair failed and we were unable to recover it. 
00:32:40.291 [2024-07-24 23:18:12.662671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.291 [2024-07-24 23:18:12.663018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.291 [2024-07-24 23:18:12.663059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.291 qpair failed and we were unable to recover it. 00:32:40.291 [2024-07-24 23:18:12.663435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.291 [2024-07-24 23:18:12.663802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.291 [2024-07-24 23:18:12.663843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.291 qpair failed and we were unable to recover it. 00:32:40.291 [2024-07-24 23:18:12.664219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.291 [2024-07-24 23:18:12.664524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.291 [2024-07-24 23:18:12.664563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.291 qpair failed and we were unable to recover it. 00:32:40.291 [2024-07-24 23:18:12.664848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.291 [2024-07-24 23:18:12.665142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.291 [2024-07-24 23:18:12.665181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.291 qpair failed and we were unable to recover it. 
00:32:40.291 [2024-07-24 23:18:12.665578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.291 [2024-07-24 23:18:12.665943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.291 [2024-07-24 23:18:12.665983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.291 qpair failed and we were unable to recover it. 00:32:40.291 [2024-07-24 23:18:12.666359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.291 [2024-07-24 23:18:12.666663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.291 [2024-07-24 23:18:12.666702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.291 qpair failed and we were unable to recover it. 00:32:40.291 [2024-07-24 23:18:12.667092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.291 [2024-07-24 23:18:12.667436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.291 [2024-07-24 23:18:12.667453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.291 qpair failed and we were unable to recover it. 00:32:40.291 [2024-07-24 23:18:12.667708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.291 [2024-07-24 23:18:12.668046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.291 [2024-07-24 23:18:12.668087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.291 qpair failed and we were unable to recover it. 
00:32:40.291 [2024-07-24 23:18:12.668385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.291 [2024-07-24 23:18:12.668666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.291 [2024-07-24 23:18:12.668711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.291 qpair failed and we were unable to recover it. 00:32:40.291 [2024-07-24 23:18:12.669041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.291 [2024-07-24 23:18:12.669387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.291 [2024-07-24 23:18:12.669427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.292 qpair failed and we were unable to recover it. 00:32:40.292 [2024-07-24 23:18:12.669713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.292 [2024-07-24 23:18:12.670107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.292 [2024-07-24 23:18:12.670147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.292 qpair failed and we were unable to recover it. 00:32:40.292 [2024-07-24 23:18:12.670434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.292 [2024-07-24 23:18:12.670755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.292 [2024-07-24 23:18:12.670773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.292 qpair failed and we were unable to recover it. 
00:32:40.292 [2024-07-24 23:18:12.671096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.292 [2024-07-24 23:18:12.671320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.292 [2024-07-24 23:18:12.671358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.292 qpair failed and we were unable to recover it. 00:32:40.292 [2024-07-24 23:18:12.671712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.292 [2024-07-24 23:18:12.672063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.292 [2024-07-24 23:18:12.672102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.292 qpair failed and we were unable to recover it. 00:32:40.292 [2024-07-24 23:18:12.672477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.292 [2024-07-24 23:18:12.672782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.292 [2024-07-24 23:18:12.672822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.292 qpair failed and we were unable to recover it. 00:32:40.292 [2024-07-24 23:18:12.673198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.292 [2024-07-24 23:18:12.673498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.292 [2024-07-24 23:18:12.673537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.292 qpair failed and we were unable to recover it. 
00:32:40.292 [2024-07-24 23:18:12.673890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.292 [2024-07-24 23:18:12.674256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.292 [2024-07-24 23:18:12.674295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.292 qpair failed and we were unable to recover it. 00:32:40.292 [2024-07-24 23:18:12.674672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.292 [2024-07-24 23:18:12.675051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.292 [2024-07-24 23:18:12.675091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.292 qpair failed and we were unable to recover it. 00:32:40.292 [2024-07-24 23:18:12.675473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.292 [2024-07-24 23:18:12.675793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.292 [2024-07-24 23:18:12.675817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.292 qpair failed and we were unable to recover it. 00:32:40.292 [2024-07-24 23:18:12.676099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.292 [2024-07-24 23:18:12.676391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.292 [2024-07-24 23:18:12.676430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.292 qpair failed and we were unable to recover it. 
00:32:40.292 [2024-07-24 23:18:12.676723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.292 [2024-07-24 23:18:12.677000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.292 [2024-07-24 23:18:12.677018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.292 qpair failed and we were unable to recover it. 00:32:40.292 [2024-07-24 23:18:12.677335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.292 [2024-07-24 23:18:12.677524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.292 [2024-07-24 23:18:12.677568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.292 qpair failed and we were unable to recover it. 00:32:40.292 [2024-07-24 23:18:12.677948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.292 [2024-07-24 23:18:12.678240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.292 [2024-07-24 23:18:12.678279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.292 qpair failed and we were unable to recover it. 00:32:40.292 [2024-07-24 23:18:12.678632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.292 [2024-07-24 23:18:12.678925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.292 [2024-07-24 23:18:12.678965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.292 qpair failed and we were unable to recover it. 
00:32:40.292 [2024-07-24 23:18:12.679285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.292 [2024-07-24 23:18:12.679527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.292 [2024-07-24 23:18:12.679566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.292 qpair failed and we were unable to recover it. 00:32:40.292 [2024-07-24 23:18:12.679849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.292 [2024-07-24 23:18:12.680232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.292 [2024-07-24 23:18:12.680272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.292 qpair failed and we were unable to recover it. 00:32:40.292 [2024-07-24 23:18:12.680564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.292 [2024-07-24 23:18:12.680907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.292 [2024-07-24 23:18:12.680948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.292 qpair failed and we were unable to recover it. 00:32:40.292 [2024-07-24 23:18:12.681243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.292 [2024-07-24 23:18:12.681576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.292 [2024-07-24 23:18:12.681615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.292 qpair failed and we were unable to recover it. 
00:32:40.292 [2024-07-24 23:18:12.681988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.292 [2024-07-24 23:18:12.682322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.292 [2024-07-24 23:18:12.682367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.292 qpair failed and we were unable to recover it. 00:32:40.292 [2024-07-24 23:18:12.682670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.292 [2024-07-24 23:18:12.683039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.292 [2024-07-24 23:18:12.683080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.292 qpair failed and we were unable to recover it. 00:32:40.292 [2024-07-24 23:18:12.683456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.292 [2024-07-24 23:18:12.683822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.292 [2024-07-24 23:18:12.683862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.292 qpair failed and we were unable to recover it. 00:32:40.292 [2024-07-24 23:18:12.684219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.292 [2024-07-24 23:18:12.684591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.292 [2024-07-24 23:18:12.684630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.292 qpair failed and we were unable to recover it. 
00:32:40.292 [2024-07-24 23:18:12.685005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.292 [2024-07-24 23:18:12.685376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.292 [2024-07-24 23:18:12.685415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.292 qpair failed and we were unable to recover it. 00:32:40.292 [2024-07-24 23:18:12.685731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.292 [2024-07-24 23:18:12.686096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.292 [2024-07-24 23:18:12.686136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.292 qpair failed and we were unable to recover it. 00:32:40.292 [2024-07-24 23:18:12.686510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.292 [2024-07-24 23:18:12.686855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.292 [2024-07-24 23:18:12.686894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.292 qpair failed and we were unable to recover it. 00:32:40.292 [2024-07-24 23:18:12.687269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.292 [2024-07-24 23:18:12.687612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.292 [2024-07-24 23:18:12.687651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.292 qpair failed and we were unable to recover it. 
00:32:40.293 [2024-07-24 23:18:12.688019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.293 [2024-07-24 23:18:12.688381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.293 [2024-07-24 23:18:12.688398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.293 qpair failed and we were unable to recover it. 00:32:40.293 [2024-07-24 23:18:12.688631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.293 [2024-07-24 23:18:12.688993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.293 [2024-07-24 23:18:12.689034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.293 qpair failed and we were unable to recover it. 00:32:40.293 [2024-07-24 23:18:12.689412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.293 [2024-07-24 23:18:12.689781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.293 [2024-07-24 23:18:12.689822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.293 qpair failed and we were unable to recover it. 00:32:40.293 [2024-07-24 23:18:12.690130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.293 [2024-07-24 23:18:12.690498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.293 [2024-07-24 23:18:12.690537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.293 qpair failed and we were unable to recover it. 
00:32:40.293 [2024-07-24 23:18:12.690915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.293 [2024-07-24 23:18:12.691260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.293 [2024-07-24 23:18:12.691300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.293 qpair failed and we were unable to recover it. 00:32:40.293 [2024-07-24 23:18:12.691675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.293 [2024-07-24 23:18:12.692029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.293 [2024-07-24 23:18:12.692069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.293 qpair failed and we were unable to recover it. 00:32:40.293 [2024-07-24 23:18:12.692447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.293 [2024-07-24 23:18:12.692756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.293 [2024-07-24 23:18:12.692798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.293 qpair failed and we were unable to recover it. 00:32:40.293 [2024-07-24 23:18:12.693167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.293 [2024-07-24 23:18:12.693529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.293 [2024-07-24 23:18:12.693569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.293 qpair failed and we were unable to recover it. 
00:32:40.293 [2024-07-24 23:18:12.693931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.293 [2024-07-24 23:18:12.694227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.293 [2024-07-24 23:18:12.694267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.293 qpair failed and we were unable to recover it.
00:32:40.293 [2024-07-24 23:18:12.694619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.293 [2024-07-24 23:18:12.694901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.293 [2024-07-24 23:18:12.694942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.293 qpair failed and we were unable to recover it.
00:32:40.293 [2024-07-24 23:18:12.695254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.293 [2024-07-24 23:18:12.695652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.293 [2024-07-24 23:18:12.695692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.293 qpair failed and we were unable to recover it.
00:32:40.293 [2024-07-24 23:18:12.696086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.293 [2024-07-24 23:18:12.696364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.293 [2024-07-24 23:18:12.696381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.293 qpair failed and we were unable to recover it.
00:32:40.293 [2024-07-24 23:18:12.696615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.293 [2024-07-24 23:18:12.696948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.293 [2024-07-24 23:18:12.696989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.293 qpair failed and we were unable to recover it.
00:32:40.293 [2024-07-24 23:18:12.697313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.293 [2024-07-24 23:18:12.697678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.293 [2024-07-24 23:18:12.697729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.293 qpair failed and we were unable to recover it.
00:32:40.293 [2024-07-24 23:18:12.698020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.293 [2024-07-24 23:18:12.698397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.293 [2024-07-24 23:18:12.698436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.293 qpair failed and we were unable to recover it.
00:32:40.293 [2024-07-24 23:18:12.698765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.293 [2024-07-24 23:18:12.698966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.293 [2024-07-24 23:18:12.698984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.293 qpair failed and we were unable to recover it.
00:32:40.293 [2024-07-24 23:18:12.699249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.293 [2024-07-24 23:18:12.699636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.293 [2024-07-24 23:18:12.699676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.293 qpair failed and we were unable to recover it.
00:32:40.293 [2024-07-24 23:18:12.700013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.293 [2024-07-24 23:18:12.700292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.293 [2024-07-24 23:18:12.700331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.293 qpair failed and we were unable to recover it.
00:32:40.293 [2024-07-24 23:18:12.700638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.293 [2024-07-24 23:18:12.701008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.293 [2024-07-24 23:18:12.701048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.293 qpair failed and we were unable to recover it.
00:32:40.293 [2024-07-24 23:18:12.701402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.293 [2024-07-24 23:18:12.701696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.293 [2024-07-24 23:18:12.701745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.293 qpair failed and we were unable to recover it.
00:32:40.293 [2024-07-24 23:18:12.702052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.293 [2024-07-24 23:18:12.702381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.293 [2024-07-24 23:18:12.702421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.293 qpair failed and we were unable to recover it.
00:32:40.293 [2024-07-24 23:18:12.702758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.293 [2024-07-24 23:18:12.703098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.293 [2024-07-24 23:18:12.703138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.293 qpair failed and we were unable to recover it.
00:32:40.293 [2024-07-24 23:18:12.703364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.294 [2024-07-24 23:18:12.703670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.294 [2024-07-24 23:18:12.703710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.294 qpair failed and we were unable to recover it.
00:32:40.294 [2024-07-24 23:18:12.704103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.294 [2024-07-24 23:18:12.704447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.294 [2024-07-24 23:18:12.704465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.294 qpair failed and we were unable to recover it.
00:32:40.294 [2024-07-24 23:18:12.704702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.294 [2024-07-24 23:18:12.704953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.294 [2024-07-24 23:18:12.704971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.294 qpair failed and we were unable to recover it.
00:32:40.294 [2024-07-24 23:18:12.705314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.294 [2024-07-24 23:18:12.705630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.294 [2024-07-24 23:18:12.705669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.294 qpair failed and we were unable to recover it.
00:32:40.294 [2024-07-24 23:18:12.706024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.294 [2024-07-24 23:18:12.706376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.294 [2024-07-24 23:18:12.706415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.294 qpair failed and we were unable to recover it.
00:32:40.294 [2024-07-24 23:18:12.706739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.294 [2024-07-24 23:18:12.707096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.294 [2024-07-24 23:18:12.707135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.294 qpair failed and we were unable to recover it.
00:32:40.294 [2024-07-24 23:18:12.707522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.294 [2024-07-24 23:18:12.707874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.294 [2024-07-24 23:18:12.707914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.294 qpair failed and we were unable to recover it.
00:32:40.294 [2024-07-24 23:18:12.708300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.294 [2024-07-24 23:18:12.708595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.294 [2024-07-24 23:18:12.708635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.294 qpair failed and we were unable to recover it.
00:32:40.294 [2024-07-24 23:18:12.708970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.294 [2024-07-24 23:18:12.709314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.294 [2024-07-24 23:18:12.709354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.294 qpair failed and we were unable to recover it.
00:32:40.294 [2024-07-24 23:18:12.709656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.294 [2024-07-24 23:18:12.710036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.294 [2024-07-24 23:18:12.710095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.294 qpair failed and we were unable to recover it.
00:32:40.294 [2024-07-24 23:18:12.710416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.294 [2024-07-24 23:18:12.710655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.294 [2024-07-24 23:18:12.710673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.294 qpair failed and we were unable to recover it.
00:32:40.294 [2024-07-24 23:18:12.711019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.294 [2024-07-24 23:18:12.711370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.294 [2024-07-24 23:18:12.711388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.294 qpair failed and we were unable to recover it.
00:32:40.294 [2024-07-24 23:18:12.711677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.294 [2024-07-24 23:18:12.712034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.294 [2024-07-24 23:18:12.712076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.294 qpair failed and we were unable to recover it.
00:32:40.294 [2024-07-24 23:18:12.712428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.294 [2024-07-24 23:18:12.712804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.294 [2024-07-24 23:18:12.712865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.294 qpair failed and we were unable to recover it.
00:32:40.294 [2024-07-24 23:18:12.713270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.294 [2024-07-24 23:18:12.713556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.294 [2024-07-24 23:18:12.713575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.294 qpair failed and we were unable to recover it.
00:32:40.294 [2024-07-24 23:18:12.713838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.294 [2024-07-24 23:18:12.714159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.294 [2024-07-24 23:18:12.714199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.294 qpair failed and we were unable to recover it.
00:32:40.294 [2024-07-24 23:18:12.714561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.294 [2024-07-24 23:18:12.714914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.294 [2024-07-24 23:18:12.714971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.294 qpair failed and we were unable to recover it.
00:32:40.294 [2024-07-24 23:18:12.715377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.561 [2024-07-24 23:18:12.715641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.561 [2024-07-24 23:18:12.715659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.561 qpair failed and we were unable to recover it.
00:32:40.561 [2024-07-24 23:18:12.716026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.561 [2024-07-24 23:18:12.716306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.561 [2024-07-24 23:18:12.716324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.561 qpair failed and we were unable to recover it.
00:32:40.561 [2024-07-24 23:18:12.716631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.561 [2024-07-24 23:18:12.716932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.561 [2024-07-24 23:18:12.716951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.561 qpair failed and we were unable to recover it.
00:32:40.561 [2024-07-24 23:18:12.717312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.561 [2024-07-24 23:18:12.717660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.561 [2024-07-24 23:18:12.717700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.561 qpair failed and we were unable to recover it.
00:32:40.561 [2024-07-24 23:18:12.718097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.561 [2024-07-24 23:18:12.718384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.561 [2024-07-24 23:18:12.718424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.561 qpair failed and we were unable to recover it.
00:32:40.561 [2024-07-24 23:18:12.718806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.561 [2024-07-24 23:18:12.719155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.561 [2024-07-24 23:18:12.719196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.561 qpair failed and we were unable to recover it.
00:32:40.561 [2024-07-24 23:18:12.719518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.561 [2024-07-24 23:18:12.719879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.561 [2024-07-24 23:18:12.719898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.561 qpair failed and we were unable to recover it.
00:32:40.561 [2024-07-24 23:18:12.720227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.561 [2024-07-24 23:18:12.720559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.561 [2024-07-24 23:18:12.720599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.561 qpair failed and we were unable to recover it.
00:32:40.561 [2024-07-24 23:18:12.720908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.561 [2024-07-24 23:18:12.721286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.561 [2024-07-24 23:18:12.721326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.561 qpair failed and we were unable to recover it.
00:32:40.561 [2024-07-24 23:18:12.721702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.561 [2024-07-24 23:18:12.722031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.561 [2024-07-24 23:18:12.722071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.561 qpair failed and we were unable to recover it.
00:32:40.561 [2024-07-24 23:18:12.722450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.561 [2024-07-24 23:18:12.722825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.561 [2024-07-24 23:18:12.722865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.561 qpair failed and we were unable to recover it.
00:32:40.561 [2024-07-24 23:18:12.723231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.561 [2024-07-24 23:18:12.723546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.561 [2024-07-24 23:18:12.723586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.561 qpair failed and we were unable to recover it.
00:32:40.561 [2024-07-24 23:18:12.723942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.561 [2024-07-24 23:18:12.724287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.561 [2024-07-24 23:18:12.724327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.561 qpair failed and we were unable to recover it.
00:32:40.561 [2024-07-24 23:18:12.724627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.561 [2024-07-24 23:18:12.724908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.561 [2024-07-24 23:18:12.724950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.561 qpair failed and we were unable to recover it.
00:32:40.561 [2024-07-24 23:18:12.725312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.561 [2024-07-24 23:18:12.725694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.561 [2024-07-24 23:18:12.725752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.561 qpair failed and we were unable to recover it.
00:32:40.561 [2024-07-24 23:18:12.726062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.561 [2024-07-24 23:18:12.726436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.561 [2024-07-24 23:18:12.726461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.561 qpair failed and we were unable to recover it.
00:32:40.561 [2024-07-24 23:18:12.726802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.561 [2024-07-24 23:18:12.727050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.561 [2024-07-24 23:18:12.727069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.561 qpair failed and we were unable to recover it.
00:32:40.561 [2024-07-24 23:18:12.727400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.561 [2024-07-24 23:18:12.727769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.561 [2024-07-24 23:18:12.727810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.561 qpair failed and we were unable to recover it.
00:32:40.561 [2024-07-24 23:18:12.728170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.561 [2024-07-24 23:18:12.728496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.561 [2024-07-24 23:18:12.728535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.561 qpair failed and we were unable to recover it.
00:32:40.561 [2024-07-24 23:18:12.728897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.561 [2024-07-24 23:18:12.729272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.561 [2024-07-24 23:18:12.729311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.561 qpair failed and we were unable to recover it.
00:32:40.561 [2024-07-24 23:18:12.729690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.561 [2024-07-24 23:18:12.730026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.561 [2024-07-24 23:18:12.730067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.561 qpair failed and we were unable to recover it.
00:32:40.561 [2024-07-24 23:18:12.730376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.561 [2024-07-24 23:18:12.730689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.561 [2024-07-24 23:18:12.730736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.561 qpair failed and we were unable to recover it.
00:32:40.561 [2024-07-24 23:18:12.731031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.561 [2024-07-24 23:18:12.731405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.561 [2024-07-24 23:18:12.731444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.561 qpair failed and we were unable to recover it.
00:32:40.561 [2024-07-24 23:18:12.731825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.561 [2024-07-24 23:18:12.732106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.561 [2024-07-24 23:18:12.732146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.561 qpair failed and we were unable to recover it.
00:32:40.561 [2024-07-24 23:18:12.732395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.561 [2024-07-24 23:18:12.732692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.561 [2024-07-24 23:18:12.732754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.561 qpair failed and we were unable to recover it.
00:32:40.561 [2024-07-24 23:18:12.733079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.561 [2024-07-24 23:18:12.733396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.561 [2024-07-24 23:18:12.733436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.561 qpair failed and we were unable to recover it.
00:32:40.561 [2024-07-24 23:18:12.733835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.561 [2024-07-24 23:18:12.734145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.561 [2024-07-24 23:18:12.734184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.561 qpair failed and we were unable to recover it.
00:32:40.561 [2024-07-24 23:18:12.734492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.562 [2024-07-24 23:18:12.734852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.562 [2024-07-24 23:18:12.734893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.562 qpair failed and we were unable to recover it.
00:32:40.562 [2024-07-24 23:18:12.735280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.562 [2024-07-24 23:18:12.735680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.562 [2024-07-24 23:18:12.735728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.562 qpair failed and we were unable to recover it.
00:32:40.562 [2024-07-24 23:18:12.736090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.562 [2024-07-24 23:18:12.736291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.562 [2024-07-24 23:18:12.736309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.562 qpair failed and we were unable to recover it.
00:32:40.562 [2024-07-24 23:18:12.736670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.562 [2024-07-24 23:18:12.736918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.562 [2024-07-24 23:18:12.736958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.562 qpair failed and we were unable to recover it.
00:32:40.562 [2024-07-24 23:18:12.737252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.562 [2024-07-24 23:18:12.737604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.562 [2024-07-24 23:18:12.737644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.562 qpair failed and we were unable to recover it.
00:32:40.562 [2024-07-24 23:18:12.738057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.562 [2024-07-24 23:18:12.738412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.562 [2024-07-24 23:18:12.738453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.562 qpair failed and we were unable to recover it.
00:32:40.562 [2024-07-24 23:18:12.738815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.562 [2024-07-24 23:18:12.739035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.562 [2024-07-24 23:18:12.739075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.562 qpair failed and we were unable to recover it.
00:32:40.562 [2024-07-24 23:18:12.739472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.562 [2024-07-24 23:18:12.739820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.562 [2024-07-24 23:18:12.739860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.562 qpair failed and we were unable to recover it.
00:32:40.562 [2024-07-24 23:18:12.740251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.562 [2024-07-24 23:18:12.740622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.562 [2024-07-24 23:18:12.740662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.562 qpair failed and we were unable to recover it.
00:32:40.562 [2024-07-24 23:18:12.740929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.562 [2024-07-24 23:18:12.741234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.562 [2024-07-24 23:18:12.741274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.562 qpair failed and we were unable to recover it.
00:32:40.562 [2024-07-24 23:18:12.741656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.562 [2024-07-24 23:18:12.742011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.562 [2024-07-24 23:18:12.742051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.562 qpair failed and we were unable to recover it.
00:32:40.562 [2024-07-24 23:18:12.742374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.562 [2024-07-24 23:18:12.742698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.562 [2024-07-24 23:18:12.742749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.562 qpair failed and we were unable to recover it.
00:32:40.562 [2024-07-24 23:18:12.743098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.562 [2024-07-24 23:18:12.743478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.562 [2024-07-24 23:18:12.743517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.562 qpair failed and we were unable to recover it.
00:32:40.562 [2024-07-24 23:18:12.743820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.562 [2024-07-24 23:18:12.744055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.562 [2024-07-24 23:18:12.744093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.562 qpair failed and we were unable to recover it.
00:32:40.562 [2024-07-24 23:18:12.744422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.562 [2024-07-24 23:18:12.744795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.562 [2024-07-24 23:18:12.744836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.562 qpair failed and we were unable to recover it.
00:32:40.562 [2024-07-24 23:18:12.745234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.562 [2024-07-24 23:18:12.745642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.562 [2024-07-24 23:18:12.745683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.562 qpair failed and we were unable to recover it.
00:32:40.562 [2024-07-24 23:18:12.745984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.562 [2024-07-24 23:18:12.746289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.562 [2024-07-24 23:18:12.746336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.562 qpair failed and we were unable to recover it.
00:32:40.562 [2024-07-24 23:18:12.746643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.562 [2024-07-24 23:18:12.746937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.562 [2024-07-24 23:18:12.746978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.562 qpair failed and we were unable to recover it.
00:32:40.562 [2024-07-24 23:18:12.747340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.562 [2024-07-24 23:18:12.747655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.562 [2024-07-24 23:18:12.747694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.562 qpair failed and we were unable to recover it.
00:32:40.562 [2024-07-24 23:18:12.748091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.562 [2024-07-24 23:18:12.748388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.562 [2024-07-24 23:18:12.748406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.562 qpair failed and we were unable to recover it.
00:32:40.562 [2024-07-24 23:18:12.748647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.562 [2024-07-24 23:18:12.748963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.562 [2024-07-24 23:18:12.749004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.562 qpair failed and we were unable to recover it.
00:32:40.562 [2024-07-24 23:18:12.749342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.562 [2024-07-24 23:18:12.749588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.562 [2024-07-24 23:18:12.749629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.562 qpair failed and we were unable to recover it.
00:32:40.562 [2024-07-24 23:18:12.749990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.562 [2024-07-24 23:18:12.750369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.562 [2024-07-24 23:18:12.750409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.562 qpair failed and we were unable to recover it.
00:32:40.562 [2024-07-24 23:18:12.750807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.562 [2024-07-24 23:18:12.751116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.562 [2024-07-24 23:18:12.751155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.562 qpair failed and we were unable to recover it.
00:32:40.562 [2024-07-24 23:18:12.751478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.562 [2024-07-24 23:18:12.751786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.562 [2024-07-24 23:18:12.751827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.562 qpair failed and we were unable to recover it.
00:32:40.562 [2024-07-24 23:18:12.752215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.562 [2024-07-24 23:18:12.752639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.562 [2024-07-24 23:18:12.752680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.562 qpair failed and we were unable to recover it.
00:32:40.562 [2024-07-24 23:18:12.753015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.562 [2024-07-24 23:18:12.753354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.562 [2024-07-24 23:18:12.753373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.562 qpair failed and we were unable to recover it.
00:32:40.562 [2024-07-24 23:18:12.753633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.562 [2024-07-24 23:18:12.753989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.562 [2024-07-24 23:18:12.754031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.562 qpair failed and we were unable to recover it.
00:32:40.562 [2024-07-24 23:18:12.754403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.562 [2024-07-24 23:18:12.754758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.563 [2024-07-24 23:18:12.754800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.563 qpair failed and we were unable to recover it.
00:32:40.563 [2024-07-24 23:18:12.755095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.563 [2024-07-24 23:18:12.755286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.563 [2024-07-24 23:18:12.755326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.563 qpair failed and we were unable to recover it.
00:32:40.563 [2024-07-24 23:18:12.755619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.563 [2024-07-24 23:18:12.755982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.563 [2024-07-24 23:18:12.756023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.563 qpair failed and we were unable to recover it.
00:32:40.563 [2024-07-24 23:18:12.756374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.563 [2024-07-24 23:18:12.756676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.563 [2024-07-24 23:18:12.756739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.563 qpair failed and we were unable to recover it.
00:32:40.563 [2024-07-24 23:18:12.757033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.563 [2024-07-24 23:18:12.757433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.563 [2024-07-24 23:18:12.757473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.563 qpair failed and we were unable to recover it.
00:32:40.563 [2024-07-24 23:18:12.757807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.563 [2024-07-24 23:18:12.758139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.563 [2024-07-24 23:18:12.758181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.563 qpair failed and we were unable to recover it.
00:32:40.563 [2024-07-24 23:18:12.758433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.563 [2024-07-24 23:18:12.758738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.563 [2024-07-24 23:18:12.758780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.563 qpair failed and we were unable to recover it.
00:32:40.563 [2024-07-24 23:18:12.759112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.563 [2024-07-24 23:18:12.759482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.563 [2024-07-24 23:18:12.759523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.563 qpair failed and we were unable to recover it.
00:32:40.563 [2024-07-24 23:18:12.759891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.563 [2024-07-24 23:18:12.760208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.563 [2024-07-24 23:18:12.760249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.563 qpair failed and we were unable to recover it.
00:32:40.563 [2024-07-24 23:18:12.760639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.563 [2024-07-24 23:18:12.760968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.563 [2024-07-24 23:18:12.761011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.563 qpair failed and we were unable to recover it.
00:32:40.563 [2024-07-24 23:18:12.761442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.563 [2024-07-24 23:18:12.761797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.563 [2024-07-24 23:18:12.761838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.563 qpair failed and we were unable to recover it.
00:32:40.563 [2024-07-24 23:18:12.762227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.563 [2024-07-24 23:18:12.762524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.563 [2024-07-24 23:18:12.762564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.563 qpair failed and we were unable to recover it.
00:32:40.563 [2024-07-24 23:18:12.762948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.563 [2024-07-24 23:18:12.763251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.563 [2024-07-24 23:18:12.763269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.563 qpair failed and we were unable to recover it.
00:32:40.563 [2024-07-24 23:18:12.763522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.563 [2024-07-24 23:18:12.763782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.563 [2024-07-24 23:18:12.763801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.563 qpair failed and we were unable to recover it.
00:32:40.563 [2024-07-24 23:18:12.764113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.563 [2024-07-24 23:18:12.764376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.563 [2024-07-24 23:18:12.764395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.563 qpair failed and we were unable to recover it.
00:32:40.563 [2024-07-24 23:18:12.764738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.563 [2024-07-24 23:18:12.764995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.563 [2024-07-24 23:18:12.765014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.563 qpair failed and we were unable to recover it.
00:32:40.563 [2024-07-24 23:18:12.765277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.563 [2024-07-24 23:18:12.765580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.563 [2024-07-24 23:18:12.765599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.563 qpair failed and we were unable to recover it.
00:32:40.563 [2024-07-24 23:18:12.765921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.563 [2024-07-24 23:18:12.766227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.563 [2024-07-24 23:18:12.766247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.563 qpair failed and we were unable to recover it.
00:32:40.563 [2024-07-24 23:18:12.766532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.563 [2024-07-24 23:18:12.766793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.563 [2024-07-24 23:18:12.766834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.563 qpair failed and we were unable to recover it.
00:32:40.563 [2024-07-24 23:18:12.767227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.563 [2024-07-24 23:18:12.767651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.563 [2024-07-24 23:18:12.767692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.563 qpair failed and we were unable to recover it.
00:32:40.563 [2024-07-24 23:18:12.768107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.563 [2024-07-24 23:18:12.768407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.563 [2024-07-24 23:18:12.768426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.563 qpair failed and we were unable to recover it.
00:32:40.563 [2024-07-24 23:18:12.768769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.563 [2024-07-24 23:18:12.769090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.563 [2024-07-24 23:18:12.769130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.563 qpair failed and we were unable to recover it.
00:32:40.563 [2024-07-24 23:18:12.769434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.563 [2024-07-24 23:18:12.769694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.563 [2024-07-24 23:18:12.769723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.563 qpair failed and we were unable to recover it.
00:32:40.563 [2024-07-24 23:18:12.769994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.563 [2024-07-24 23:18:12.770250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.563 [2024-07-24 23:18:12.770300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.563 qpair failed and we were unable to recover it.
00:32:40.563 [2024-07-24 23:18:12.770686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.563 [2024-07-24 23:18:12.771099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.563 [2024-07-24 23:18:12.771141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.563 qpair failed and we were unable to recover it.
00:32:40.563 [2024-07-24 23:18:12.771547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.563 [2024-07-24 23:18:12.771934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.563 [2024-07-24 23:18:12.771976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.563 qpair failed and we were unable to recover it.
00:32:40.563 [2024-07-24 23:18:12.772383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.563 [2024-07-24 23:18:12.772667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.563 [2024-07-24 23:18:12.772707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.563 qpair failed and we were unable to recover it.
00:32:40.563 [2024-07-24 23:18:12.773048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.563 [2024-07-24 23:18:12.773430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.563 [2024-07-24 23:18:12.773472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.563 qpair failed and we were unable to recover it.
00:32:40.563 [2024-07-24 23:18:12.773801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.563 [2024-07-24 23:18:12.774110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.563 [2024-07-24 23:18:12.774150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.564 qpair failed and we were unable to recover it.
00:32:40.564 [2024-07-24 23:18:12.774536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.564 [2024-07-24 23:18:12.774940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.564 [2024-07-24 23:18:12.774987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.564 qpair failed and we were unable to recover it.
00:32:40.564 [2024-07-24 23:18:12.775324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.564 [2024-07-24 23:18:12.775609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.564 [2024-07-24 23:18:12.775650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.564 qpair failed and we were unable to recover it.
00:32:40.564 [2024-07-24 23:18:12.776070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.564 [2024-07-24 23:18:12.776373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.564 [2024-07-24 23:18:12.776414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.564 qpair failed and we were unable to recover it.
00:32:40.564 [2024-07-24 23:18:12.776737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.564 [2024-07-24 23:18:12.777041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.564 [2024-07-24 23:18:12.777082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.564 qpair failed and we were unable to recover it.
00:32:40.564 [2024-07-24 23:18:12.777345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.564 [2024-07-24 23:18:12.777664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.564 [2024-07-24 23:18:12.777705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.564 qpair failed and we were unable to recover it.
00:32:40.564 [2024-07-24 23:18:12.778112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.564 [2024-07-24 23:18:12.778369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.564 [2024-07-24 23:18:12.778417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.564 qpair failed and we were unable to recover it.
00:32:40.564 [2024-07-24 23:18:12.778729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.564 [2024-07-24 23:18:12.779112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.564 [2024-07-24 23:18:12.779152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.564 qpair failed and we were unable to recover it.
00:32:40.564 [2024-07-24 23:18:12.779555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.564 [2024-07-24 23:18:12.779934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.564 [2024-07-24 23:18:12.779984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.564 qpair failed and we were unable to recover it.
00:32:40.564 [2024-07-24 23:18:12.780269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.564 [2024-07-24 23:18:12.780662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.564 [2024-07-24 23:18:12.780703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.564 qpair failed and we were unable to recover it.
00:32:40.564 [2024-07-24 23:18:12.781051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.564 [2024-07-24 23:18:12.781360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.564 [2024-07-24 23:18:12.781411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.564 qpair failed and we were unable to recover it.
00:32:40.564 [2024-07-24 23:18:12.781580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.564 [2024-07-24 23:18:12.781940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.564 [2024-07-24 23:18:12.781994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.564 qpair failed and we were unable to recover it.
00:32:40.564 [2024-07-24 23:18:12.782315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.564 [2024-07-24 23:18:12.782616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.564 [2024-07-24 23:18:12.782657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.564 qpair failed and we were unable to recover it.
00:32:40.564 [2024-07-24 23:18:12.782959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.564 [2024-07-24 23:18:12.783200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.564 [2024-07-24 23:18:12.783218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.564 qpair failed and we were unable to recover it.
00:32:40.564 [2024-07-24 23:18:12.783563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.564 [2024-07-24 23:18:12.783917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.564 [2024-07-24 23:18:12.783960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.564 qpair failed and we were unable to recover it.
00:32:40.564 [2024-07-24 23:18:12.784304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.564 [2024-07-24 23:18:12.784625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.564 [2024-07-24 23:18:12.784666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.564 qpair failed and we were unable to recover it.
00:32:40.564 [2024-07-24 23:18:12.785046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.564 [2024-07-24 23:18:12.785404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.564 [2024-07-24 23:18:12.785445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.564 qpair failed and we were unable to recover it.
00:32:40.564 [2024-07-24 23:18:12.785738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.564 [2024-07-24 23:18:12.786097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.564 [2024-07-24 23:18:12.786138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.564 qpair failed and we were unable to recover it.
00:32:40.564 [2024-07-24 23:18:12.786409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.564 [2024-07-24 23:18:12.786640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.564 [2024-07-24 23:18:12.786682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.564 qpair failed and we were unable to recover it.
00:32:40.564 [2024-07-24 23:18:12.787028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.564 [2024-07-24 23:18:12.787342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.564 [2024-07-24 23:18:12.787382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.564 qpair failed and we were unable to recover it.
00:32:40.564 [2024-07-24 23:18:12.787692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.564 [2024-07-24 23:18:12.788017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.564 [2024-07-24 23:18:12.788058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.564 qpair failed and we were unable to recover it.
00:32:40.564 [2024-07-24 23:18:12.788456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.564 [2024-07-24 23:18:12.788813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.564 [2024-07-24 23:18:12.788862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.564 qpair failed and we were unable to recover it.
00:32:40.564 [2024-07-24 23:18:12.789186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.564 [2024-07-24 23:18:12.789595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.564 [2024-07-24 23:18:12.789635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.564 qpair failed and we were unable to recover it.
00:32:40.564 [2024-07-24 23:18:12.790054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.564 [2024-07-24 23:18:12.790453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.564 [2024-07-24 23:18:12.790472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.564 qpair failed and we were unable to recover it.
00:32:40.564 [2024-07-24 23:18:12.790790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.564 [2024-07-24 23:18:12.791141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.564 [2024-07-24 23:18:12.791182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.564 qpair failed and we were unable to recover it.
00:32:40.564 [2024-07-24 23:18:12.791489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.564 [2024-07-24 23:18:12.791774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.564 [2024-07-24 23:18:12.791815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.564 qpair failed and we were unable to recover it.
00:32:40.564 [2024-07-24 23:18:12.792207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.564 [2024-07-24 23:18:12.792534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.564 [2024-07-24 23:18:12.792576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.564 qpair failed and we were unable to recover it.
00:32:40.564 [2024-07-24 23:18:12.792966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.564 [2024-07-24 23:18:12.793377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.564 [2024-07-24 23:18:12.793419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.564 qpair failed and we were unable to recover it.
00:32:40.564 [2024-07-24 23:18:12.793800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.564 [2024-07-24 23:18:12.794132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.564 [2024-07-24 23:18:12.794173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.564 qpair failed and we were unable to recover it.
00:32:40.564 [2024-07-24 23:18:12.794475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.565 [2024-07-24 23:18:12.794813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.565 [2024-07-24 23:18:12.794855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.565 qpair failed and we were unable to recover it.
00:32:40.565 [2024-07-24 23:18:12.795236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.565 [2024-07-24 23:18:12.795515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.565 [2024-07-24 23:18:12.795556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.565 qpair failed and we were unable to recover it.
00:32:40.565 [2024-07-24 23:18:12.795961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.565 [2024-07-24 23:18:12.796208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.565 [2024-07-24 23:18:12.796256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.565 qpair failed and we were unable to recover it.
00:32:40.565 [2024-07-24 23:18:12.796646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.565 [2024-07-24 23:18:12.797054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.565 [2024-07-24 23:18:12.797097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.565 qpair failed and we were unable to recover it.
00:32:40.565 [2024-07-24 23:18:12.797411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.565 [2024-07-24 23:18:12.797702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.565 [2024-07-24 23:18:12.797759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.565 qpair failed and we were unable to recover it.
00:32:40.565 [2024-07-24 23:18:12.798146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.565 [2024-07-24 23:18:12.798435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.565 [2024-07-24 23:18:12.798477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.565 qpair failed and we were unable to recover it.
00:32:40.565 [2024-07-24 23:18:12.798784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.565 [2024-07-24 23:18:12.799053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.565 [2024-07-24 23:18:12.799072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.565 qpair failed and we were unable to recover it.
00:32:40.565 [2024-07-24 23:18:12.799357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.565 [2024-07-24 23:18:12.799713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.565 [2024-07-24 23:18:12.799765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.565 qpair failed and we were unable to recover it.
00:32:40.565 [2024-07-24 23:18:12.800102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.565 [2024-07-24 23:18:12.800473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.565 [2024-07-24 23:18:12.800515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.565 qpair failed and we were unable to recover it.
00:32:40.565 [2024-07-24 23:18:12.800832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.565 [2024-07-24 23:18:12.801137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.565 [2024-07-24 23:18:12.801184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.565 qpair failed and we were unable to recover it.
00:32:40.565 [2024-07-24 23:18:12.801372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.565 [2024-07-24 23:18:12.801654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.565 [2024-07-24 23:18:12.801694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.565 qpair failed and we were unable to recover it.
00:32:40.565 [2024-07-24 23:18:12.802025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.565 [2024-07-24 23:18:12.802315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.565 [2024-07-24 23:18:12.802334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.565 qpair failed and we were unable to recover it.
00:32:40.565 [2024-07-24 23:18:12.802676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.565 [2024-07-24 23:18:12.803010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.565 [2024-07-24 23:18:12.803052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.565 qpair failed and we were unable to recover it.
00:32:40.565 [2024-07-24 23:18:12.803412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.565 [2024-07-24 23:18:12.803704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.565 [2024-07-24 23:18:12.803756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.565 qpair failed and we were unable to recover it.
00:32:40.565 [2024-07-24 23:18:12.804001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.565 [2024-07-24 23:18:12.804374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.565 [2024-07-24 23:18:12.804415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.565 qpair failed and we were unable to recover it.
00:32:40.565 [2024-07-24 23:18:12.804788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.565 [2024-07-24 23:18:12.805144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.565 [2024-07-24 23:18:12.805185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.565 qpair failed and we were unable to recover it.
00:32:40.565 [2024-07-24 23:18:12.805500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.565 [2024-07-24 23:18:12.805765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.565 [2024-07-24 23:18:12.805785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.565 qpair failed and we were unable to recover it.
00:32:40.565 [2024-07-24 23:18:12.806043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.565 [2024-07-24 23:18:12.806277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.565 [2024-07-24 23:18:12.806296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.565 qpair failed and we were unable to recover it.
00:32:40.565 [2024-07-24 23:18:12.806635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.565 [2024-07-24 23:18:12.806974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.565 [2024-07-24 23:18:12.806993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.565 qpair failed and we were unable to recover it.
00:32:40.565 [2024-07-24 23:18:12.807196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.565 [2024-07-24 23:18:12.807603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.565 [2024-07-24 23:18:12.807643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.565 qpair failed and we were unable to recover it.
00:32:40.565 [2024-07-24 23:18:12.808005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.565 [2024-07-24 23:18:12.808250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.565 [2024-07-24 23:18:12.808270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.565 qpair failed and we were unable to recover it.
00:32:40.565 [2024-07-24 23:18:12.808460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.565 [2024-07-24 23:18:12.808727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.565 [2024-07-24 23:18:12.808746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.565 qpair failed and we were unable to recover it.
00:32:40.565 [2024-07-24 23:18:12.808935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.565 [2024-07-24 23:18:12.809239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.565 [2024-07-24 23:18:12.809279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.565 qpair failed and we were unable to recover it.
00:32:40.565 [2024-07-24 23:18:12.809593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.566 [2024-07-24 23:18:12.809708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.566 [2024-07-24 23:18:12.809762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.566 qpair failed and we were unable to recover it.
00:32:40.566 [2024-07-24 23:18:12.810116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.566 [2024-07-24 23:18:12.810363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.566 [2024-07-24 23:18:12.810381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.566 qpair failed and we were unable to recover it.
00:32:40.566 [2024-07-24 23:18:12.810707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.566 [2024-07-24 23:18:12.811027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.566 [2024-07-24 23:18:12.811068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.566 qpair failed and we were unable to recover it.
00:32:40.566 [2024-07-24 23:18:12.811456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.566 [2024-07-24 23:18:12.811581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.566 [2024-07-24 23:18:12.811623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.566 qpair failed and we were unable to recover it.
00:32:40.566 [2024-07-24 23:18:12.811874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.566 [2024-07-24 23:18:12.812183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.566 [2024-07-24 23:18:12.812223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.566 qpair failed and we were unable to recover it.
00:32:40.566 [2024-07-24 23:18:12.812591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.566 [2024-07-24 23:18:12.812999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.566 [2024-07-24 23:18:12.813040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.566 qpair failed and we were unable to recover it.
00:32:40.566 [2024-07-24 23:18:12.813444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.566 [2024-07-24 23:18:12.813824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.566 [2024-07-24 23:18:12.813865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.566 qpair failed and we were unable to recover it.
00:32:40.566 [2024-07-24 23:18:12.814238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.566 [2024-07-24 23:18:12.814463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.566 [2024-07-24 23:18:12.814504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.566 qpair failed and we were unable to recover it.
00:32:40.566 [2024-07-24 23:18:12.814854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.566 [2024-07-24 23:18:12.815134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.566 [2024-07-24 23:18:12.815175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.566 qpair failed and we were unable to recover it.
00:32:40.566 [2024-07-24 23:18:12.815462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.566 [2024-07-24 23:18:12.815810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.566 [2024-07-24 23:18:12.815852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.566 qpair failed and we were unable to recover it.
00:32:40.566 [2024-07-24 23:18:12.816168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.566 [2024-07-24 23:18:12.816477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.566 [2024-07-24 23:18:12.816517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.566 qpair failed and we were unable to recover it.
00:32:40.566 [2024-07-24 23:18:12.816795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.566 [2024-07-24 23:18:12.817108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.566 [2024-07-24 23:18:12.817148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.566 qpair failed and we were unable to recover it.
00:32:40.566 [2024-07-24 23:18:12.817452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.566 [2024-07-24 23:18:12.817736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.566 [2024-07-24 23:18:12.817778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.566 qpair failed and we were unable to recover it.
00:32:40.566 [2024-07-24 23:18:12.818067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.566 [2024-07-24 23:18:12.818361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.566 [2024-07-24 23:18:12.818406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.566 qpair failed and we were unable to recover it.
00:32:40.566 [2024-07-24 23:18:12.818736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.566 [2024-07-24 23:18:12.819043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.566 [2024-07-24 23:18:12.819083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.566 qpair failed and we were unable to recover it.
00:32:40.566 [2024-07-24 23:18:12.819316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.566 [2024-07-24 23:18:12.819632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.566 [2024-07-24 23:18:12.819672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.566 qpair failed and we were unable to recover it.
00:32:40.566 [2024-07-24 23:18:12.820082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.566 [2024-07-24 23:18:12.820363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.566 [2024-07-24 23:18:12.820402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.566 qpair failed and we were unable to recover it.
00:32:40.566 [2024-07-24 23:18:12.820796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.566 [2024-07-24 23:18:12.821178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.566 [2024-07-24 23:18:12.821218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.566 qpair failed and we were unable to recover it.
00:32:40.566 [2024-07-24 23:18:12.821507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.566 [2024-07-24 23:18:12.821737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.566 [2024-07-24 23:18:12.821779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.566 qpair failed and we were unable to recover it.
00:32:40.566 [2024-07-24 23:18:12.822146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.566 [2024-07-24 23:18:12.822366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.566 [2024-07-24 23:18:12.822407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.566 qpair failed and we were unable to recover it.
00:32:40.566 [2024-07-24 23:18:12.822706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.566 [2024-07-24 23:18:12.823001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.566 [2024-07-24 23:18:12.823041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.566 qpair failed and we were unable to recover it.
00:32:40.566 [2024-07-24 23:18:12.823296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.566 [2024-07-24 23:18:12.823690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.566 [2024-07-24 23:18:12.823740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.566 qpair failed and we were unable to recover it.
00:32:40.566 [2024-07-24 23:18:12.824033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.566 [2024-07-24 23:18:12.824393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.566 [2024-07-24 23:18:12.824432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.566 qpair failed and we were unable to recover it.
00:32:40.566 [2024-07-24 23:18:12.824765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.566 [2024-07-24 23:18:12.825043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.566 [2024-07-24 23:18:12.825083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.566 qpair failed and we were unable to recover it.
00:32:40.566 [2024-07-24 23:18:12.825331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.566 [2024-07-24 23:18:12.825684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.566 [2024-07-24 23:18:12.825733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.566 qpair failed and we were unable to recover it.
00:32:40.566 [2024-07-24 23:18:12.826102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.566 [2024-07-24 23:18:12.826451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.566 [2024-07-24 23:18:12.826491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.566 qpair failed and we were unable to recover it.
00:32:40.566 [2024-07-24 23:18:12.826799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.566 [2024-07-24 23:18:12.827096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.566 [2024-07-24 23:18:12.827136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.566 qpair failed and we were unable to recover it.
00:32:40.566 [2024-07-24 23:18:12.827516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.566 [2024-07-24 23:18:12.827915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.566 [2024-07-24 23:18:12.827957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.566 qpair failed and we were unable to recover it.
00:32:40.566 [2024-07-24 23:18:12.828279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.566 [2024-07-24 23:18:12.828535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.566 [2024-07-24 23:18:12.828577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.567 qpair failed and we were unable to recover it.
00:32:40.567 [2024-07-24 23:18:12.828836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.567 [2024-07-24 23:18:12.829123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.567 [2024-07-24 23:18:12.829164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.567 qpair failed and we were unable to recover it.
00:32:40.567 [2024-07-24 23:18:12.829500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.567 [2024-07-24 23:18:12.829868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.567 [2024-07-24 23:18:12.829909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.567 qpair failed and we were unable to recover it.
00:32:40.567 [2024-07-24 23:18:12.830212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.567 [2024-07-24 23:18:12.830581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.567 [2024-07-24 23:18:12.830620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.567 qpair failed and we were unable to recover it.
00:32:40.567 [2024-07-24 23:18:12.830942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.567 [2024-07-24 23:18:12.831359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.567 [2024-07-24 23:18:12.831399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.567 qpair failed and we were unable to recover it.
00:32:40.567 [2024-07-24 23:18:12.831785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.567 [2024-07-24 23:18:12.832158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.567 [2024-07-24 23:18:12.832199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.567 qpair failed and we were unable to recover it.
00:32:40.567 [2024-07-24 23:18:12.832612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.567 [2024-07-24 23:18:12.832936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.567 [2024-07-24 23:18:12.832955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.567 qpair failed and we were unable to recover it.
00:32:40.567 [2024-07-24 23:18:12.833314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.567 [2024-07-24 23:18:12.833683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.567 [2024-07-24 23:18:12.833734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.567 qpair failed and we were unable to recover it.
00:32:40.567 [2024-07-24 23:18:12.834121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.567 [2024-07-24 23:18:12.834350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.567 [2024-07-24 23:18:12.834390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.567 qpair failed and we were unable to recover it.
00:32:40.567 [2024-07-24 23:18:12.834641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.567 [2024-07-24 23:18:12.834950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.567 [2024-07-24 23:18:12.834991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.567 qpair failed and we were unable to recover it.
00:32:40.567 [2024-07-24 23:18:12.835373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.567 [2024-07-24 23:18:12.835749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.567 [2024-07-24 23:18:12.835790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.567 qpair failed and we were unable to recover it.
00:32:40.567 [2024-07-24 23:18:12.836046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.567 [2024-07-24 23:18:12.836279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.567 [2024-07-24 23:18:12.836297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.567 qpair failed and we were unable to recover it.
00:32:40.567 [2024-07-24 23:18:12.836554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.567 [2024-07-24 23:18:12.836850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.567 [2024-07-24 23:18:12.836892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.567 qpair failed and we were unable to recover it.
00:32:40.567 [2024-07-24 23:18:12.837278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.567 [2024-07-24 23:18:12.837690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.567 [2024-07-24 23:18:12.837741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.567 qpair failed and we were unable to recover it.
00:32:40.567 [2024-07-24 23:18:12.838125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.567 [2024-07-24 23:18:12.838343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.567 [2024-07-24 23:18:12.838383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.567 qpair failed and we were unable to recover it.
00:32:40.567 [2024-07-24 23:18:12.838766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.567 [2024-07-24 23:18:12.839121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.567 [2024-07-24 23:18:12.839161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.567 qpair failed and we were unable to recover it.
00:32:40.567 [2024-07-24 23:18:12.839528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.567 [2024-07-24 23:18:12.839930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.567 [2024-07-24 23:18:12.839972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.567 qpair failed and we were unable to recover it.
00:32:40.567 [2024-07-24 23:18:12.840202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.567 [2024-07-24 23:18:12.840512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.567 [2024-07-24 23:18:12.840552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.567 qpair failed and we were unable to recover it.
00:32:40.567 [2024-07-24 23:18:12.840942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.567 [2024-07-24 23:18:12.841178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.567 [2024-07-24 23:18:12.841218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.567 qpair failed and we were unable to recover it.
00:32:40.567 [2024-07-24 23:18:12.841538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.567 [2024-07-24 23:18:12.841954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.567 [2024-07-24 23:18:12.841995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.567 qpair failed and we were unable to recover it.
00:32:40.567 [2024-07-24 23:18:12.842379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.567 [2024-07-24 23:18:12.842673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.567 [2024-07-24 23:18:12.842691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.567 qpair failed and we were unable to recover it.
00:32:40.567 [2024-07-24 23:18:12.842978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.567 [2024-07-24 23:18:12.843355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.567 [2024-07-24 23:18:12.843395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.567 qpair failed and we were unable to recover it.
00:32:40.567 [2024-07-24 23:18:12.843760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.567 [2024-07-24 23:18:12.844045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.567 [2024-07-24 23:18:12.844086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.567 qpair failed and we were unable to recover it. 00:32:40.567 [2024-07-24 23:18:12.844478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.567 [2024-07-24 23:18:12.844854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.567 [2024-07-24 23:18:12.844896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.567 qpair failed and we were unable to recover it. 00:32:40.567 [2024-07-24 23:18:12.845283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.567 [2024-07-24 23:18:12.845586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.567 [2024-07-24 23:18:12.845625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.567 qpair failed and we were unable to recover it. 00:32:40.567 [2024-07-24 23:18:12.846006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.567 [2024-07-24 23:18:12.846242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.567 [2024-07-24 23:18:12.846282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.567 qpair failed and we were unable to recover it. 
00:32:40.567 [2024-07-24 23:18:12.846601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.567 [2024-07-24 23:18:12.846974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.567 [2024-07-24 23:18:12.847016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.567 qpair failed and we were unable to recover it. 00:32:40.567 [2024-07-24 23:18:12.847333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.567 [2024-07-24 23:18:12.847525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.567 [2024-07-24 23:18:12.847544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.567 qpair failed and we were unable to recover it. 00:32:40.567 [2024-07-24 23:18:12.847855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.567 [2024-07-24 23:18:12.848050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.567 [2024-07-24 23:18:12.848069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.567 qpair failed and we were unable to recover it. 00:32:40.567 [2024-07-24 23:18:12.848313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.568 [2024-07-24 23:18:12.848691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.568 [2024-07-24 23:18:12.848744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.568 qpair failed and we were unable to recover it. 
00:32:40.568 [2024-07-24 23:18:12.849058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.568 [2024-07-24 23:18:12.849413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.568 [2024-07-24 23:18:12.849454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.568 qpair failed and we were unable to recover it. 00:32:40.568 [2024-07-24 23:18:12.849726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.568 [2024-07-24 23:18:12.850060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.568 [2024-07-24 23:18:12.850100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.568 qpair failed and we were unable to recover it. 00:32:40.568 [2024-07-24 23:18:12.850469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.568 [2024-07-24 23:18:12.850845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.568 [2024-07-24 23:18:12.850886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.568 qpair failed and we were unable to recover it. 00:32:40.568 [2024-07-24 23:18:12.851267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.568 [2024-07-24 23:18:12.851543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.568 [2024-07-24 23:18:12.851583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.568 qpair failed and we were unable to recover it. 
00:32:40.568 [2024-07-24 23:18:12.851876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.568 [2024-07-24 23:18:12.852171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.568 [2024-07-24 23:18:12.852211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.568 qpair failed and we were unable to recover it. 00:32:40.568 [2024-07-24 23:18:12.852622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.568 [2024-07-24 23:18:12.853012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.568 [2024-07-24 23:18:12.853054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.568 qpair failed and we were unable to recover it. 00:32:40.568 [2024-07-24 23:18:12.853452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.568 [2024-07-24 23:18:12.853745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.568 [2024-07-24 23:18:12.853787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.568 qpair failed and we were unable to recover it. 00:32:40.568 [2024-07-24 23:18:12.854046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.568 [2024-07-24 23:18:12.854423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.568 [2024-07-24 23:18:12.854463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.568 qpair failed and we were unable to recover it. 
00:32:40.568 [2024-07-24 23:18:12.854778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.568 [2024-07-24 23:18:12.855149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.568 [2024-07-24 23:18:12.855190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.568 qpair failed and we were unable to recover it. 00:32:40.568 [2024-07-24 23:18:12.855446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.568 [2024-07-24 23:18:12.855826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.568 [2024-07-24 23:18:12.855867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.568 qpair failed and we were unable to recover it. 00:32:40.568 [2024-07-24 23:18:12.856228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.568 [2024-07-24 23:18:12.856646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.568 [2024-07-24 23:18:12.856686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.568 qpair failed and we were unable to recover it. 00:32:40.568 [2024-07-24 23:18:12.856950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.568 [2024-07-24 23:18:12.857260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.568 [2024-07-24 23:18:12.857300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.568 qpair failed and we were unable to recover it. 
00:32:40.568 [2024-07-24 23:18:12.857627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.568 [2024-07-24 23:18:12.857950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.568 [2024-07-24 23:18:12.857991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.568 qpair failed and we were unable to recover it. 00:32:40.568 [2024-07-24 23:18:12.858230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.568 [2024-07-24 23:18:12.858516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.568 [2024-07-24 23:18:12.858560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.568 qpair failed and we were unable to recover it. 00:32:40.568 [2024-07-24 23:18:12.858817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.568 [2024-07-24 23:18:12.859185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.568 [2024-07-24 23:18:12.859225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.568 qpair failed and we were unable to recover it. 00:32:40.568 [2024-07-24 23:18:12.859471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.568 [2024-07-24 23:18:12.859775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.568 [2024-07-24 23:18:12.859816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.568 qpair failed and we were unable to recover it. 
00:32:40.568 [2024-07-24 23:18:12.860120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.568 [2024-07-24 23:18:12.860429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.568 [2024-07-24 23:18:12.860481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.568 qpair failed and we were unable to recover it. 00:32:40.568 [2024-07-24 23:18:12.860849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.568 [2024-07-24 23:18:12.861192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.568 [2024-07-24 23:18:12.861233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.568 qpair failed and we were unable to recover it. 00:32:40.568 [2024-07-24 23:18:12.861541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.568 [2024-07-24 23:18:12.861941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.568 [2024-07-24 23:18:12.861982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.568 qpair failed and we were unable to recover it. 00:32:40.568 [2024-07-24 23:18:12.862279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.568 [2024-07-24 23:18:12.862518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.568 [2024-07-24 23:18:12.862559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.568 qpair failed and we were unable to recover it. 
00:32:40.568 [2024-07-24 23:18:12.862962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.568 [2024-07-24 23:18:12.863332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.568 [2024-07-24 23:18:12.863374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.568 qpair failed and we were unable to recover it. 00:32:40.568 [2024-07-24 23:18:12.863686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.568 [2024-07-24 23:18:12.864040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.568 [2024-07-24 23:18:12.864081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.568 qpair failed and we were unable to recover it. 00:32:40.568 [2024-07-24 23:18:12.864444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.568 [2024-07-24 23:18:12.864827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.568 [2024-07-24 23:18:12.864869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.568 qpair failed and we were unable to recover it. 00:32:40.568 [2024-07-24 23:18:12.865179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.568 [2024-07-24 23:18:12.865553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.568 [2024-07-24 23:18:12.865593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.568 qpair failed and we were unable to recover it. 
00:32:40.568 [2024-07-24 23:18:12.865980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.568 [2024-07-24 23:18:12.866210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.568 [2024-07-24 23:18:12.866251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.568 qpair failed and we were unable to recover it. 00:32:40.568 [2024-07-24 23:18:12.866654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.568 [2024-07-24 23:18:12.866984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.568 [2024-07-24 23:18:12.867027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.568 qpair failed and we were unable to recover it. 00:32:40.568 [2024-07-24 23:18:12.867416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.568 [2024-07-24 23:18:12.867744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.568 [2024-07-24 23:18:12.867786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.568 qpair failed and we were unable to recover it. 00:32:40.568 [2024-07-24 23:18:12.868149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.568 [2024-07-24 23:18:12.868518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.569 [2024-07-24 23:18:12.868560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.569 qpair failed and we were unable to recover it. 
00:32:40.569 [2024-07-24 23:18:12.868869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.569 [2024-07-24 23:18:12.869179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.569 [2024-07-24 23:18:12.869198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.569 qpair failed and we were unable to recover it. 00:32:40.569 [2024-07-24 23:18:12.869527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.569 [2024-07-24 23:18:12.869882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.569 [2024-07-24 23:18:12.869924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.569 qpair failed and we were unable to recover it. 00:32:40.569 [2024-07-24 23:18:12.870251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.569 [2024-07-24 23:18:12.870609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.569 [2024-07-24 23:18:12.870650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.569 qpair failed and we were unable to recover it. 00:32:40.569 [2024-07-24 23:18:12.870970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.569 [2024-07-24 23:18:12.871273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.569 [2024-07-24 23:18:12.871313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.569 qpair failed and we were unable to recover it. 
00:32:40.569 [2024-07-24 23:18:12.871698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.569 [2024-07-24 23:18:12.872084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.569 [2024-07-24 23:18:12.872125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.569 qpair failed and we were unable to recover it. 00:32:40.569 [2024-07-24 23:18:12.872532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.569 [2024-07-24 23:18:12.872903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.569 [2024-07-24 23:18:12.872945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.569 qpair failed and we were unable to recover it. 00:32:40.569 [2024-07-24 23:18:12.873310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.569 [2024-07-24 23:18:12.873690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.569 [2024-07-24 23:18:12.873741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.569 qpair failed and we were unable to recover it. 00:32:40.569 [2024-07-24 23:18:12.874051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.569 [2024-07-24 23:18:12.874262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.569 [2024-07-24 23:18:12.874302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.569 qpair failed and we were unable to recover it. 
00:32:40.569 [2024-07-24 23:18:12.874687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.569 [2024-07-24 23:18:12.874988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.569 [2024-07-24 23:18:12.875030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.569 qpair failed and we were unable to recover it. 00:32:40.569 [2024-07-24 23:18:12.875347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.569 [2024-07-24 23:18:12.875709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.569 [2024-07-24 23:18:12.875759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.569 qpair failed and we were unable to recover it. 00:32:40.569 [2024-07-24 23:18:12.876132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.569 [2024-07-24 23:18:12.876516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.569 [2024-07-24 23:18:12.876556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.569 qpair failed and we were unable to recover it. 00:32:40.569 [2024-07-24 23:18:12.876872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.569 [2024-07-24 23:18:12.877247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.569 [2024-07-24 23:18:12.877287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.569 qpair failed and we were unable to recover it. 
00:32:40.569 [2024-07-24 23:18:12.877675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.569 [2024-07-24 23:18:12.878083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.569 [2024-07-24 23:18:12.878124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.569 qpair failed and we were unable to recover it. 00:32:40.569 [2024-07-24 23:18:12.878542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.569 [2024-07-24 23:18:12.878901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.569 [2024-07-24 23:18:12.878942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.569 qpair failed and we were unable to recover it. 00:32:40.569 [2024-07-24 23:18:12.879280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.569 [2024-07-24 23:18:12.879643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.569 [2024-07-24 23:18:12.879684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.569 qpair failed and we were unable to recover it. 00:32:40.569 [2024-07-24 23:18:12.880063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.569 [2024-07-24 23:18:12.880440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.569 [2024-07-24 23:18:12.880480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.569 qpair failed and we were unable to recover it. 
00:32:40.569 [2024-07-24 23:18:12.880836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.569 [2024-07-24 23:18:12.881041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.569 [2024-07-24 23:18:12.881060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.569 qpair failed and we were unable to recover it. 00:32:40.569 [2024-07-24 23:18:12.881409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.569 [2024-07-24 23:18:12.881786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.569 [2024-07-24 23:18:12.881829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.569 qpair failed and we were unable to recover it. 00:32:40.569 [2024-07-24 23:18:12.882219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.569 [2024-07-24 23:18:12.882567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.569 [2024-07-24 23:18:12.882608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.569 qpair failed and we were unable to recover it. 00:32:40.569 [2024-07-24 23:18:12.882908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.569 [2024-07-24 23:18:12.883263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.569 [2024-07-24 23:18:12.883303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.569 qpair failed and we were unable to recover it. 
00:32:40.569 [2024-07-24 23:18:12.883668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.569 [2024-07-24 23:18:12.884039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.569 [2024-07-24 23:18:12.884080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.569 qpair failed and we were unable to recover it. 00:32:40.569 [2024-07-24 23:18:12.884469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.569 [2024-07-24 23:18:12.884803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.569 [2024-07-24 23:18:12.884845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.569 qpair failed and we were unable to recover it. 00:32:40.569 [2024-07-24 23:18:12.885233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.569 [2024-07-24 23:18:12.885471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.569 [2024-07-24 23:18:12.885512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.569 qpair failed and we were unable to recover it. 00:32:40.569 [2024-07-24 23:18:12.885882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.569 [2024-07-24 23:18:12.886264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.569 [2024-07-24 23:18:12.886305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.569 qpair failed and we were unable to recover it. 
00:32:40.569 [2024-07-24 23:18:12.886648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.569 [2024-07-24 23:18:12.886996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.569 [2024-07-24 23:18:12.887021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.569 qpair failed and we were unable to recover it. 00:32:40.569 [2024-07-24 23:18:12.887359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.569 [2024-07-24 23:18:12.887722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.569 [2024-07-24 23:18:12.887742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.569 qpair failed and we were unable to recover it. 00:32:40.569 [2024-07-24 23:18:12.888054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.569 [2024-07-24 23:18:12.888429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.569 [2024-07-24 23:18:12.888448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.569 qpair failed and we were unable to recover it. 00:32:40.569 [2024-07-24 23:18:12.888782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.569 [2024-07-24 23:18:12.889047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.569 [2024-07-24 23:18:12.889066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.570 qpair failed and we were unable to recover it. 
00:32:40.570 [2024-07-24 23:18:12.889396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.570 [2024-07-24 23:18:12.889641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.570 [2024-07-24 23:18:12.889660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.570 qpair failed and we were unable to recover it.
[... the same cycle — two posix.c:1032:posix_sock_create "connect() failed, errno = 111" records, one nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420" record, and "qpair failed and we were unable to recover it." — repeats from 23:18:12.889976 through 23:18:12.947625 ...]
00:32:40.573 [2024-07-24 23:18:12.947882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.573 [2024-07-24 23:18:12.948215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.573 [2024-07-24 23:18:12.948234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:40.573 qpair failed and we were unable to recover it.
00:32:40.573 [2024-07-24 23:18:12.948588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.573 [2024-07-24 23:18:12.948930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.573 [2024-07-24 23:18:12.948972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.573 qpair failed and we were unable to recover it. 00:32:40.573 [2024-07-24 23:18:12.949227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.573 [2024-07-24 23:18:12.949536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.573 [2024-07-24 23:18:12.949577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.573 qpair failed and we were unable to recover it. 00:32:40.573 [2024-07-24 23:18:12.949964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.573 [2024-07-24 23:18:12.950158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.573 [2024-07-24 23:18:12.950176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.573 qpair failed and we were unable to recover it. 00:32:40.573 [2024-07-24 23:18:12.950513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.573 [2024-07-24 23:18:12.950826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.573 [2024-07-24 23:18:12.950845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.573 qpair failed and we were unable to recover it. 
00:32:40.573 [2024-07-24 23:18:12.951126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.573 [2024-07-24 23:18:12.951402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.573 [2024-07-24 23:18:12.951421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.573 qpair failed and we were unable to recover it. 00:32:40.573 [2024-07-24 23:18:12.951664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.573 [2024-07-24 23:18:12.952004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.573 [2024-07-24 23:18:12.952023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.573 qpair failed and we were unable to recover it. 00:32:40.573 [2024-07-24 23:18:12.952383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.573 [2024-07-24 23:18:12.952694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.573 [2024-07-24 23:18:12.952713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.573 qpair failed and we were unable to recover it. 00:32:40.573 [2024-07-24 23:18:12.953065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.573 [2024-07-24 23:18:12.953426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.573 [2024-07-24 23:18:12.953467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.573 qpair failed and we were unable to recover it. 
00:32:40.573 [2024-07-24 23:18:12.953851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.573 [2024-07-24 23:18:12.954189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.573 [2024-07-24 23:18:12.954208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.573 qpair failed and we were unable to recover it. 00:32:40.573 [2024-07-24 23:18:12.954455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.573 [2024-07-24 23:18:12.954787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.573 [2024-07-24 23:18:12.954806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.573 qpair failed and we were unable to recover it. 00:32:40.573 [2024-07-24 23:18:12.955071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.573 [2024-07-24 23:18:12.955415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.573 [2024-07-24 23:18:12.955434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.573 qpair failed and we were unable to recover it. 00:32:40.573 [2024-07-24 23:18:12.955794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.573 [2024-07-24 23:18:12.956136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.573 [2024-07-24 23:18:12.956178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.573 qpair failed and we were unable to recover it. 
00:32:40.573 [2024-07-24 23:18:12.956487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.573 [2024-07-24 23:18:12.956794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.573 [2024-07-24 23:18:12.956828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.573 qpair failed and we were unable to recover it. 00:32:40.573 [2024-07-24 23:18:12.957091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.573 [2024-07-24 23:18:12.957426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.573 [2024-07-24 23:18:12.957445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.573 qpair failed and we were unable to recover it. 00:32:40.573 [2024-07-24 23:18:12.957800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.573 [2024-07-24 23:18:12.958110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.573 [2024-07-24 23:18:12.958128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.573 qpair failed and we were unable to recover it. 00:32:40.573 [2024-07-24 23:18:12.958345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.573 [2024-07-24 23:18:12.958626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.573 [2024-07-24 23:18:12.958667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.573 qpair failed and we were unable to recover it. 
00:32:40.573 [2024-07-24 23:18:12.959073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.573 [2024-07-24 23:18:12.959372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.573 [2024-07-24 23:18:12.959412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.573 qpair failed and we were unable to recover it. 00:32:40.573 [2024-07-24 23:18:12.959721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.573 [2024-07-24 23:18:12.959961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.573 [2024-07-24 23:18:12.959980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.573 qpair failed and we were unable to recover it. 00:32:40.573 [2024-07-24 23:18:12.960316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.573 [2024-07-24 23:18:12.960673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.573 [2024-07-24 23:18:12.960692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.573 qpair failed and we were unable to recover it. 00:32:40.573 [2024-07-24 23:18:12.960958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.573 [2024-07-24 23:18:12.961310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.573 [2024-07-24 23:18:12.961329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.573 qpair failed and we were unable to recover it. 
00:32:40.573 [2024-07-24 23:18:12.961674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.573 [2024-07-24 23:18:12.962062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.573 [2024-07-24 23:18:12.962103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.573 qpair failed and we were unable to recover it. 00:32:40.573 [2024-07-24 23:18:12.962523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.574 [2024-07-24 23:18:12.962894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.574 [2024-07-24 23:18:12.962913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.574 qpair failed and we were unable to recover it. 00:32:40.574 [2024-07-24 23:18:12.963247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.574 [2024-07-24 23:18:12.963579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.574 [2024-07-24 23:18:12.963597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.574 qpair failed and we were unable to recover it. 00:32:40.574 [2024-07-24 23:18:12.963776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.574 [2024-07-24 23:18:12.964111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.574 [2024-07-24 23:18:12.964130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.574 qpair failed and we were unable to recover it. 
00:32:40.574 [2024-07-24 23:18:12.964450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.574 [2024-07-24 23:18:12.964794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.574 [2024-07-24 23:18:12.964814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.574 qpair failed and we were unable to recover it. 00:32:40.574 [2024-07-24 23:18:12.965081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.574 [2024-07-24 23:18:12.965348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.574 [2024-07-24 23:18:12.965367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.574 qpair failed and we were unable to recover it. 00:32:40.574 [2024-07-24 23:18:12.965618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.574 [2024-07-24 23:18:12.965945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.574 [2024-07-24 23:18:12.965964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.574 qpair failed and we were unable to recover it. 00:32:40.574 [2024-07-24 23:18:12.966291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.574 [2024-07-24 23:18:12.966546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.574 [2024-07-24 23:18:12.966565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.574 qpair failed and we were unable to recover it. 
00:32:40.574 [2024-07-24 23:18:12.966854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.574 [2024-07-24 23:18:12.967199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.574 [2024-07-24 23:18:12.967240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.574 qpair failed and we were unable to recover it. 00:32:40.574 [2024-07-24 23:18:12.967623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.574 [2024-07-24 23:18:12.967935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.574 [2024-07-24 23:18:12.967978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.574 qpair failed and we were unable to recover it. 00:32:40.574 [2024-07-24 23:18:12.968274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.574 [2024-07-24 23:18:12.968612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.574 [2024-07-24 23:18:12.968652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.574 qpair failed and we were unable to recover it. 00:32:40.574 [2024-07-24 23:18:12.969033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.574 [2024-07-24 23:18:12.969284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.574 [2024-07-24 23:18:12.969303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:40.574 qpair failed and we were unable to recover it. 
00:32:40.574 [2024-07-24 23:18:12.969512] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23239f0 is same with the state(5) to be set 00:32:40.574 [2024-07-24 23:18:12.970027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.574 [2024-07-24 23:18:12.970390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.574 [2024-07-24 23:18:12.970439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.574 qpair failed and we were unable to recover it. 00:32:40.574 [2024-07-24 23:18:12.970869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.574 [2024-07-24 23:18:12.971192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.574 [2024-07-24 23:18:12.971212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.574 qpair failed and we were unable to recover it. 00:32:40.574 [2024-07-24 23:18:12.971569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.574 [2024-07-24 23:18:12.971831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.574 [2024-07-24 23:18:12.971851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.574 qpair failed and we were unable to recover it. 00:32:40.574 [2024-07-24 23:18:12.972179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.574 [2024-07-24 23:18:12.972464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.574 [2024-07-24 23:18:12.972483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.574 qpair failed and we were unable to recover it. 
00:32:40.574 [2024-07-24 23:18:12.972824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.574 [2024-07-24 23:18:12.973072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.574 [2024-07-24 23:18:12.973091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.574 qpair failed and we were unable to recover it. 00:32:40.574 [2024-07-24 23:18:12.973418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.574 [2024-07-24 23:18:12.973784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.574 [2024-07-24 23:18:12.973803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.574 qpair failed and we were unable to recover it. 00:32:40.574 [2024-07-24 23:18:12.974015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.574 [2024-07-24 23:18:12.974331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.574 [2024-07-24 23:18:12.974350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.574 qpair failed and we were unable to recover it. 00:32:40.574 [2024-07-24 23:18:12.974664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.574 [2024-07-24 23:18:12.975001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.574 [2024-07-24 23:18:12.975021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.574 qpair failed and we were unable to recover it. 
00:32:40.574 [2024-07-24 23:18:12.975300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.574 [2024-07-24 23:18:12.975619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.574 [2024-07-24 23:18:12.975660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.574 qpair failed and we were unable to recover it. 00:32:40.574 [2024-07-24 23:18:12.976062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.574 [2024-07-24 23:18:12.976369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.574 [2024-07-24 23:18:12.976387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.574 qpair failed and we were unable to recover it. 00:32:40.574 [2024-07-24 23:18:12.976728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.574 [2024-07-24 23:18:12.977002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.574 [2024-07-24 23:18:12.977021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.574 qpair failed and we were unable to recover it. 00:32:40.574 [2024-07-24 23:18:12.977289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.574 [2024-07-24 23:18:12.977469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.574 [2024-07-24 23:18:12.977487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.574 qpair failed and we were unable to recover it. 
00:32:40.574 [2024-07-24 23:18:12.977683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.574 [2024-07-24 23:18:12.978037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.574 [2024-07-24 23:18:12.978056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.574 qpair failed and we were unable to recover it. 00:32:40.574 [2024-07-24 23:18:12.978394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.575 [2024-07-24 23:18:12.978708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.575 [2024-07-24 23:18:12.978733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.575 qpair failed and we were unable to recover it. 00:32:40.575 [2024-07-24 23:18:12.979068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.575 [2024-07-24 23:18:12.979353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.575 [2024-07-24 23:18:12.979372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.575 qpair failed and we were unable to recover it. 00:32:40.575 [2024-07-24 23:18:12.979614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.575 [2024-07-24 23:18:12.979932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.575 [2024-07-24 23:18:12.979951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.575 qpair failed and we were unable to recover it. 
00:32:40.575 [2024-07-24 23:18:12.980223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.575 [2024-07-24 23:18:12.980482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.575 [2024-07-24 23:18:12.980501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.575 qpair failed and we were unable to recover it. 00:32:40.575 [2024-07-24 23:18:12.980817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.575 [2024-07-24 23:18:12.981154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.575 [2024-07-24 23:18:12.981173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.575 qpair failed and we were unable to recover it. 00:32:40.575 [2024-07-24 23:18:12.981494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.575 [2024-07-24 23:18:12.981794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.575 [2024-07-24 23:18:12.981835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.575 qpair failed and we were unable to recover it. 00:32:40.575 [2024-07-24 23:18:12.982199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.575 [2024-07-24 23:18:12.982534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.575 [2024-07-24 23:18:12.982574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.575 qpair failed and we were unable to recover it. 
00:32:40.851 [2024-07-24 23:18:12.982911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.851 [2024-07-24 23:18:12.983296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.851 [2024-07-24 23:18:12.983315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.851 qpair failed and we were unable to recover it. 00:32:40.851 [2024-07-24 23:18:12.983632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.851 [2024-07-24 23:18:12.983892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.851 [2024-07-24 23:18:12.983912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.851 qpair failed and we were unable to recover it. 00:32:40.851 [2024-07-24 23:18:12.984174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.851 [2024-07-24 23:18:12.984508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.851 [2024-07-24 23:18:12.984526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.851 qpair failed and we were unable to recover it. 00:32:40.851 [2024-07-24 23:18:12.984785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.851 [2024-07-24 23:18:12.985145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.851 [2024-07-24 23:18:12.985185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.851 qpair failed and we were unable to recover it. 
00:32:40.851 [2024-07-24 23:18:12.985478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.851 [2024-07-24 23:18:12.985837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.851 [2024-07-24 23:18:12.985879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.851 qpair failed and we were unable to recover it. 00:32:40.851 [2024-07-24 23:18:12.986193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.851 [2024-07-24 23:18:12.986499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.851 [2024-07-24 23:18:12.986539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.851 qpair failed and we were unable to recover it. 00:32:40.851 [2024-07-24 23:18:12.986944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.851 [2024-07-24 23:18:12.987216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.851 [2024-07-24 23:18:12.987235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.851 qpair failed and we were unable to recover it. 00:32:40.851 [2024-07-24 23:18:12.987534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.851 [2024-07-24 23:18:12.987890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.851 [2024-07-24 23:18:12.987932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.851 qpair failed and we were unable to recover it. 
00:32:40.851 [2024-07-24 23:18:12.988260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.851 [2024-07-24 23:18:12.988547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.851 [2024-07-24 23:18:12.988588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.851 qpair failed and we were unable to recover it. 00:32:40.851 [2024-07-24 23:18:12.988976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.851 [2024-07-24 23:18:12.989260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.851 [2024-07-24 23:18:12.989301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.851 qpair failed and we were unable to recover it. 00:32:40.851 [2024-07-24 23:18:12.989601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.851 [2024-07-24 23:18:12.989965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.851 [2024-07-24 23:18:12.990007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.851 qpair failed and we were unable to recover it. 00:32:40.851 [2024-07-24 23:18:12.990403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.851 [2024-07-24 23:18:12.990790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.851 [2024-07-24 23:18:12.990832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.851 qpair failed and we were unable to recover it. 
00:32:40.851 [2024-07-24 23:18:12.991247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.851 [2024-07-24 23:18:12.991549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.851 [2024-07-24 23:18:12.991590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.851 qpair failed and we were unable to recover it.
00:32:40.851 [2024-07-24 23:18:12.991975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.851 [2024-07-24 23:18:12.992380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.851 [2024-07-24 23:18:12.992421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.851 qpair failed and we were unable to recover it.
00:32:40.851 [2024-07-24 23:18:12.992729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.851 [2024-07-24 23:18:12.993103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.851 [2024-07-24 23:18:12.993143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.851 qpair failed and we were unable to recover it.
00:32:40.851 [2024-07-24 23:18:12.993458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.851 [2024-07-24 23:18:12.993761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.851 [2024-07-24 23:18:12.993781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.851 qpair failed and we were unable to recover it.
00:32:40.851 [2024-07-24 23:18:12.994006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.851 [2024-07-24 23:18:12.994194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.851 [2024-07-24 23:18:12.994213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.851 qpair failed and we were unable to recover it.
00:32:40.851 [2024-07-24 23:18:12.994572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.851 [2024-07-24 23:18:12.994802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.851 [2024-07-24 23:18:12.994849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.851 qpair failed and we were unable to recover it.
00:32:40.851 [2024-07-24 23:18:12.995186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.851 [2024-07-24 23:18:12.995567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.851 [2024-07-24 23:18:12.995608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.851 qpair failed and we were unable to recover it.
00:32:40.851 [2024-07-24 23:18:12.995937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.851 [2024-07-24 23:18:12.996281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.851 [2024-07-24 23:18:12.996321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.851 qpair failed and we were unable to recover it.
00:32:40.851 [2024-07-24 23:18:12.996713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.851 [2024-07-24 23:18:12.997127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.851 [2024-07-24 23:18:12.997169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.851 qpair failed and we were unable to recover it.
00:32:40.851 [2024-07-24 23:18:12.997539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.851 [2024-07-24 23:18:12.997915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.851 [2024-07-24 23:18:12.997957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.851 qpair failed and we were unable to recover it.
00:32:40.851 [2024-07-24 23:18:12.998267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.851 [2024-07-24 23:18:12.998501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.851 [2024-07-24 23:18:12.998541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.851 qpair failed and we were unable to recover it.
00:32:40.851 [2024-07-24 23:18:12.998936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.851 [2024-07-24 23:18:12.999314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.851 [2024-07-24 23:18:12.999355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.851 qpair failed and we were unable to recover it.
00:32:40.851 [2024-07-24 23:18:12.999743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.851 [2024-07-24 23:18:13.000078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.851 [2024-07-24 23:18:13.000119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.851 qpair failed and we were unable to recover it.
00:32:40.851 [2024-07-24 23:18:13.000484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.851 [2024-07-24 23:18:13.000857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.851 [2024-07-24 23:18:13.000877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.851 qpair failed and we were unable to recover it.
00:32:40.851 [2024-07-24 23:18:13.001141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.851 [2024-07-24 23:18:13.001578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.851 [2024-07-24 23:18:13.001619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.852 qpair failed and we were unable to recover it.
00:32:40.852 [2024-07-24 23:18:13.001934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.852 [2024-07-24 23:18:13.002294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.852 [2024-07-24 23:18:13.002313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.852 qpair failed and we were unable to recover it.
00:32:40.852 [2024-07-24 23:18:13.002596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.852 [2024-07-24 23:18:13.002981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.852 [2024-07-24 23:18:13.003023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.852 qpair failed and we were unable to recover it.
00:32:40.852 [2024-07-24 23:18:13.003400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.852 [2024-07-24 23:18:13.003687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.852 [2024-07-24 23:18:13.003737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.852 qpair failed and we were unable to recover it.
00:32:40.852 [2024-07-24 23:18:13.004128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.852 [2024-07-24 23:18:13.004507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.852 [2024-07-24 23:18:13.004547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.852 qpair failed and we were unable to recover it.
00:32:40.852 [2024-07-24 23:18:13.004946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.852 [2024-07-24 23:18:13.005344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.852 [2024-07-24 23:18:13.005384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.852 qpair failed and we were unable to recover it.
00:32:40.852 [2024-07-24 23:18:13.005693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.852 [2024-07-24 23:18:13.006003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.852 [2024-07-24 23:18:13.006022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.852 qpair failed and we were unable to recover it.
00:32:40.852 [2024-07-24 23:18:13.006285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.852 [2024-07-24 23:18:13.006587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.852 [2024-07-24 23:18:13.006628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.852 qpair failed and we were unable to recover it.
00:32:40.852 [2024-07-24 23:18:13.007013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.852 [2024-07-24 23:18:13.007356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.852 [2024-07-24 23:18:13.007374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.852 qpair failed and we were unable to recover it.
00:32:40.852 [2024-07-24 23:18:13.007730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.852 [2024-07-24 23:18:13.008012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.852 [2024-07-24 23:18:13.008032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.852 qpair failed and we were unable to recover it.
00:32:40.852 [2024-07-24 23:18:13.008356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.852 [2024-07-24 23:18:13.008732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.852 [2024-07-24 23:18:13.008774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.852 qpair failed and we were unable to recover it.
00:32:40.852 [2024-07-24 23:18:13.009065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.852 [2024-07-24 23:18:13.009374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.852 [2024-07-24 23:18:13.009415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.852 qpair failed and we were unable to recover it.
00:32:40.852 [2024-07-24 23:18:13.009807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.852 [2024-07-24 23:18:13.010091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.852 [2024-07-24 23:18:13.010133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.852 qpair failed and we were unable to recover it.
00:32:40.852 [2024-07-24 23:18:13.010469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.852 [2024-07-24 23:18:13.010797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.852 [2024-07-24 23:18:13.010840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.852 qpair failed and we were unable to recover it.
00:32:40.852 [2024-07-24 23:18:13.011227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.852 [2024-07-24 23:18:13.011529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.852 [2024-07-24 23:18:13.011570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.852 qpair failed and we were unable to recover it.
00:32:40.852 [2024-07-24 23:18:13.011884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.852 [2024-07-24 23:18:13.012257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.852 [2024-07-24 23:18:13.012299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.852 qpair failed and we were unable to recover it.
00:32:40.852 [2024-07-24 23:18:13.012646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.852 [2024-07-24 23:18:13.012990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.852 [2024-07-24 23:18:13.013032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.852 qpair failed and we were unable to recover it.
00:32:40.852 [2024-07-24 23:18:13.013399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.852 [2024-07-24 23:18:13.013686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.852 [2024-07-24 23:18:13.013738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.852 qpair failed and we were unable to recover it.
00:32:40.852 [2024-07-24 23:18:13.014064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.852 [2024-07-24 23:18:13.014235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.852 [2024-07-24 23:18:13.014254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.852 qpair failed and we were unable to recover it.
00:32:40.852 [2024-07-24 23:18:13.014518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.852 [2024-07-24 23:18:13.014866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.852 [2024-07-24 23:18:13.014910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.852 qpair failed and we were unable to recover it.
00:32:40.852 [2024-07-24 23:18:13.015218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.852 [2024-07-24 23:18:13.015527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.852 [2024-07-24 23:18:13.015571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.852 qpair failed and we were unable to recover it.
00:32:40.852 [2024-07-24 23:18:13.015885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.852 [2024-07-24 23:18:13.016254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.852 [2024-07-24 23:18:13.016295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.852 qpair failed and we were unable to recover it.
00:32:40.852 [2024-07-24 23:18:13.016697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.852 [2024-07-24 23:18:13.017008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.852 [2024-07-24 23:18:13.017050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.852 qpair failed and we were unable to recover it.
00:32:40.852 [2024-07-24 23:18:13.017437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.852 [2024-07-24 23:18:13.017824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.852 [2024-07-24 23:18:13.017865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.852 qpair failed and we were unable to recover it.
00:32:40.852 [2024-07-24 23:18:13.018125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.852 [2024-07-24 23:18:13.018415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.852 [2024-07-24 23:18:13.018456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.852 qpair failed and we were unable to recover it.
00:32:40.852 [2024-07-24 23:18:13.018756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.852 [2024-07-24 23:18:13.019055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.852 [2024-07-24 23:18:13.019097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.852 qpair failed and we were unable to recover it.
00:32:40.852 [2024-07-24 23:18:13.019414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.852 [2024-07-24 23:18:13.019741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.852 [2024-07-24 23:18:13.019784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.852 qpair failed and we were unable to recover it.
00:32:40.852 [2024-07-24 23:18:13.020174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.852 [2024-07-24 23:18:13.020607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.852 [2024-07-24 23:18:13.020648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.852 qpair failed and we were unable to recover it.
00:32:40.852 [2024-07-24 23:18:13.021037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.852 [2024-07-24 23:18:13.021343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.852 [2024-07-24 23:18:13.021384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.852 qpair failed and we were unable to recover it.
00:32:40.852 [2024-07-24 23:18:13.021756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.853 [2024-07-24 23:18:13.022141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.853 [2024-07-24 23:18:13.022183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.853 qpair failed and we were unable to recover it.
00:32:40.853 [2024-07-24 23:18:13.022594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.853 [2024-07-24 23:18:13.022895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.853 [2024-07-24 23:18:13.022915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.853 qpair failed and we were unable to recover it.
00:32:40.853 [2024-07-24 23:18:13.023171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.853 [2024-07-24 23:18:13.023554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.853 [2024-07-24 23:18:13.023596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.853 qpair failed and we were unable to recover it.
00:32:40.853 [2024-07-24 23:18:13.024003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.853 [2024-07-24 23:18:13.024371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.853 [2024-07-24 23:18:13.024413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.853 qpair failed and we were unable to recover it.
00:32:40.853 [2024-07-24 23:18:13.024810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.853 [2024-07-24 23:18:13.025184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.853 [2024-07-24 23:18:13.025225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.853 qpair failed and we were unable to recover it.
00:32:40.853 [2024-07-24 23:18:13.025561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.853 [2024-07-24 23:18:13.025876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.853 [2024-07-24 23:18:13.025918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.853 qpair failed and we were unable to recover it.
00:32:40.853 [2024-07-24 23:18:13.026210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.853 [2024-07-24 23:18:13.026527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.853 [2024-07-24 23:18:13.026567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.853 qpair failed and we were unable to recover it.
00:32:40.853 [2024-07-24 23:18:13.026940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.853 [2024-07-24 23:18:13.027302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.853 [2024-07-24 23:18:13.027344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.853 qpair failed and we were unable to recover it.
00:32:40.853 [2024-07-24 23:18:13.027607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.853 [2024-07-24 23:18:13.027969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.853 [2024-07-24 23:18:13.028010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.853 qpair failed and we were unable to recover it.
00:32:40.853 [2024-07-24 23:18:13.028309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.853 [2024-07-24 23:18:13.028549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.853 [2024-07-24 23:18:13.028590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.853 qpair failed and we were unable to recover it.
00:32:40.853 [2024-07-24 23:18:13.028944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.853 [2024-07-24 23:18:13.029275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.853 [2024-07-24 23:18:13.029317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.853 qpair failed and we were unable to recover it.
00:32:40.853 [2024-07-24 23:18:13.029621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.853 [2024-07-24 23:18:13.029928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.853 [2024-07-24 23:18:13.029947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.853 qpair failed and we were unable to recover it.
00:32:40.853 [2024-07-24 23:18:13.030211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.853 [2024-07-24 23:18:13.030529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.853 [2024-07-24 23:18:13.030570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.853 qpair failed and we were unable to recover it.
00:32:40.853 [2024-07-24 23:18:13.030939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.853 [2024-07-24 23:18:13.031300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.853 [2024-07-24 23:18:13.031347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.853 qpair failed and we were unable to recover it.
00:32:40.853 [2024-07-24 23:18:13.031590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.853 [2024-07-24 23:18:13.031941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.853 [2024-07-24 23:18:13.031983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.853 qpair failed and we were unable to recover it.
00:32:40.853 [2024-07-24 23:18:13.032353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.853 [2024-07-24 23:18:13.032711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.853 [2024-07-24 23:18:13.032762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.853 qpair failed and we were unable to recover it.
00:32:40.853 [2024-07-24 23:18:13.033103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.853 [2024-07-24 23:18:13.033501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.853 [2024-07-24 23:18:13.033542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.853 qpair failed and we were unable to recover it.
00:32:40.853 [2024-07-24 23:18:13.033848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.853 [2024-07-24 23:18:13.034158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.853 [2024-07-24 23:18:13.034199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.853 qpair failed and we were unable to recover it.
00:32:40.853 [2024-07-24 23:18:13.034635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.853 [2024-07-24 23:18:13.034932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.853 [2024-07-24 23:18:13.034952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.853 qpair failed and we were unable to recover it.
00:32:40.853 [2024-07-24 23:18:13.035238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.853 [2024-07-24 23:18:13.035524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.853 [2024-07-24 23:18:13.035565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.853 qpair failed and we were unable to recover it.
00:32:40.853 [2024-07-24 23:18:13.035885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.853 [2024-07-24 23:18:13.036109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.853 [2024-07-24 23:18:13.036150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.853 qpair failed and we were unable to recover it.
00:32:40.853 [2024-07-24 23:18:13.036473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.853 [2024-07-24 23:18:13.036791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.853 [2024-07-24 23:18:13.036832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.853 qpair failed and we were unable to recover it.
00:32:40.853 [2024-07-24 23:18:13.037150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.853 [2024-07-24 23:18:13.037511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.853 [2024-07-24 23:18:13.037551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.853 qpair failed and we were unable to recover it.
00:32:40.853 [2024-07-24 23:18:13.037927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.853 [2024-07-24 23:18:13.038244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.853 [2024-07-24 23:18:13.038285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.853 qpair failed and we were unable to recover it.
00:32:40.853 [2024-07-24 23:18:13.038634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.853 [2024-07-24 23:18:13.038909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.853 [2024-07-24 23:18:13.038929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.853 qpair failed and we were unable to recover it.
00:32:40.853 [2024-07-24 23:18:13.039248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.853 [2024-07-24 23:18:13.039632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.853 [2024-07-24 23:18:13.039674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.853 qpair failed and we were unable to recover it.
00:32:40.853 [2024-07-24 23:18:13.040024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.853 [2024-07-24 23:18:13.040306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.853 [2024-07-24 23:18:13.040345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.853 qpair failed and we were unable to recover it.
00:32:40.853 [2024-07-24 23:18:13.040678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.853 [2024-07-24 23:18:13.041030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.853 [2024-07-24 23:18:13.041072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.853 qpair failed and we were unable to recover it.
00:32:40.853 [2024-07-24 23:18:13.041401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.853 [2024-07-24 23:18:13.041721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.853 [2024-07-24 23:18:13.041740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.853 qpair failed and we were unable to recover it.
00:32:40.854 [2024-07-24 23:18:13.042113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.854 [2024-07-24 23:18:13.042399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.854 [2024-07-24 23:18:13.042440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.854 qpair failed and we were unable to recover it. 00:32:40.854 [2024-07-24 23:18:13.042827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.854 [2024-07-24 23:18:13.043143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.854 [2024-07-24 23:18:13.043185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.854 qpair failed and we were unable to recover it. 00:32:40.854 [2024-07-24 23:18:13.043450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.854 [2024-07-24 23:18:13.043783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.854 [2024-07-24 23:18:13.043827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.854 qpair failed and we were unable to recover it. 00:32:40.854 [2024-07-24 23:18:13.044032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.854 [2024-07-24 23:18:13.044309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.854 [2024-07-24 23:18:13.044350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.854 qpair failed and we were unable to recover it. 
00:32:40.854 [2024-07-24 23:18:13.044670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.854 [2024-07-24 23:18:13.045093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.854 [2024-07-24 23:18:13.045135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.854 qpair failed and we were unable to recover it. 00:32:40.854 [2024-07-24 23:18:13.045525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.854 [2024-07-24 23:18:13.045955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.854 [2024-07-24 23:18:13.045997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.854 qpair failed and we were unable to recover it. 00:32:40.854 [2024-07-24 23:18:13.046317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.854 [2024-07-24 23:18:13.046671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.854 [2024-07-24 23:18:13.046712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.854 qpair failed and we were unable to recover it. 00:32:40.854 [2024-07-24 23:18:13.046956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.854 [2024-07-24 23:18:13.047220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.854 [2024-07-24 23:18:13.047239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.854 qpair failed and we were unable to recover it. 
00:32:40.854 [2024-07-24 23:18:13.047572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.854 [2024-07-24 23:18:13.047941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.854 [2024-07-24 23:18:13.047985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.854 qpair failed and we were unable to recover it. 00:32:40.854 [2024-07-24 23:18:13.048297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.854 [2024-07-24 23:18:13.048615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.854 [2024-07-24 23:18:13.048656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.854 qpair failed and we were unable to recover it. 00:32:40.854 [2024-07-24 23:18:13.049048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.854 [2024-07-24 23:18:13.049453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.854 [2024-07-24 23:18:13.049472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.854 qpair failed and we were unable to recover it. 00:32:40.854 [2024-07-24 23:18:13.049817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.854 [2024-07-24 23:18:13.050085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.854 [2024-07-24 23:18:13.050128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.854 qpair failed and we were unable to recover it. 
00:32:40.854 [2024-07-24 23:18:13.050401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.854 [2024-07-24 23:18:13.050781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.854 [2024-07-24 23:18:13.050821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.854 qpair failed and we were unable to recover it. 00:32:40.854 [2024-07-24 23:18:13.051125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.854 [2024-07-24 23:18:13.051318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.854 [2024-07-24 23:18:13.051338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.854 qpair failed and we were unable to recover it. 00:32:40.854 [2024-07-24 23:18:13.051607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.854 [2024-07-24 23:18:13.051846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.854 [2024-07-24 23:18:13.051865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.854 qpair failed and we were unable to recover it. 00:32:40.854 [2024-07-24 23:18:13.052121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.854 [2024-07-24 23:18:13.052452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.854 [2024-07-24 23:18:13.052493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.854 qpair failed and we were unable to recover it. 
00:32:40.854 [2024-07-24 23:18:13.052821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.854 [2024-07-24 23:18:13.053118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.854 [2024-07-24 23:18:13.053159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.854 qpair failed and we were unable to recover it. 00:32:40.854 [2024-07-24 23:18:13.053558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.854 [2024-07-24 23:18:13.053935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.854 [2024-07-24 23:18:13.053977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.854 qpair failed and we were unable to recover it. 00:32:40.854 [2024-07-24 23:18:13.054294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.854 [2024-07-24 23:18:13.054664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.854 [2024-07-24 23:18:13.054706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.854 qpair failed and we were unable to recover it. 00:32:40.854 [2024-07-24 23:18:13.055115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.854 [2024-07-24 23:18:13.055422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.854 [2024-07-24 23:18:13.055463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.854 qpair failed and we were unable to recover it. 
00:32:40.854 [2024-07-24 23:18:13.055848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.854 [2024-07-24 23:18:13.056104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.854 [2024-07-24 23:18:13.056149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.854 qpair failed and we were unable to recover it. 00:32:40.854 [2024-07-24 23:18:13.056539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.854 [2024-07-24 23:18:13.056863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.854 [2024-07-24 23:18:13.056905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.854 qpair failed and we were unable to recover it. 00:32:40.854 [2024-07-24 23:18:13.057202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.854 [2024-07-24 23:18:13.057607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.854 [2024-07-24 23:18:13.057648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.854 qpair failed and we were unable to recover it. 00:32:40.854 [2024-07-24 23:18:13.057958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.854 [2024-07-24 23:18:13.058205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.854 [2024-07-24 23:18:13.058224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.854 qpair failed and we were unable to recover it. 
00:32:40.854 [2024-07-24 23:18:13.058628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.854 [2024-07-24 23:18:13.058875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.854 [2024-07-24 23:18:13.058894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.854 qpair failed and we were unable to recover it. 00:32:40.854 [2024-07-24 23:18:13.059158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.854 [2024-07-24 23:18:13.059379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.854 [2024-07-24 23:18:13.059421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.854 qpair failed and we were unable to recover it. 00:32:40.854 [2024-07-24 23:18:13.059685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.854 [2024-07-24 23:18:13.060034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.854 [2024-07-24 23:18:13.060075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.854 qpair failed and we were unable to recover it. 00:32:40.854 [2024-07-24 23:18:13.060490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.854 [2024-07-24 23:18:13.060799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.854 [2024-07-24 23:18:13.060841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.854 qpair failed and we were unable to recover it. 
00:32:40.854 [2024-07-24 23:18:13.061153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.855 [2024-07-24 23:18:13.061478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.855 [2024-07-24 23:18:13.061521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.855 qpair failed and we were unable to recover it. 00:32:40.855 [2024-07-24 23:18:13.061932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.855 [2024-07-24 23:18:13.062233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.855 [2024-07-24 23:18:13.062252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.855 qpair failed and we were unable to recover it. 00:32:40.855 [2024-07-24 23:18:13.062532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.855 [2024-07-24 23:18:13.062903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.855 [2024-07-24 23:18:13.062945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.855 qpair failed and we were unable to recover it. 00:32:40.855 [2024-07-24 23:18:13.063286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.855 [2024-07-24 23:18:13.063519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.855 [2024-07-24 23:18:13.063560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.855 qpair failed and we were unable to recover it. 
00:32:40.855 [2024-07-24 23:18:13.063864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.855 [2024-07-24 23:18:13.064244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.855 [2024-07-24 23:18:13.064285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.855 qpair failed and we were unable to recover it. 00:32:40.855 [2024-07-24 23:18:13.064675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.855 [2024-07-24 23:18:13.065047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.855 [2024-07-24 23:18:13.065089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.855 qpair failed and we were unable to recover it. 00:32:40.855 [2024-07-24 23:18:13.065467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.855 [2024-07-24 23:18:13.065851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.855 [2024-07-24 23:18:13.065894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.855 qpair failed and we were unable to recover it. 00:32:40.855 [2024-07-24 23:18:13.066312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.855 [2024-07-24 23:18:13.066689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.855 [2024-07-24 23:18:13.066747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.855 qpair failed and we were unable to recover it. 
00:32:40.855 [2024-07-24 23:18:13.067106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.855 [2024-07-24 23:18:13.067415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.855 [2024-07-24 23:18:13.067455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.855 qpair failed and we were unable to recover it. 00:32:40.855 [2024-07-24 23:18:13.067838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.855 [2024-07-24 23:18:13.068194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.855 [2024-07-24 23:18:13.068212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.855 qpair failed and we were unable to recover it. 00:32:40.855 [2024-07-24 23:18:13.068428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.855 [2024-07-24 23:18:13.068742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.855 [2024-07-24 23:18:13.068763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.855 qpair failed and we were unable to recover it. 00:32:40.855 [2024-07-24 23:18:13.069027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.855 [2024-07-24 23:18:13.069340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.855 [2024-07-24 23:18:13.069381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.855 qpair failed and we were unable to recover it. 
00:32:40.855 [2024-07-24 23:18:13.069758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.855 [2024-07-24 23:18:13.069977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.855 [2024-07-24 23:18:13.070018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.855 qpair failed and we were unable to recover it. 00:32:40.855 [2024-07-24 23:18:13.070322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.855 [2024-07-24 23:18:13.070708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.855 [2024-07-24 23:18:13.070760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.855 qpair failed and we were unable to recover it. 00:32:40.855 [2024-07-24 23:18:13.071028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.855 [2024-07-24 23:18:13.071254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.855 [2024-07-24 23:18:13.071295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.855 qpair failed and we were unable to recover it. 00:32:40.855 [2024-07-24 23:18:13.071613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.855 [2024-07-24 23:18:13.071998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.855 [2024-07-24 23:18:13.072044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.855 qpair failed and we were unable to recover it. 
00:32:40.855 [2024-07-24 23:18:13.072309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.855 [2024-07-24 23:18:13.072710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.855 [2024-07-24 23:18:13.072758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.855 qpair failed and we were unable to recover it. 00:32:40.855 [2024-07-24 23:18:13.073026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.855 [2024-07-24 23:18:13.073219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.855 [2024-07-24 23:18:13.073242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.855 qpair failed and we were unable to recover it. 00:32:40.855 [2024-07-24 23:18:13.073483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.855 [2024-07-24 23:18:13.073692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.855 [2024-07-24 23:18:13.073711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.855 qpair failed and we were unable to recover it. 00:32:40.855 [2024-07-24 23:18:13.073913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.855 [2024-07-24 23:18:13.074233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.855 [2024-07-24 23:18:13.074274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.855 qpair failed and we were unable to recover it. 
00:32:40.855 [2024-07-24 23:18:13.074644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.855 [2024-07-24 23:18:13.075016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.855 [2024-07-24 23:18:13.075061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.855 qpair failed and we were unable to recover it. 00:32:40.855 [2024-07-24 23:18:13.075382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.855 [2024-07-24 23:18:13.075648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.855 [2024-07-24 23:18:13.075689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.855 qpair failed and we were unable to recover it. 00:32:40.855 [2024-07-24 23:18:13.075960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.855 [2024-07-24 23:18:13.076270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.855 [2024-07-24 23:18:13.076289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.855 qpair failed and we were unable to recover it. 00:32:40.855 [2024-07-24 23:18:13.076571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.856 [2024-07-24 23:18:13.076928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.856 [2024-07-24 23:18:13.076972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.856 qpair failed and we were unable to recover it. 
00:32:40.856 [2024-07-24 23:18:13.077283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.856 [2024-07-24 23:18:13.077583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.856 [2024-07-24 23:18:13.077624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.856 qpair failed and we were unable to recover it. 00:32:40.856 [2024-07-24 23:18:13.077955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.856 [2024-07-24 23:18:13.078285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.856 [2024-07-24 23:18:13.078325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.856 qpair failed and we were unable to recover it. 00:32:40.856 [2024-07-24 23:18:13.078680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.856 [2024-07-24 23:18:13.079008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.856 [2024-07-24 23:18:13.079050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.856 qpair failed and we were unable to recover it. 00:32:40.856 [2024-07-24 23:18:13.079465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.856 [2024-07-24 23:18:13.079823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.856 [2024-07-24 23:18:13.079865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.856 qpair failed and we were unable to recover it. 
00:32:40.856 [2024-07-24 23:18:13.080258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.856 [2024-07-24 23:18:13.080626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.856 [2024-07-24 23:18:13.080667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.856 qpair failed and we were unable to recover it. 00:32:40.856 [2024-07-24 23:18:13.081014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.856 [2024-07-24 23:18:13.081255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.856 [2024-07-24 23:18:13.081298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.856 qpair failed and we were unable to recover it. 00:32:40.856 [2024-07-24 23:18:13.081705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.856 [2024-07-24 23:18:13.082019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.856 [2024-07-24 23:18:13.082061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.856 qpair failed and we were unable to recover it. 00:32:40.856 [2024-07-24 23:18:13.082380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.856 [2024-07-24 23:18:13.082639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.856 [2024-07-24 23:18:13.082687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.856 qpair failed and we were unable to recover it. 
00:32:40.856 [2024-07-24 23:18:13.082945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.856 [2024-07-24 23:18:13.083177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.856 [2024-07-24 23:18:13.083218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.856 qpair failed and we were unable to recover it. 00:32:40.856 [2024-07-24 23:18:13.083536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.856 [2024-07-24 23:18:13.083936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.856 [2024-07-24 23:18:13.083977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.856 qpair failed and we were unable to recover it. 00:32:40.856 [2024-07-24 23:18:13.084301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.856 [2024-07-24 23:18:13.084544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.856 [2024-07-24 23:18:13.084584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.856 qpair failed and we were unable to recover it. 00:32:40.856 [2024-07-24 23:18:13.084968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.856 [2024-07-24 23:18:13.085277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.856 [2024-07-24 23:18:13.085319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.856 qpair failed and we were unable to recover it. 
00:32:40.856 [2024-07-24 23:18:13.085694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.856 [2024-07-24 23:18:13.085989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.856 [2024-07-24 23:18:13.086008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.856 qpair failed and we were unable to recover it. 00:32:40.856 [2024-07-24 23:18:13.086333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.856 [2024-07-24 23:18:13.086711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.856 [2024-07-24 23:18:13.086761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.856 qpair failed and we were unable to recover it. 00:32:40.856 [2024-07-24 23:18:13.087064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.856 [2024-07-24 23:18:13.087369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.856 [2024-07-24 23:18:13.087410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.856 qpair failed and we were unable to recover it. 00:32:40.856 [2024-07-24 23:18:13.087791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.856 [2024-07-24 23:18:13.088160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.856 [2024-07-24 23:18:13.088200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.856 qpair failed and we were unable to recover it. 
00:32:40.856 [2024-07-24 23:18:13.088577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.856 [2024-07-24 23:18:13.088967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.856 [2024-07-24 23:18:13.089009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.856 qpair failed and we were unable to recover it. 00:32:40.856 [2024-07-24 23:18:13.089329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.856 [2024-07-24 23:18:13.089577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.856 [2024-07-24 23:18:13.089618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.856 qpair failed and we were unable to recover it. 00:32:40.856 [2024-07-24 23:18:13.089977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.856 [2024-07-24 23:18:13.090307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.856 [2024-07-24 23:18:13.090348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.856 qpair failed and we were unable to recover it. 00:32:40.856 [2024-07-24 23:18:13.090723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.856 [2024-07-24 23:18:13.091107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.856 [2024-07-24 23:18:13.091147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.856 qpair failed and we were unable to recover it. 
00:32:40.856 [2024-07-24 23:18:13.091552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.856 [2024-07-24 23:18:13.091918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.856 [2024-07-24 23:18:13.091959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.856 qpair failed and we were unable to recover it. 00:32:40.856 [2024-07-24 23:18:13.092325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.856 [2024-07-24 23:18:13.092708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.856 [2024-07-24 23:18:13.092757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.856 qpair failed and we were unable to recover it. 00:32:40.856 [2024-07-24 23:18:13.093092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.856 [2024-07-24 23:18:13.093393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.856 [2024-07-24 23:18:13.093435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.856 qpair failed and we were unable to recover it. 00:32:40.856 [2024-07-24 23:18:13.093822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.856 [2024-07-24 23:18:13.094113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.856 [2024-07-24 23:18:13.094154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.856 qpair failed and we were unable to recover it. 
00:32:40.856 [2024-07-24 23:18:13.094398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.856 [2024-07-24 23:18:13.094709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.856 [2024-07-24 23:18:13.094733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.856 qpair failed and we were unable to recover it. 00:32:40.856 [2024-07-24 23:18:13.094958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.856 [2024-07-24 23:18:13.095297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.856 [2024-07-24 23:18:13.095337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.856 qpair failed and we were unable to recover it. 00:32:40.856 [2024-07-24 23:18:13.095735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.856 [2024-07-24 23:18:13.096063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.856 [2024-07-24 23:18:13.096105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.856 qpair failed and we were unable to recover it. 00:32:40.856 [2024-07-24 23:18:13.096418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.856 [2024-07-24 23:18:13.096776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.856 [2024-07-24 23:18:13.096818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.856 qpair failed and we were unable to recover it. 
00:32:40.857 [2024-07-24 23:18:13.097072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.857 [2024-07-24 23:18:13.097333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.857 [2024-07-24 23:18:13.097352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.857 qpair failed and we were unable to recover it. 00:32:40.857 [2024-07-24 23:18:13.097701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.857 [2024-07-24 23:18:13.098030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.857 [2024-07-24 23:18:13.098072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.857 qpair failed and we were unable to recover it. 00:32:40.857 [2024-07-24 23:18:13.098411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.857 [2024-07-24 23:18:13.098680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.857 [2024-07-24 23:18:13.098751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.857 qpair failed and we were unable to recover it. 00:32:40.857 [2024-07-24 23:18:13.099144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.857 [2024-07-24 23:18:13.099369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.857 [2024-07-24 23:18:13.099409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.857 qpair failed and we were unable to recover it. 
00:32:40.857 [2024-07-24 23:18:13.099797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.857 [2024-07-24 23:18:13.100105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.857 [2024-07-24 23:18:13.100124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.857 qpair failed and we were unable to recover it. 00:32:40.857 [2024-07-24 23:18:13.100402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.857 [2024-07-24 23:18:13.100668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.857 [2024-07-24 23:18:13.100709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.857 qpair failed and we were unable to recover it. 00:32:40.857 [2024-07-24 23:18:13.101129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.857 [2024-07-24 23:18:13.101467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.857 [2024-07-24 23:18:13.101508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.857 qpair failed and we were unable to recover it. 00:32:40.857 [2024-07-24 23:18:13.101846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.857 [2024-07-24 23:18:13.102205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.857 [2024-07-24 23:18:13.102246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.857 qpair failed and we were unable to recover it. 
00:32:40.857 [2024-07-24 23:18:13.102634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.857 [2024-07-24 23:18:13.103015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.857 [2024-07-24 23:18:13.103056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.857 qpair failed and we were unable to recover it. 00:32:40.857 [2024-07-24 23:18:13.103425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.857 [2024-07-24 23:18:13.103784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.857 [2024-07-24 23:18:13.103826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.857 qpair failed and we were unable to recover it. 00:32:40.857 [2024-07-24 23:18:13.104218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.857 [2024-07-24 23:18:13.104604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.857 [2024-07-24 23:18:13.104645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.857 qpair failed and we were unable to recover it. 00:32:40.857 [2024-07-24 23:18:13.105053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.857 [2024-07-24 23:18:13.105413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.857 [2024-07-24 23:18:13.105454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.857 qpair failed and we were unable to recover it. 
00:32:40.857 [2024-07-24 23:18:13.105754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.857 [2024-07-24 23:18:13.106140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.857 [2024-07-24 23:18:13.106180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.857 qpair failed and we were unable to recover it. 00:32:40.857 [2024-07-24 23:18:13.106573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.857 [2024-07-24 23:18:13.106952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.857 [2024-07-24 23:18:13.106994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.857 qpair failed and we were unable to recover it. 00:32:40.857 [2024-07-24 23:18:13.107378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.857 [2024-07-24 23:18:13.107738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.857 [2024-07-24 23:18:13.107780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.857 qpair failed and we were unable to recover it. 00:32:40.857 [2024-07-24 23:18:13.108143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.857 [2024-07-24 23:18:13.108407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.857 [2024-07-24 23:18:13.108427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.857 qpair failed and we were unable to recover it. 
00:32:40.857 [2024-07-24 23:18:13.108744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.857 [2024-07-24 23:18:13.109123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.857 [2024-07-24 23:18:13.109170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.857 qpair failed and we were unable to recover it. 00:32:40.857 [2024-07-24 23:18:13.109564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.857 [2024-07-24 23:18:13.109909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.857 [2024-07-24 23:18:13.109951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.857 qpair failed and we were unable to recover it. 00:32:40.857 [2024-07-24 23:18:13.110287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.857 [2024-07-24 23:18:13.110684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.857 [2024-07-24 23:18:13.110734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.857 qpair failed and we were unable to recover it. 00:32:40.857 [2024-07-24 23:18:13.111134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.857 [2024-07-24 23:18:13.111460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.857 [2024-07-24 23:18:13.111501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.857 qpair failed and we were unable to recover it. 
00:32:40.857 [2024-07-24 23:18:13.111822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.857 [2024-07-24 23:18:13.112132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.857 [2024-07-24 23:18:13.112174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.857 qpair failed and we were unable to recover it. 00:32:40.857 [2024-07-24 23:18:13.112559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.857 [2024-07-24 23:18:13.112934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.857 [2024-07-24 23:18:13.112976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.857 qpair failed and we were unable to recover it. 00:32:40.857 [2024-07-24 23:18:13.113241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.857 [2024-07-24 23:18:13.113533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.857 [2024-07-24 23:18:13.113551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.857 qpair failed and we were unable to recover it. 00:32:40.857 [2024-07-24 23:18:13.113896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.857 [2024-07-24 23:18:13.114247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.857 [2024-07-24 23:18:13.114288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.857 qpair failed and we were unable to recover it. 
00:32:40.857 [2024-07-24 23:18:13.114655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.857 [2024-07-24 23:18:13.115060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.857 [2024-07-24 23:18:13.115102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.857 qpair failed and we were unable to recover it. 00:32:40.857 [2024-07-24 23:18:13.115420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.857 [2024-07-24 23:18:13.115736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.857 [2024-07-24 23:18:13.115777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.857 qpair failed and we were unable to recover it. 00:32:40.857 [2024-07-24 23:18:13.116166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.857 [2024-07-24 23:18:13.116473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.857 [2024-07-24 23:18:13.116513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.857 qpair failed and we were unable to recover it. 00:32:40.857 [2024-07-24 23:18:13.116845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.857 [2024-07-24 23:18:13.117175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.857 [2024-07-24 23:18:13.117217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.857 qpair failed and we were unable to recover it. 
00:32:40.857 [2024-07-24 23:18:13.117578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.857 [2024-07-24 23:18:13.117969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.857 [2024-07-24 23:18:13.118011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.858 qpair failed and we were unable to recover it. 00:32:40.858 [2024-07-24 23:18:13.118324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.858 [2024-07-24 23:18:13.118615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.858 [2024-07-24 23:18:13.118655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.858 qpair failed and we were unable to recover it. 00:32:40.858 [2024-07-24 23:18:13.119065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.858 [2024-07-24 23:18:13.119430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.858 [2024-07-24 23:18:13.119471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.858 qpair failed and we were unable to recover it. 00:32:40.858 [2024-07-24 23:18:13.119814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.858 [2024-07-24 23:18:13.120218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.858 [2024-07-24 23:18:13.120258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.858 qpair failed and we were unable to recover it. 
00:32:40.858 [2024-07-24 23:18:13.120644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.858 [2024-07-24 23:18:13.120911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.858 [2024-07-24 23:18:13.120954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.858 qpair failed and we were unable to recover it. 00:32:40.858 [2024-07-24 23:18:13.121181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.858 [2024-07-24 23:18:13.121493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.858 [2024-07-24 23:18:13.121534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.858 qpair failed and we were unable to recover it. 00:32:40.858 [2024-07-24 23:18:13.121833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.858 [2024-07-24 23:18:13.122119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.858 [2024-07-24 23:18:13.122159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.858 qpair failed and we were unable to recover it. 00:32:40.858 [2024-07-24 23:18:13.122404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.858 [2024-07-24 23:18:13.122790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.858 [2024-07-24 23:18:13.122832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.858 qpair failed and we were unable to recover it. 
00:32:40.858 [2024-07-24 23:18:13.123123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.858 [2024-07-24 23:18:13.123357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.858 [2024-07-24 23:18:13.123397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.858 qpair failed and we were unable to recover it. 00:32:40.858 [2024-07-24 23:18:13.123796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.858 [2024-07-24 23:18:13.124098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.858 [2024-07-24 23:18:13.124139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.858 qpair failed and we were unable to recover it. 00:32:40.858 [2024-07-24 23:18:13.124496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.858 [2024-07-24 23:18:13.124819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.858 [2024-07-24 23:18:13.124862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.858 qpair failed and we were unable to recover it. 00:32:40.858 [2024-07-24 23:18:13.125219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.858 [2024-07-24 23:18:13.125644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.858 [2024-07-24 23:18:13.125686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.858 qpair failed and we were unable to recover it. 
00:32:40.858 [2024-07-24 23:18:13.126019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.858 [2024-07-24 23:18:13.126403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.858 [2024-07-24 23:18:13.126443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.858 qpair failed and we were unable to recover it. 00:32:40.858 [2024-07-24 23:18:13.126793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.858 [2024-07-24 23:18:13.127099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.858 [2024-07-24 23:18:13.127141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.858 qpair failed and we were unable to recover it. 00:32:40.858 [2024-07-24 23:18:13.127529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.858 [2024-07-24 23:18:13.127933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.858 [2024-07-24 23:18:13.127974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.858 qpair failed and we were unable to recover it. 00:32:40.858 [2024-07-24 23:18:13.128288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.858 [2024-07-24 23:18:13.128514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.858 [2024-07-24 23:18:13.128554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.858 qpair failed and we were unable to recover it. 
00:32:40.858 [2024-07-24 23:18:13.128882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.858 [2024-07-24 23:18:13.129175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.858 [2024-07-24 23:18:13.129219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.858 qpair failed and we were unable to recover it. 00:32:40.858 [2024-07-24 23:18:13.129559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.858 [2024-07-24 23:18:13.129894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.858 [2024-07-24 23:18:13.129914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.858 qpair failed and we were unable to recover it. 00:32:40.858 [2024-07-24 23:18:13.130133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.858 [2024-07-24 23:18:13.130487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.858 [2024-07-24 23:18:13.130528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.858 qpair failed and we were unable to recover it. 00:32:40.858 [2024-07-24 23:18:13.130839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.858 [2024-07-24 23:18:13.131131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.858 [2024-07-24 23:18:13.131171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.858 qpair failed and we were unable to recover it. 
00:32:40.858 [2024-07-24 23:18:13.131543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.858 [2024-07-24 23:18:13.131809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.858 [2024-07-24 23:18:13.131852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.858 qpair failed and we were unable to recover it. 00:32:40.858 [2024-07-24 23:18:13.132156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.858 [2024-07-24 23:18:13.132456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.858 [2024-07-24 23:18:13.132497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.858 qpair failed and we were unable to recover it. 00:32:40.858 [2024-07-24 23:18:13.132906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.858 [2024-07-24 23:18:13.133195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.858 [2024-07-24 23:18:13.133236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.858 qpair failed and we were unable to recover it. 00:32:40.859 [2024-07-24 23:18:13.133557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.859 [2024-07-24 23:18:13.133945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.859 [2024-07-24 23:18:13.133988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.859 qpair failed and we were unable to recover it. 
00:32:40.859 [2024-07-24 23:18:13.134383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.859 [2024-07-24 23:18:13.134696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.859 [2024-07-24 23:18:13.134861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.859 qpair failed and we were unable to recover it. 00:32:40.859 [2024-07-24 23:18:13.135184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.859 [2024-07-24 23:18:13.135520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.859 [2024-07-24 23:18:13.135561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.859 qpair failed and we were unable to recover it. 00:32:40.859 [2024-07-24 23:18:13.135904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.859 [2024-07-24 23:18:13.136292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.859 [2024-07-24 23:18:13.136333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.859 qpair failed and we were unable to recover it. 00:32:40.859 [2024-07-24 23:18:13.136747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.859 [2024-07-24 23:18:13.137056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.859 [2024-07-24 23:18:13.137097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.859 qpair failed and we were unable to recover it. 
00:32:40.859 [2024-07-24 23:18:13.137500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.859 [2024-07-24 23:18:13.137872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.859 [2024-07-24 23:18:13.137913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.859 qpair failed and we were unable to recover it. 00:32:40.859 [2024-07-24 23:18:13.138261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.859 [2024-07-24 23:18:13.138530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.859 [2024-07-24 23:18:13.138549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.859 qpair failed and we were unable to recover it. 00:32:40.859 [2024-07-24 23:18:13.138897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.859 [2024-07-24 23:18:13.139230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.859 [2024-07-24 23:18:13.139250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.859 qpair failed and we were unable to recover it. 00:32:40.859 [2024-07-24 23:18:13.139541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.859 [2024-07-24 23:18:13.139885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.859 [2024-07-24 23:18:13.139926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.859 qpair failed and we were unable to recover it. 
00:32:40.859 [2024-07-24 23:18:13.140244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.859 [2024-07-24 23:18:13.140507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.859 [2024-07-24 23:18:13.140526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.859 qpair failed and we were unable to recover it. 00:32:40.859 [2024-07-24 23:18:13.140832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.859 [2024-07-24 23:18:13.141049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.859 [2024-07-24 23:18:13.141068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.859 qpair failed and we were unable to recover it. 00:32:40.859 [2024-07-24 23:18:13.141394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.859 [2024-07-24 23:18:13.141750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.859 [2024-07-24 23:18:13.141771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.859 qpair failed and we were unable to recover it. 00:32:40.859 [2024-07-24 23:18:13.142041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.859 [2024-07-24 23:18:13.142368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.859 [2024-07-24 23:18:13.142388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.859 qpair failed and we were unable to recover it. 
00:32:40.859 [2024-07-24 23:18:13.142735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.859 [2024-07-24 23:18:13.143079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.859 [2024-07-24 23:18:13.143098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.859 qpair failed and we were unable to recover it.
00:32:40.859 [2024-07-24 23:18:13.143414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.859 [2024-07-24 23:18:13.143622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.859 [2024-07-24 23:18:13.143663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.859 qpair failed and we were unable to recover it.
00:32:40.859 [2024-07-24 23:18:13.144009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.859 [2024-07-24 23:18:13.144406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.859 [2024-07-24 23:18:13.144425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.859 qpair failed and we were unable to recover it.
00:32:40.859 [2024-07-24 23:18:13.144683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.859 [2024-07-24 23:18:13.144993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.859 [2024-07-24 23:18:13.145015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.859 qpair failed and we were unable to recover it.
00:32:40.859 [2024-07-24 23:18:13.145217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.859 [2024-07-24 23:18:13.145537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.859 [2024-07-24 23:18:13.145555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.859 qpair failed and we were unable to recover it.
00:32:40.859 [2024-07-24 23:18:13.145847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.859 [2024-07-24 23:18:13.146111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.859 [2024-07-24 23:18:13.146152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.859 qpair failed and we were unable to recover it.
00:32:40.859 [2024-07-24 23:18:13.146535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.859 [2024-07-24 23:18:13.146875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.859 [2024-07-24 23:18:13.146895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.859 qpair failed and we were unable to recover it.
00:32:40.859 [2024-07-24 23:18:13.147185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.859 [2024-07-24 23:18:13.147515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.859 [2024-07-24 23:18:13.147534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.859 qpair failed and we were unable to recover it.
00:32:40.859 [2024-07-24 23:18:13.147776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.859 [2024-07-24 23:18:13.148094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.859 [2024-07-24 23:18:13.148113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.859 qpair failed and we were unable to recover it.
00:32:40.859 [2024-07-24 23:18:13.148447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.859 [2024-07-24 23:18:13.148808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.859 [2024-07-24 23:18:13.148827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.859 qpair failed and we were unable to recover it.
00:32:40.859 [2024-07-24 23:18:13.149098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.859 [2024-07-24 23:18:13.149301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.859 [2024-07-24 23:18:13.149320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.859 qpair failed and we were unable to recover it.
00:32:40.859 [2024-07-24 23:18:13.149593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.859 [2024-07-24 23:18:13.149925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.859 [2024-07-24 23:18:13.149945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.859 qpair failed and we were unable to recover it.
00:32:40.859 [2024-07-24 23:18:13.150295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.859 [2024-07-24 23:18:13.150553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.859 [2024-07-24 23:18:13.150572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.859 qpair failed and we were unable to recover it.
00:32:40.859 [2024-07-24 23:18:13.150852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.859 [2024-07-24 23:18:13.151097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.859 [2024-07-24 23:18:13.151137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.859 qpair failed and we were unable to recover it.
00:32:40.860 [2024-07-24 23:18:13.151464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.860 [2024-07-24 23:18:13.151861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.860 [2024-07-24 23:18:13.151903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.860 qpair failed and we were unable to recover it.
00:32:40.860 [2024-07-24 23:18:13.152158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.860 [2024-07-24 23:18:13.152460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.860 [2024-07-24 23:18:13.152479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.860 qpair failed and we were unable to recover it.
00:32:40.860 [2024-07-24 23:18:13.152813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.860 [2024-07-24 23:18:13.153132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.860 [2024-07-24 23:18:13.153151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.860 qpair failed and we were unable to recover it.
00:32:40.860 [2024-07-24 23:18:13.153493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.860 [2024-07-24 23:18:13.153824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.860 [2024-07-24 23:18:13.153844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.860 qpair failed and we were unable to recover it.
00:32:40.860 [2024-07-24 23:18:13.154054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.860 [2024-07-24 23:18:13.154266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.860 [2024-07-24 23:18:13.154285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.860 qpair failed and we were unable to recover it.
00:32:40.860 [2024-07-24 23:18:13.154555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.860 [2024-07-24 23:18:13.154856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.860 [2024-07-24 23:18:13.154875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.860 qpair failed and we were unable to recover it.
00:32:40.860 [2024-07-24 23:18:13.155153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.860 [2024-07-24 23:18:13.155333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.860 [2024-07-24 23:18:13.155351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.860 qpair failed and we were unable to recover it.
00:32:40.860 [2024-07-24 23:18:13.155672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.860 [2024-07-24 23:18:13.155918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.860 [2024-07-24 23:18:13.155937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.860 qpair failed and we were unable to recover it.
00:32:40.860 [2024-07-24 23:18:13.156113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.860 [2024-07-24 23:18:13.156388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.860 [2024-07-24 23:18:13.156427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.860 qpair failed and we were unable to recover it.
00:32:40.860 [2024-07-24 23:18:13.156824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.860 [2024-07-24 23:18:13.157086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.860 [2024-07-24 23:18:13.157129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.860 qpair failed and we were unable to recover it.
00:32:40.860 [2024-07-24 23:18:13.157523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.860 [2024-07-24 23:18:13.157851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.860 [2024-07-24 23:18:13.157870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.860 qpair failed and we were unable to recover it.
00:32:40.860 [2024-07-24 23:18:13.158115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.860 [2024-07-24 23:18:13.158388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.860 [2024-07-24 23:18:13.158407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.860 qpair failed and we were unable to recover it.
00:32:40.860 [2024-07-24 23:18:13.158762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.860 [2024-07-24 23:18:13.159012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.860 [2024-07-24 23:18:13.159053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.860 qpair failed and we were unable to recover it.
00:32:40.860 [2024-07-24 23:18:13.159372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.860 [2024-07-24 23:18:13.159712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.860 [2024-07-24 23:18:13.159762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.860 qpair failed and we were unable to recover it.
00:32:40.860 [2024-07-24 23:18:13.160067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.860 [2024-07-24 23:18:13.160387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.860 [2024-07-24 23:18:13.160406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.860 qpair failed and we were unable to recover it.
00:32:40.860 [2024-07-24 23:18:13.160687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.860 [2024-07-24 23:18:13.160973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.860 [2024-07-24 23:18:13.160993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.860 qpair failed and we were unable to recover it.
00:32:40.860 [2024-07-24 23:18:13.161279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.860 [2024-07-24 23:18:13.161623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.860 [2024-07-24 23:18:13.161642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.860 qpair failed and we were unable to recover it.
00:32:40.860 [2024-07-24 23:18:13.161910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.860 [2024-07-24 23:18:13.162199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.860 [2024-07-24 23:18:13.162240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.860 qpair failed and we were unable to recover it.
00:32:40.860 [2024-07-24 23:18:13.162569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.860 [2024-07-24 23:18:13.162887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.860 [2024-07-24 23:18:13.162929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.860 qpair failed and we were unable to recover it.
00:32:40.860 [2024-07-24 23:18:13.163291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.860 [2024-07-24 23:18:13.163653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.860 [2024-07-24 23:18:13.163672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.860 qpair failed and we were unable to recover it.
00:32:40.860 [2024-07-24 23:18:13.163938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.860 [2024-07-24 23:18:13.164214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.860 [2024-07-24 23:18:13.164233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.860 qpair failed and we were unable to recover it.
00:32:40.860 [2024-07-24 23:18:13.164526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.860 [2024-07-24 23:18:13.164883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.860 [2024-07-24 23:18:13.164902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.860 qpair failed and we were unable to recover it.
00:32:40.860 [2024-07-24 23:18:13.165085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.860 [2024-07-24 23:18:13.165403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.860 [2024-07-24 23:18:13.165443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.860 qpair failed and we were unable to recover it.
00:32:40.860 [2024-07-24 23:18:13.165809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.860 [2024-07-24 23:18:13.166061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.860 [2024-07-24 23:18:13.166101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.860 qpair failed and we were unable to recover it.
00:32:40.860 [2024-07-24 23:18:13.166342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.861 [2024-07-24 23:18:13.166596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.861 [2024-07-24 23:18:13.166614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.861 qpair failed and we were unable to recover it.
00:32:40.861 [2024-07-24 23:18:13.166957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.861 [2024-07-24 23:18:13.167240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.861 [2024-07-24 23:18:13.167259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.861 qpair failed and we were unable to recover it.
00:32:40.861 [2024-07-24 23:18:13.167565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.861 [2024-07-24 23:18:13.167965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.861 [2024-07-24 23:18:13.168007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.861 qpair failed and we were unable to recover it.
00:32:40.861 [2024-07-24 23:18:13.168266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.861 [2024-07-24 23:18:13.168680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.861 [2024-07-24 23:18:13.168748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.861 qpair failed and we were unable to recover it.
00:32:40.861 [2024-07-24 23:18:13.169069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.861 [2024-07-24 23:18:13.169491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.861 [2024-07-24 23:18:13.169510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.861 qpair failed and we were unable to recover it.
00:32:40.861 [2024-07-24 23:18:13.169852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.861 [2024-07-24 23:18:13.170144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.861 [2024-07-24 23:18:13.170164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.861 qpair failed and we were unable to recover it.
00:32:40.861 [2024-07-24 23:18:13.170435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.861 [2024-07-24 23:18:13.170755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.861 [2024-07-24 23:18:13.170774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.861 qpair failed and we were unable to recover it.
00:32:40.861 [2024-07-24 23:18:13.171087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.861 [2024-07-24 23:18:13.171299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.861 [2024-07-24 23:18:13.171318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.861 qpair failed and we were unable to recover it.
00:32:40.861 [2024-07-24 23:18:13.171613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.861 [2024-07-24 23:18:13.171902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.861 [2024-07-24 23:18:13.171922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.861 qpair failed and we were unable to recover it.
00:32:40.861 [2024-07-24 23:18:13.172260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.861 [2024-07-24 23:18:13.172618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.861 [2024-07-24 23:18:13.172636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.861 qpair failed and we were unable to recover it.
00:32:40.861 [2024-07-24 23:18:13.172976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.861 [2024-07-24 23:18:13.173336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.861 [2024-07-24 23:18:13.173377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.861 qpair failed and we were unable to recover it.
00:32:40.861 [2024-07-24 23:18:13.173708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.861 [2024-07-24 23:18:13.174037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.861 [2024-07-24 23:18:13.174078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.861 qpair failed and we were unable to recover it.
00:32:40.861 [2024-07-24 23:18:13.174446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.861 [2024-07-24 23:18:13.174795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.861 [2024-07-24 23:18:13.174814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.861 qpair failed and we were unable to recover it.
00:32:40.861 [2024-07-24 23:18:13.175077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.861 [2024-07-24 23:18:13.175320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.861 [2024-07-24 23:18:13.175339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.861 qpair failed and we were unable to recover it.
00:32:40.861 [2024-07-24 23:18:13.175629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.861 [2024-07-24 23:18:13.175831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.861 [2024-07-24 23:18:13.175850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.861 qpair failed and we were unable to recover it.
00:32:40.861 [2024-07-24 23:18:13.176163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.861 [2024-07-24 23:18:13.176396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.861 [2024-07-24 23:18:13.176437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.861 qpair failed and we were unable to recover it.
00:32:40.861 [2024-07-24 23:18:13.176811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.861 [2024-07-24 23:18:13.177163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.861 [2024-07-24 23:18:13.177211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.861 qpair failed and we were unable to recover it.
00:32:40.861 [2024-07-24 23:18:13.177560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.861 [2024-07-24 23:18:13.177879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.861 [2024-07-24 23:18:13.177898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.861 qpair failed and we were unable to recover it.
00:32:40.861 [2024-07-24 23:18:13.178225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.861 [2024-07-24 23:18:13.178524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.861 [2024-07-24 23:18:13.178543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.861 qpair failed and we were unable to recover it.
00:32:40.861 [2024-07-24 23:18:13.178795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.861 [2024-07-24 23:18:13.179053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.861 [2024-07-24 23:18:13.179072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.861 qpair failed and we were unable to recover it.
00:32:40.861 [2024-07-24 23:18:13.179335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.861 [2024-07-24 23:18:13.179646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.861 [2024-07-24 23:18:13.179665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.861 qpair failed and we were unable to recover it.
00:32:40.861 [2024-07-24 23:18:13.179925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.861 [2024-07-24 23:18:13.180174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.861 [2024-07-24 23:18:13.180193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.861 qpair failed and we were unable to recover it.
00:32:40.861 [2024-07-24 23:18:13.180447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.861 [2024-07-24 23:18:13.180798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.861 [2024-07-24 23:18:13.180818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.861 qpair failed and we were unable to recover it.
00:32:40.861 [2024-07-24 23:18:13.181073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.861 [2024-07-24 23:18:13.181333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.861 [2024-07-24 23:18:13.181352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.861 qpair failed and we were unable to recover it.
00:32:40.861 [2024-07-24 23:18:13.181677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.861 [2024-07-24 23:18:13.181949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.861 [2024-07-24 23:18:13.181968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.861 qpair failed and we were unable to recover it.
00:32:40.861 [2024-07-24 23:18:13.182192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.861 [2024-07-24 23:18:13.182465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.861 [2024-07-24 23:18:13.182484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.861 qpair failed and we were unable to recover it.
00:32:40.861 [2024-07-24 23:18:13.182752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.861 [2024-07-24 23:18:13.183029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.861 [2024-07-24 23:18:13.183048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.861 qpair failed and we were unable to recover it.
00:32:40.861 [2024-07-24 23:18:13.183297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.861 [2024-07-24 23:18:13.183497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.861 [2024-07-24 23:18:13.183515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.861 qpair failed and we were unable to recover it.
00:32:40.861 [2024-07-24 23:18:13.183874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.861 [2024-07-24 23:18:13.184139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.861 [2024-07-24 23:18:13.184157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.862 qpair failed and we were unable to recover it.
00:32:40.862 [2024-07-24 23:18:13.184506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.862 [2024-07-24 23:18:13.184865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.862 [2024-07-24 23:18:13.184884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.862 qpair failed and we were unable to recover it.
00:32:40.862 [2024-07-24 23:18:13.185172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.862 [2024-07-24 23:18:13.185453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.862 [2024-07-24 23:18:13.185472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.862 qpair failed and we were unable to recover it.
00:32:40.862 [2024-07-24 23:18:13.185787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.862 [2024-07-24 23:18:13.186124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.862 [2024-07-24 23:18:13.186143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.862 qpair failed and we were unable to recover it.
00:32:40.862 [2024-07-24 23:18:13.186407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.862 [2024-07-24 23:18:13.186654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.862 [2024-07-24 23:18:13.186673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.862 qpair failed and we were unable to recover it.
00:32:40.862 [2024-07-24 23:18:13.186927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.862 [2024-07-24 23:18:13.187266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.862 [2024-07-24 23:18:13.187284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:40.862 qpair failed and we were unable to recover it.
00:32:40.862 [2024-07-24 23:18:13.187650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.862 [2024-07-24 23:18:13.187898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.862 [2024-07-24 23:18:13.187917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.862 qpair failed and we were unable to recover it. 00:32:40.862 [2024-07-24 23:18:13.188176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.862 [2024-07-24 23:18:13.188445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.862 [2024-07-24 23:18:13.188464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.862 qpair failed and we were unable to recover it. 00:32:40.862 [2024-07-24 23:18:13.188748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.862 [2024-07-24 23:18:13.189009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.862 [2024-07-24 23:18:13.189028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.862 qpair failed and we were unable to recover it. 00:32:40.862 [2024-07-24 23:18:13.189297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.862 [2024-07-24 23:18:13.189537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.862 [2024-07-24 23:18:13.189555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.862 qpair failed and we were unable to recover it. 
00:32:40.862 [2024-07-24 23:18:13.189816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.862 [2024-07-24 23:18:13.190153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.862 [2024-07-24 23:18:13.190172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.862 qpair failed and we were unable to recover it. 00:32:40.862 [2024-07-24 23:18:13.190535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.862 [2024-07-24 23:18:13.190743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.862 [2024-07-24 23:18:13.190762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.862 qpair failed and we were unable to recover it. 00:32:40.862 [2024-07-24 23:18:13.190950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.862 [2024-07-24 23:18:13.191234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.862 [2024-07-24 23:18:13.191253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.862 qpair failed and we were unable to recover it. 00:32:40.862 [2024-07-24 23:18:13.191507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.862 [2024-07-24 23:18:13.191770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.862 [2024-07-24 23:18:13.191788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.862 qpair failed and we were unable to recover it. 
00:32:40.862 [2024-07-24 23:18:13.192103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.862 [2024-07-24 23:18:13.192381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.862 [2024-07-24 23:18:13.192399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.862 qpair failed and we were unable to recover it. 00:32:40.862 [2024-07-24 23:18:13.192600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.862 [2024-07-24 23:18:13.192862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.862 [2024-07-24 23:18:13.192881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.862 qpair failed and we were unable to recover it. 00:32:40.862 [2024-07-24 23:18:13.193182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.862 [2024-07-24 23:18:13.193398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.862 [2024-07-24 23:18:13.193418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.862 qpair failed and we were unable to recover it. 00:32:40.862 [2024-07-24 23:18:13.193762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.862 [2024-07-24 23:18:13.194099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.862 [2024-07-24 23:18:13.194118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.862 qpair failed and we were unable to recover it. 
00:32:40.862 [2024-07-24 23:18:13.194471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.862 [2024-07-24 23:18:13.194823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.862 [2024-07-24 23:18:13.194841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.862 qpair failed and we were unable to recover it. 00:32:40.862 [2024-07-24 23:18:13.195187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.862 [2024-07-24 23:18:13.195502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.862 [2024-07-24 23:18:13.195520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.862 qpair failed and we were unable to recover it. 00:32:40.862 [2024-07-24 23:18:13.195824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.862 [2024-07-24 23:18:13.196177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.862 [2024-07-24 23:18:13.196195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.862 qpair failed and we were unable to recover it. 00:32:40.862 [2024-07-24 23:18:13.196421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.862 [2024-07-24 23:18:13.196618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.862 [2024-07-24 23:18:13.196636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.862 qpair failed and we were unable to recover it. 
00:32:40.862 [2024-07-24 23:18:13.196999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.862 [2024-07-24 23:18:13.197276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.862 [2024-07-24 23:18:13.197294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.862 qpair failed and we were unable to recover it. 00:32:40.862 [2024-07-24 23:18:13.197632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.862 [2024-07-24 23:18:13.197878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.862 [2024-07-24 23:18:13.197897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.862 qpair failed and we were unable to recover it. 00:32:40.862 [2024-07-24 23:18:13.198217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.862 [2024-07-24 23:18:13.198554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.862 [2024-07-24 23:18:13.198572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.862 qpair failed and we were unable to recover it. 00:32:40.862 [2024-07-24 23:18:13.198931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.862 [2024-07-24 23:18:13.199273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.862 [2024-07-24 23:18:13.199292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.862 qpair failed and we were unable to recover it. 
00:32:40.862 [2024-07-24 23:18:13.199578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.862 [2024-07-24 23:18:13.199771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.862 [2024-07-24 23:18:13.199791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.862 qpair failed and we were unable to recover it. 00:32:40.862 [2024-07-24 23:18:13.200053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.862 [2024-07-24 23:18:13.200368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.862 [2024-07-24 23:18:13.200388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.862 qpair failed and we were unable to recover it. 00:32:40.862 [2024-07-24 23:18:13.200659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.862 [2024-07-24 23:18:13.200904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.862 [2024-07-24 23:18:13.200923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.862 qpair failed and we were unable to recover it. 00:32:40.862 [2024-07-24 23:18:13.201242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.862 [2024-07-24 23:18:13.201606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.863 [2024-07-24 23:18:13.201625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.863 qpair failed and we were unable to recover it. 
00:32:40.863 [2024-07-24 23:18:13.201963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.863 [2024-07-24 23:18:13.202222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.863 [2024-07-24 23:18:13.202242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.863 qpair failed and we were unable to recover it. 00:32:40.863 [2024-07-24 23:18:13.202435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.863 [2024-07-24 23:18:13.202768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.863 [2024-07-24 23:18:13.202787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.863 qpair failed and we were unable to recover it. 00:32:40.863 [2024-07-24 23:18:13.203019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.863 [2024-07-24 23:18:13.203345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.863 [2024-07-24 23:18:13.203364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.863 qpair failed and we were unable to recover it. 00:32:40.863 [2024-07-24 23:18:13.203607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.863 [2024-07-24 23:18:13.203873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.863 [2024-07-24 23:18:13.203892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.863 qpair failed and we were unable to recover it. 
00:32:40.863 [2024-07-24 23:18:13.204214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.863 [2024-07-24 23:18:13.204472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.863 [2024-07-24 23:18:13.204491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.863 qpair failed and we were unable to recover it. 00:32:40.863 [2024-07-24 23:18:13.204820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.863 [2024-07-24 23:18:13.205012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.863 [2024-07-24 23:18:13.205031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.863 qpair failed and we were unable to recover it. 00:32:40.863 [2024-07-24 23:18:13.205347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.863 [2024-07-24 23:18:13.205671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.863 [2024-07-24 23:18:13.205689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.863 qpair failed and we were unable to recover it. 00:32:40.863 [2024-07-24 23:18:13.206072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.863 [2024-07-24 23:18:13.206286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.863 [2024-07-24 23:18:13.206304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.863 qpair failed and we were unable to recover it. 
00:32:40.863 [2024-07-24 23:18:13.206612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.863 [2024-07-24 23:18:13.206868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.863 [2024-07-24 23:18:13.206887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.863 qpair failed and we were unable to recover it. 00:32:40.863 [2024-07-24 23:18:13.207214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.863 [2024-07-24 23:18:13.207536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.863 [2024-07-24 23:18:13.207558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.863 qpair failed and we were unable to recover it. 00:32:40.863 [2024-07-24 23:18:13.207758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.863 [2024-07-24 23:18:13.208081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.863 [2024-07-24 23:18:13.208100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.863 qpair failed and we were unable to recover it. 00:32:40.863 [2024-07-24 23:18:13.208451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.863 [2024-07-24 23:18:13.208796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.863 [2024-07-24 23:18:13.208816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.863 qpair failed and we were unable to recover it. 
00:32:40.863 [2024-07-24 23:18:13.209127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.863 [2024-07-24 23:18:13.209459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.863 [2024-07-24 23:18:13.209478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.863 qpair failed and we were unable to recover it. 00:32:40.863 [2024-07-24 23:18:13.209854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.863 [2024-07-24 23:18:13.210185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.863 [2024-07-24 23:18:13.210204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.863 qpair failed and we were unable to recover it. 00:32:40.863 [2024-07-24 23:18:13.210481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.863 [2024-07-24 23:18:13.210818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.863 [2024-07-24 23:18:13.210838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.863 qpair failed and we were unable to recover it. 00:32:40.863 [2024-07-24 23:18:13.211176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.863 [2024-07-24 23:18:13.211539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.863 [2024-07-24 23:18:13.211557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.863 qpair failed and we were unable to recover it. 
00:32:40.863 [2024-07-24 23:18:13.211875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.863 [2024-07-24 23:18:13.212209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.863 [2024-07-24 23:18:13.212228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.863 qpair failed and we were unable to recover it. 00:32:40.863 [2024-07-24 23:18:13.212588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.863 [2024-07-24 23:18:13.212846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.863 [2024-07-24 23:18:13.212865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.863 qpair failed and we were unable to recover it. 00:32:40.863 [2024-07-24 23:18:13.213132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.863 [2024-07-24 23:18:13.213466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.863 [2024-07-24 23:18:13.213485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.863 qpair failed and we were unable to recover it. 00:32:40.863 [2024-07-24 23:18:13.213847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.863 [2024-07-24 23:18:13.214187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.863 [2024-07-24 23:18:13.214209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.863 qpair failed and we were unable to recover it. 
00:32:40.863 [2024-07-24 23:18:13.214573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.863 [2024-07-24 23:18:13.214932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.863 [2024-07-24 23:18:13.214951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.863 qpair failed and we were unable to recover it. 00:32:40.863 [2024-07-24 23:18:13.215290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.863 [2024-07-24 23:18:13.215564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.863 [2024-07-24 23:18:13.215582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.863 qpair failed and we were unable to recover it. 00:32:40.863 [2024-07-24 23:18:13.215940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.863 [2024-07-24 23:18:13.216199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.863 [2024-07-24 23:18:13.216219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.863 qpair failed and we were unable to recover it. 00:32:40.863 [2024-07-24 23:18:13.216548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.863 [2024-07-24 23:18:13.216866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.863 [2024-07-24 23:18:13.216885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.863 qpair failed and we were unable to recover it. 
00:32:40.863 [2024-07-24 23:18:13.217154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.863 [2024-07-24 23:18:13.217428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.863 [2024-07-24 23:18:13.217446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.863 qpair failed and we were unable to recover it. 00:32:40.863 [2024-07-24 23:18:13.217711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.863 [2024-07-24 23:18:13.218000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.863 [2024-07-24 23:18:13.218020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.863 qpair failed and we were unable to recover it. 00:32:40.863 [2024-07-24 23:18:13.218285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.863 [2024-07-24 23:18:13.218618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.863 [2024-07-24 23:18:13.218637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.863 qpair failed and we were unable to recover it. 00:32:40.863 [2024-07-24 23:18:13.218992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.863 [2024-07-24 23:18:13.219321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.863 [2024-07-24 23:18:13.219340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.863 qpair failed and we were unable to recover it. 
00:32:40.863 [2024-07-24 23:18:13.219676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.863 [2024-07-24 23:18:13.220015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.864 [2024-07-24 23:18:13.220035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.864 qpair failed and we were unable to recover it. 00:32:40.864 [2024-07-24 23:18:13.220290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.864 [2024-07-24 23:18:13.220661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.864 [2024-07-24 23:18:13.220679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.864 qpair failed and we were unable to recover it. 00:32:40.864 [2024-07-24 23:18:13.221015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.864 [2024-07-24 23:18:13.221353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.864 [2024-07-24 23:18:13.221372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.864 qpair failed and we were unable to recover it. 00:32:40.864 [2024-07-24 23:18:13.221737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.864 [2024-07-24 23:18:13.222082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.864 [2024-07-24 23:18:13.222101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.864 qpair failed and we were unable to recover it. 
00:32:40.864 [2024-07-24 23:18:13.222365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.864 [2024-07-24 23:18:13.222652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.864 [2024-07-24 23:18:13.222671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.864 qpair failed and we were unable to recover it. 00:32:40.864 [2024-07-24 23:18:13.222986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.864 [2024-07-24 23:18:13.223313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.864 [2024-07-24 23:18:13.223331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.864 qpair failed and we were unable to recover it. 00:32:40.864 [2024-07-24 23:18:13.223645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.864 [2024-07-24 23:18:13.223979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.864 [2024-07-24 23:18:13.223998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.864 qpair failed and we were unable to recover it. 00:32:40.864 [2024-07-24 23:18:13.224320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.864 [2024-07-24 23:18:13.224566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.864 [2024-07-24 23:18:13.224608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.864 qpair failed and we were unable to recover it. 
00:32:40.864 [2024-07-24 23:18:13.224994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.864 [2024-07-24 23:18:13.225392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.864 [2024-07-24 23:18:13.225433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.864 qpair failed and we were unable to recover it. 00:32:40.864 [2024-07-24 23:18:13.225827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.864 [2024-07-24 23:18:13.226159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.864 [2024-07-24 23:18:13.226178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.864 qpair failed and we were unable to recover it. 00:32:40.864 [2024-07-24 23:18:13.226495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.864 [2024-07-24 23:18:13.226773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.864 [2024-07-24 23:18:13.226792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.864 qpair failed and we were unable to recover it. 00:32:40.864 [2024-07-24 23:18:13.227060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.864 [2024-07-24 23:18:13.227370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.864 [2024-07-24 23:18:13.227389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.864 qpair failed and we were unable to recover it. 
00:32:40.864 [2024-07-24 23:18:13.227633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.864 [2024-07-24 23:18:13.227967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.864 [2024-07-24 23:18:13.227986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.864 qpair failed and we were unable to recover it. 00:32:40.864 [2024-07-24 23:18:13.228306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.864 [2024-07-24 23:18:13.228584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.864 [2024-07-24 23:18:13.228603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.864 qpair failed and we were unable to recover it. 00:32:40.864 [2024-07-24 23:18:13.228953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.864 [2024-07-24 23:18:13.229271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.864 [2024-07-24 23:18:13.229290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.864 qpair failed and we were unable to recover it. 00:32:40.864 [2024-07-24 23:18:13.229637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.864 [2024-07-24 23:18:13.229992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.864 [2024-07-24 23:18:13.230012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:40.864 qpair failed and we were unable to recover it. 
00:32:41.144 [2024-07-24 23:18:13.284339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.144 [2024-07-24 23:18:13.284677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.144 [2024-07-24 23:18:13.284694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.144 qpair failed and we were unable to recover it. 00:32:41.144 [2024-07-24 23:18:13.285048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.144 [2024-07-24 23:18:13.285255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.144 [2024-07-24 23:18:13.285271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.144 qpair failed and we were unable to recover it. 00:32:41.144 [2024-07-24 23:18:13.285623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.144 [2024-07-24 23:18:13.285824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.144 [2024-07-24 23:18:13.285841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.144 qpair failed and we were unable to recover it. 00:32:41.144 [2024-07-24 23:18:13.286025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.144 [2024-07-24 23:18:13.286354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.144 [2024-07-24 23:18:13.286371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.144 qpair failed and we were unable to recover it. 
00:32:41.144 [2024-07-24 23:18:13.286600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.144 [2024-07-24 23:18:13.286880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.144 [2024-07-24 23:18:13.286898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.144 qpair failed and we were unable to recover it. 00:32:41.144 [2024-07-24 23:18:13.287110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.144 [2024-07-24 23:18:13.287390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.144 [2024-07-24 23:18:13.287407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.144 qpair failed and we were unable to recover it. 00:32:41.144 [2024-07-24 23:18:13.287678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.144 [2024-07-24 23:18:13.288037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.144 [2024-07-24 23:18:13.288055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.144 qpair failed and we were unable to recover it. 00:32:41.144 [2024-07-24 23:18:13.288399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.144 [2024-07-24 23:18:13.288656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.144 [2024-07-24 23:18:13.288672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.144 qpair failed and we were unable to recover it. 
00:32:41.144 [2024-07-24 23:18:13.289051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.144 [2024-07-24 23:18:13.289240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.144 [2024-07-24 23:18:13.289256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.144 qpair failed and we were unable to recover it. 00:32:41.144 [2024-07-24 23:18:13.289533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.144 [2024-07-24 23:18:13.289924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.144 [2024-07-24 23:18:13.289942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.144 qpair failed and we were unable to recover it. 00:32:41.144 [2024-07-24 23:18:13.290204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.144 [2024-07-24 23:18:13.290467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.144 [2024-07-24 23:18:13.290484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.144 qpair failed and we were unable to recover it. 00:32:41.144 [2024-07-24 23:18:13.290812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.144 [2024-07-24 23:18:13.291148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.144 [2024-07-24 23:18:13.291165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.144 qpair failed and we were unable to recover it. 
00:32:41.144 [2024-07-24 23:18:13.291422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.144 [2024-07-24 23:18:13.291755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.144 [2024-07-24 23:18:13.291773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.144 qpair failed and we were unable to recover it. 00:32:41.144 [2024-07-24 23:18:13.292033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.144 [2024-07-24 23:18:13.292285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.144 [2024-07-24 23:18:13.292301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.144 qpair failed and we were unable to recover it. 00:32:41.144 [2024-07-24 23:18:13.292571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.144 [2024-07-24 23:18:13.292823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.144 [2024-07-24 23:18:13.292840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.144 qpair failed and we were unable to recover it. 00:32:41.144 [2024-07-24 23:18:13.293024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.144 [2024-07-24 23:18:13.293335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.144 [2024-07-24 23:18:13.293352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.144 qpair failed and we were unable to recover it. 
00:32:41.144 [2024-07-24 23:18:13.293690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.144 [2024-07-24 23:18:13.293968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.144 [2024-07-24 23:18:13.293986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.144 qpair failed and we were unable to recover it. 00:32:41.144 [2024-07-24 23:18:13.294328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.144 [2024-07-24 23:18:13.294592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.144 [2024-07-24 23:18:13.294610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.144 qpair failed and we were unable to recover it. 00:32:41.145 [2024-07-24 23:18:13.294802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.145 [2024-07-24 23:18:13.295135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.145 [2024-07-24 23:18:13.295152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.145 qpair failed and we were unable to recover it. 00:32:41.145 [2024-07-24 23:18:13.295425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.145 [2024-07-24 23:18:13.295758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.145 [2024-07-24 23:18:13.295774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.145 qpair failed and we were unable to recover it. 
00:32:41.145 [2024-07-24 23:18:13.296112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.145 [2024-07-24 23:18:13.296467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.145 [2024-07-24 23:18:13.296483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.145 qpair failed and we were unable to recover it. 00:32:41.145 [2024-07-24 23:18:13.296758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.145 [2024-07-24 23:18:13.297043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.145 [2024-07-24 23:18:13.297059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.145 qpair failed and we were unable to recover it. 00:32:41.145 [2024-07-24 23:18:13.297342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.145 [2024-07-24 23:18:13.297700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.145 [2024-07-24 23:18:13.297728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.145 qpair failed and we were unable to recover it. 00:32:41.145 [2024-07-24 23:18:13.297939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.145 [2024-07-24 23:18:13.298251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.145 [2024-07-24 23:18:13.298268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.145 qpair failed and we were unable to recover it. 
00:32:41.145 [2024-07-24 23:18:13.298552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.145 [2024-07-24 23:18:13.298888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.145 [2024-07-24 23:18:13.298906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.145 qpair failed and we were unable to recover it. 00:32:41.145 [2024-07-24 23:18:13.299081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.145 [2024-07-24 23:18:13.299329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.145 [2024-07-24 23:18:13.299346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.145 qpair failed and we were unable to recover it. 00:32:41.145 [2024-07-24 23:18:13.299610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.145 [2024-07-24 23:18:13.299941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.145 [2024-07-24 23:18:13.299959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.145 qpair failed and we were unable to recover it. 00:32:41.145 [2024-07-24 23:18:13.300336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.145 [2024-07-24 23:18:13.300592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.145 [2024-07-24 23:18:13.300609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.145 qpair failed and we were unable to recover it. 
00:32:41.145 [2024-07-24 23:18:13.300969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.145 [2024-07-24 23:18:13.301308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.145 [2024-07-24 23:18:13.301325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.145 qpair failed and we were unable to recover it. 00:32:41.145 [2024-07-24 23:18:13.301611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.145 [2024-07-24 23:18:13.301906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.145 [2024-07-24 23:18:13.301924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.145 qpair failed and we were unable to recover it. 00:32:41.145 [2024-07-24 23:18:13.302242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.145 [2024-07-24 23:18:13.302529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.145 [2024-07-24 23:18:13.302546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.145 qpair failed and we were unable to recover it. 00:32:41.145 [2024-07-24 23:18:13.302818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.145 [2024-07-24 23:18:13.303078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.145 [2024-07-24 23:18:13.303096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.145 qpair failed and we were unable to recover it. 
00:32:41.145 [2024-07-24 23:18:13.303408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.145 [2024-07-24 23:18:13.303747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.145 [2024-07-24 23:18:13.303765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.145 qpair failed and we were unable to recover it. 00:32:41.145 [2024-07-24 23:18:13.304009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.145 [2024-07-24 23:18:13.304291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.145 [2024-07-24 23:18:13.304308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.145 qpair failed and we were unable to recover it. 00:32:41.145 [2024-07-24 23:18:13.304647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.145 [2024-07-24 23:18:13.304942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.145 [2024-07-24 23:18:13.304960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.145 qpair failed and we were unable to recover it. 00:32:41.145 [2024-07-24 23:18:13.305226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.145 [2024-07-24 23:18:13.305559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.145 [2024-07-24 23:18:13.305576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.145 qpair failed and we were unable to recover it. 
00:32:41.145 [2024-07-24 23:18:13.305857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.145 [2024-07-24 23:18:13.306120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.145 [2024-07-24 23:18:13.306137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.145 qpair failed and we were unable to recover it. 00:32:41.145 [2024-07-24 23:18:13.306431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.145 [2024-07-24 23:18:13.306766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.145 [2024-07-24 23:18:13.306784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.145 qpair failed and we were unable to recover it. 00:32:41.145 [2024-07-24 23:18:13.307146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.145 [2024-07-24 23:18:13.307459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.145 [2024-07-24 23:18:13.307475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.145 qpair failed and we were unable to recover it. 00:32:41.145 [2024-07-24 23:18:13.307663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.145 [2024-07-24 23:18:13.308004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.145 [2024-07-24 23:18:13.308022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.145 qpair failed and we were unable to recover it. 
00:32:41.145 [2024-07-24 23:18:13.308298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.145 [2024-07-24 23:18:13.308580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.145 [2024-07-24 23:18:13.308597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.145 qpair failed and we were unable to recover it. 00:32:41.145 [2024-07-24 23:18:13.308860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.145 [2024-07-24 23:18:13.309133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.145 [2024-07-24 23:18:13.309151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.145 qpair failed and we were unable to recover it. 00:32:41.145 [2024-07-24 23:18:13.309517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.145 [2024-07-24 23:18:13.309853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.145 [2024-07-24 23:18:13.309872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.145 qpair failed and we were unable to recover it. 00:32:41.145 [2024-07-24 23:18:13.310209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.145 [2024-07-24 23:18:13.310460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.145 [2024-07-24 23:18:13.310477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.145 qpair failed and we were unable to recover it. 
00:32:41.145 [2024-07-24 23:18:13.310747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.145 [2024-07-24 23:18:13.311063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.145 [2024-07-24 23:18:13.311083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.145 qpair failed and we were unable to recover it. 00:32:41.145 [2024-07-24 23:18:13.311430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.145 [2024-07-24 23:18:13.311687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.145 [2024-07-24 23:18:13.311705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.145 qpair failed and we were unable to recover it. 00:32:41.145 [2024-07-24 23:18:13.311980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.145 [2024-07-24 23:18:13.312340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.145 [2024-07-24 23:18:13.312357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.145 qpair failed and we were unable to recover it. 00:32:41.145 [2024-07-24 23:18:13.312705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.145 [2024-07-24 23:18:13.313053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.145 [2024-07-24 23:18:13.313071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.145 qpair failed and we were unable to recover it. 
00:32:41.145 [2024-07-24 23:18:13.313433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.145 [2024-07-24 23:18:13.313697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.145 [2024-07-24 23:18:13.313722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.145 qpair failed and we were unable to recover it. 00:32:41.145 [2024-07-24 23:18:13.314018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.145 [2024-07-24 23:18:13.314384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.145 [2024-07-24 23:18:13.314402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.145 qpair failed and we were unable to recover it. 00:32:41.145 [2024-07-24 23:18:13.314599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.145 [2024-07-24 23:18:13.314874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.145 [2024-07-24 23:18:13.314892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.145 qpair failed and we were unable to recover it. 00:32:41.145 [2024-07-24 23:18:13.315202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.145 [2024-07-24 23:18:13.315444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.145 [2024-07-24 23:18:13.315461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.145 qpair failed and we were unable to recover it. 
00:32:41.145 [2024-07-24 23:18:13.315685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.145 [2024-07-24 23:18:13.316044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.145 [2024-07-24 23:18:13.316061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.145 qpair failed and we were unable to recover it. 00:32:41.145 [2024-07-24 23:18:13.316320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.145 [2024-07-24 23:18:13.316660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.145 [2024-07-24 23:18:13.316677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.145 qpair failed and we were unable to recover it. 00:32:41.145 [2024-07-24 23:18:13.316977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.145 [2024-07-24 23:18:13.317268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.145 [2024-07-24 23:18:13.317288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.145 qpair failed and we were unable to recover it. 00:32:41.145 [2024-07-24 23:18:13.317643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.145 [2024-07-24 23:18:13.317919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.145 [2024-07-24 23:18:13.317936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.145 qpair failed and we were unable to recover it. 
00:32:41.145 [2024-07-24 23:18:13.318265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.145 [2024-07-24 23:18:13.318551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.145 [2024-07-24 23:18:13.318568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.145 qpair failed and we were unable to recover it. 00:32:41.145 [2024-07-24 23:18:13.318925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.145 [2024-07-24 23:18:13.319238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.145 [2024-07-24 23:18:13.319255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.145 qpair failed and we were unable to recover it. 00:32:41.145 [2024-07-24 23:18:13.319596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.145 [2024-07-24 23:18:13.319952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.145 [2024-07-24 23:18:13.319969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.145 qpair failed and we were unable to recover it. 00:32:41.145 [2024-07-24 23:18:13.320302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.145 [2024-07-24 23:18:13.320589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.145 [2024-07-24 23:18:13.320606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.145 qpair failed and we were unable to recover it. 
00:32:41.145 [2024-07-24 23:18:13.320928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.145 [2024-07-24 23:18:13.321195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.145 [2024-07-24 23:18:13.321212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.145 qpair failed and we were unable to recover it. 00:32:41.145 [2024-07-24 23:18:13.321587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.145 [2024-07-24 23:18:13.321846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.145 [2024-07-24 23:18:13.321864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.145 qpair failed and we were unable to recover it. 00:32:41.145 [2024-07-24 23:18:13.322193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.145 [2024-07-24 23:18:13.322526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.145 [2024-07-24 23:18:13.322543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.145 qpair failed and we were unable to recover it. 00:32:41.145 [2024-07-24 23:18:13.322808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.145 [2024-07-24 23:18:13.323014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.145 [2024-07-24 23:18:13.323031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.145 qpair failed and we were unable to recover it. 
00:32:41.145 [2024-07-24 23:18:13.323300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.145 [2024-07-24 23:18:13.323673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.145 [2024-07-24 23:18:13.323690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.145 qpair failed and we were unable to recover it. 00:32:41.145 [2024-07-24 23:18:13.324061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.145 [2024-07-24 23:18:13.324394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.145 [2024-07-24 23:18:13.324411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.145 qpair failed and we were unable to recover it. 00:32:41.145 [2024-07-24 23:18:13.324722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.145 [2024-07-24 23:18:13.325073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.145 [2024-07-24 23:18:13.325090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.145 qpair failed and we were unable to recover it. 00:32:41.145 [2024-07-24 23:18:13.325451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.145 [2024-07-24 23:18:13.325786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.145 [2024-07-24 23:18:13.325814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.145 qpair failed and we were unable to recover it. 
00:32:41.145 [2024-07-24 23:18:13.326175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.146 [2024-07-24 23:18:13.326527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.146 [2024-07-24 23:18:13.326544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.146 qpair failed and we were unable to recover it. 00:32:41.146 [2024-07-24 23:18:13.326898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.146 [2024-07-24 23:18:13.327247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.146 [2024-07-24 23:18:13.327264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.146 qpair failed and we were unable to recover it. 00:32:41.146 [2024-07-24 23:18:13.327593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.146 [2024-07-24 23:18:13.327855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.146 [2024-07-24 23:18:13.327872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.146 qpair failed and we were unable to recover it. 00:32:41.146 [2024-07-24 23:18:13.328188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.146 [2024-07-24 23:18:13.328477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.146 [2024-07-24 23:18:13.328494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.146 qpair failed and we were unable to recover it. 
00:32:41.146 [2024-07-24 23:18:13.328850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.146 [2024-07-24 23:18:13.329184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.146 [2024-07-24 23:18:13.329201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.146 qpair failed and we were unable to recover it. 00:32:41.146 [2024-07-24 23:18:13.329502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.146 [2024-07-24 23:18:13.329839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.146 [2024-07-24 23:18:13.329857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.146 qpair failed and we were unable to recover it. 00:32:41.146 [2024-07-24 23:18:13.330111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.146 [2024-07-24 23:18:13.330408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.146 [2024-07-24 23:18:13.330425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.146 qpair failed and we were unable to recover it. 00:32:41.146 [2024-07-24 23:18:13.330676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.146 [2024-07-24 23:18:13.330925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.146 [2024-07-24 23:18:13.330943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.146 qpair failed and we were unable to recover it. 
00:32:41.146 [2024-07-24 23:18:13.331238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.146 [2024-07-24 23:18:13.331516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.146 [2024-07-24 23:18:13.331532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.146 qpair failed and we were unable to recover it. 00:32:41.146 [2024-07-24 23:18:13.331789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.146 [2024-07-24 23:18:13.331989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.146 [2024-07-24 23:18:13.332007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.146 qpair failed and we were unable to recover it. 00:32:41.146 [2024-07-24 23:18:13.332240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.146 [2024-07-24 23:18:13.332487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.146 [2024-07-24 23:18:13.332504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.146 qpair failed and we were unable to recover it. 00:32:41.146 [2024-07-24 23:18:13.332871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.146 [2024-07-24 23:18:13.333214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.146 [2024-07-24 23:18:13.333233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.146 qpair failed and we were unable to recover it. 
00:32:41.146 [2024-07-24 23:18:13.333599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.146 [2024-07-24 23:18:13.333800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.146 [2024-07-24 23:18:13.333818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.146 qpair failed and we were unable to recover it. 00:32:41.146 [2024-07-24 23:18:13.334132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.146 [2024-07-24 23:18:13.334388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.146 [2024-07-24 23:18:13.334405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.146 qpair failed and we were unable to recover it. 00:32:41.146 [2024-07-24 23:18:13.334696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.146 [2024-07-24 23:18:13.334986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.146 [2024-07-24 23:18:13.335004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.146 qpair failed and we were unable to recover it. 00:32:41.146 [2024-07-24 23:18:13.335319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.146 [2024-07-24 23:18:13.335580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.146 [2024-07-24 23:18:13.335596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.146 qpair failed and we were unable to recover it. 
00:32:41.146 [2024-07-24 23:18:13.335926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.146 [2024-07-24 23:18:13.336169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.146 [2024-07-24 23:18:13.336186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.146 qpair failed and we were unable to recover it. 00:32:41.146 [2024-07-24 23:18:13.336493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.146 [2024-07-24 23:18:13.336741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.146 [2024-07-24 23:18:13.336758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.146 qpair failed and we were unable to recover it. 00:32:41.146 [2024-07-24 23:18:13.337096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.146 [2024-07-24 23:18:13.337301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.146 [2024-07-24 23:18:13.337318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.146 qpair failed and we were unable to recover it. 00:32:41.146 [2024-07-24 23:18:13.337651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.146 [2024-07-24 23:18:13.337835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.146 [2024-07-24 23:18:13.337851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.146 qpair failed and we were unable to recover it. 
00:32:41.146 [2024-07-24 23:18:13.338043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.146 [2024-07-24 23:18:13.338378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.146 [2024-07-24 23:18:13.338394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.146 qpair failed and we were unable to recover it. 00:32:41.146 [2024-07-24 23:18:13.338734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.146 [2024-07-24 23:18:13.339014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.146 [2024-07-24 23:18:13.339031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.146 qpair failed and we were unable to recover it. 00:32:41.146 [2024-07-24 23:18:13.339346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.146 [2024-07-24 23:18:13.339603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.146 [2024-07-24 23:18:13.339620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.146 qpair failed and we were unable to recover it. 00:32:41.146 [2024-07-24 23:18:13.339913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.146 [2024-07-24 23:18:13.340198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.146 [2024-07-24 23:18:13.340215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.146 qpair failed and we were unable to recover it. 
00:32:41.146 [2024-07-24 23:18:13.340495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.146 [2024-07-24 23:18:13.340814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.146 [2024-07-24 23:18:13.340854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.146 qpair failed and we were unable to recover it. 00:32:41.146 [2024-07-24 23:18:13.341243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.146 [2024-07-24 23:18:13.341566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.146 [2024-07-24 23:18:13.341606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.146 qpair failed and we were unable to recover it. 00:32:41.146 [2024-07-24 23:18:13.341993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.146 [2024-07-24 23:18:13.342250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.146 [2024-07-24 23:18:13.342291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.146 qpair failed and we were unable to recover it. 00:32:41.146 [2024-07-24 23:18:13.342679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.146 [2024-07-24 23:18:13.343077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.146 [2024-07-24 23:18:13.343118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.146 qpair failed and we were unable to recover it. 
00:32:41.146 [2024-07-24 23:18:13.343390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.146 [2024-07-24 23:18:13.343794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.146 [2024-07-24 23:18:13.343834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.146 qpair failed and we were unable to recover it. 00:32:41.146 [2024-07-24 23:18:13.344242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.146 [2024-07-24 23:18:13.344621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.146 [2024-07-24 23:18:13.344660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.146 qpair failed and we were unable to recover it. 00:32:41.146 [2024-07-24 23:18:13.345034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.146 [2024-07-24 23:18:13.345341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.146 [2024-07-24 23:18:13.345380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.146 qpair failed and we were unable to recover it. 00:32:41.146 [2024-07-24 23:18:13.345766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.146 [2024-07-24 23:18:13.346141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.146 [2024-07-24 23:18:13.346180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.146 qpair failed and we were unable to recover it. 
00:32:41.146 [2024-07-24 23:18:13.346592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.146 [2024-07-24 23:18:13.346945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.146 [2024-07-24 23:18:13.346986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.146 qpair failed and we were unable to recover it. 00:32:41.146 [2024-07-24 23:18:13.347368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.146 [2024-07-24 23:18:13.347672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.146 [2024-07-24 23:18:13.347713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.146 qpair failed and we were unable to recover it. 00:32:41.146 [2024-07-24 23:18:13.348056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.146 [2024-07-24 23:18:13.348309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.146 [2024-07-24 23:18:13.348348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.146 qpair failed and we were unable to recover it. 00:32:41.146 [2024-07-24 23:18:13.348673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.146 [2024-07-24 23:18:13.348972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.146 [2024-07-24 23:18:13.349012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.146 qpair failed and we were unable to recover it. 
00:32:41.146 [2024-07-24 23:18:13.349328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.146 [2024-07-24 23:18:13.349681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.146 [2024-07-24 23:18:13.349752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.146 qpair failed and we were unable to recover it. 00:32:41.146 [2024-07-24 23:18:13.350145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.146 [2024-07-24 23:18:13.350526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.146 [2024-07-24 23:18:13.350572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.146 qpair failed and we were unable to recover it. 00:32:41.146 [2024-07-24 23:18:13.350961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.146 [2024-07-24 23:18:13.351274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.146 [2024-07-24 23:18:13.351312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.146 qpair failed and we were unable to recover it. 00:32:41.146 [2024-07-24 23:18:13.351687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.146 [2024-07-24 23:18:13.352014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.146 [2024-07-24 23:18:13.352054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.146 qpair failed and we were unable to recover it. 
00:32:41.146 [2024-07-24 23:18:13.352348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.146 [2024-07-24 23:18:13.352713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.146 [2024-07-24 23:18:13.352764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.146 qpair failed and we were unable to recover it. 00:32:41.146 [2024-07-24 23:18:13.353023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.146 [2024-07-24 23:18:13.353281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.146 [2024-07-24 23:18:13.353320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.146 qpair failed and we were unable to recover it. 00:32:41.146 [2024-07-24 23:18:13.353683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.146 [2024-07-24 23:18:13.354073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.146 [2024-07-24 23:18:13.354112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.146 qpair failed and we were unable to recover it. 00:32:41.146 [2024-07-24 23:18:13.354318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.146 [2024-07-24 23:18:13.354620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.147 [2024-07-24 23:18:13.354659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.147 qpair failed and we were unable to recover it. 
00:32:41.147 [2024-07-24 23:18:13.355048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.147 [2024-07-24 23:18:13.355350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.147 [2024-07-24 23:18:13.355389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.147 qpair failed and we were unable to recover it. 00:32:41.147 [2024-07-24 23:18:13.355648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.147 [2024-07-24 23:18:13.356018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.147 [2024-07-24 23:18:13.356059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.147 qpair failed and we were unable to recover it. 00:32:41.147 [2024-07-24 23:18:13.356378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.147 [2024-07-24 23:18:13.356689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.147 [2024-07-24 23:18:13.356741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.147 qpair failed and we were unable to recover it. 00:32:41.147 [2024-07-24 23:18:13.357055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.147 [2024-07-24 23:18:13.357424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.147 [2024-07-24 23:18:13.357463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.147 qpair failed and we were unable to recover it. 
00:32:41.147 [2024-07-24 23:18:13.357741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.147 [2024-07-24 23:18:13.358074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.147 [2024-07-24 23:18:13.358113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.147 qpair failed and we were unable to recover it. 00:32:41.147 [2024-07-24 23:18:13.358348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.147 [2024-07-24 23:18:13.358582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.147 [2024-07-24 23:18:13.358621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.147 qpair failed and we were unable to recover it. 00:32:41.147 [2024-07-24 23:18:13.358851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.147 [2024-07-24 23:18:13.359091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.147 [2024-07-24 23:18:13.359130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.147 qpair failed and we were unable to recover it. 00:32:41.147 [2024-07-24 23:18:13.359445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.147 [2024-07-24 23:18:13.359757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.147 [2024-07-24 23:18:13.359797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.147 qpair failed and we were unable to recover it. 
00:32:41.147 [2024-07-24 23:18:13.360089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.147 [2024-07-24 23:18:13.360466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.147 [2024-07-24 23:18:13.360505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.147 qpair failed and we were unable to recover it. 00:32:41.147 [2024-07-24 23:18:13.360827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.147 [2024-07-24 23:18:13.361015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.147 [2024-07-24 23:18:13.361031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.147 qpair failed and we were unable to recover it. 00:32:41.147 [2024-07-24 23:18:13.361217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.147 [2024-07-24 23:18:13.361474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.147 [2024-07-24 23:18:13.361513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.147 qpair failed and we were unable to recover it. 00:32:41.147 [2024-07-24 23:18:13.361751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.147 [2024-07-24 23:18:13.361980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.147 [2024-07-24 23:18:13.361997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.147 qpair failed and we were unable to recover it. 
00:32:41.147 [2024-07-24 23:18:13.362340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.147 [2024-07-24 23:18:13.362746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.147 [2024-07-24 23:18:13.362787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.147 qpair failed and we were unable to recover it. 00:32:41.147 [2024-07-24 23:18:13.363176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.147 [2024-07-24 23:18:13.363477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.147 [2024-07-24 23:18:13.363516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.147 qpair failed and we were unable to recover it. 00:32:41.147 [2024-07-24 23:18:13.363781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.147 [2024-07-24 23:18:13.363979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.147 [2024-07-24 23:18:13.363996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.147 qpair failed and we were unable to recover it. 00:32:41.147 [2024-07-24 23:18:13.364326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.147 [2024-07-24 23:18:13.364635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.147 [2024-07-24 23:18:13.364675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.147 qpair failed and we were unable to recover it. 
00:32:41.147 [2024-07-24 23:18:13.365106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.147 [2024-07-24 23:18:13.366471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.147 [2024-07-24 23:18:13.366507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.147 qpair failed and we were unable to recover it.
[... the same three-record retry sequence (posix.c:1032:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for every retry from 23:18:13.366804 through 23:18:13.415133, differing only in timestamps ...]
00:32:41.149 [2024-07-24 23:18:13.415374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.149 [2024-07-24 23:18:13.415590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.149 [2024-07-24 23:18:13.415606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.149 qpair failed and we were unable to recover it. 00:32:41.149 [2024-07-24 23:18:13.415856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.149 [2024-07-24 23:18:13.416092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.149 [2024-07-24 23:18:13.416108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.149 qpair failed and we were unable to recover it. 00:32:41.149 [2024-07-24 23:18:13.416385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.149 [2024-07-24 23:18:13.416627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.149 [2024-07-24 23:18:13.416643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.149 qpair failed and we were unable to recover it. 00:32:41.149 [2024-07-24 23:18:13.416932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.149 [2024-07-24 23:18:13.417235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.149 [2024-07-24 23:18:13.417251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.149 qpair failed and we were unable to recover it. 
00:32:41.149 [2024-07-24 23:18:13.417516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.149 [2024-07-24 23:18:13.417848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.149 [2024-07-24 23:18:13.417864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.149 qpair failed and we were unable to recover it. 00:32:41.149 [2024-07-24 23:18:13.418164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.149 [2024-07-24 23:18:13.418360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.149 [2024-07-24 23:18:13.418376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.149 qpair failed and we were unable to recover it. 00:32:41.149 [2024-07-24 23:18:13.418633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.149 [2024-07-24 23:18:13.418816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.149 [2024-07-24 23:18:13.418833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.149 qpair failed and we were unable to recover it. 00:32:41.149 [2024-07-24 23:18:13.419166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.149 [2024-07-24 23:18:13.419504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.149 [2024-07-24 23:18:13.419521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.149 qpair failed and we were unable to recover it. 
00:32:41.149 [2024-07-24 23:18:13.419770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.149 [2024-07-24 23:18:13.420097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.149 [2024-07-24 23:18:13.420114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.149 qpair failed and we were unable to recover it. 00:32:41.149 [2024-07-24 23:18:13.420312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.149 [2024-07-24 23:18:13.420555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.149 [2024-07-24 23:18:13.420571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.149 qpair failed and we were unable to recover it. 00:32:41.149 [2024-07-24 23:18:13.420750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.149 [2024-07-24 23:18:13.421076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.149 [2024-07-24 23:18:13.421092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.149 qpair failed and we were unable to recover it. 00:32:41.149 [2024-07-24 23:18:13.421326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.149 [2024-07-24 23:18:13.421491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.149 [2024-07-24 23:18:13.421507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.149 qpair failed and we were unable to recover it. 
00:32:41.149 [2024-07-24 23:18:13.421766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.149 [2024-07-24 23:18:13.422012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.149 [2024-07-24 23:18:13.422028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.149 qpair failed and we were unable to recover it. 00:32:41.149 [2024-07-24 23:18:13.422331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.149 [2024-07-24 23:18:13.422558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.149 [2024-07-24 23:18:13.422574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.149 qpair failed and we were unable to recover it. 00:32:41.149 [2024-07-24 23:18:13.422830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.149 [2024-07-24 23:18:13.423074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.149 [2024-07-24 23:18:13.423090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.149 qpair failed and we were unable to recover it. 00:32:41.149 [2024-07-24 23:18:13.423325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.149 [2024-07-24 23:18:13.423570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.149 [2024-07-24 23:18:13.423586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.149 qpair failed and we were unable to recover it. 
00:32:41.149 [2024-07-24 23:18:13.423756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.149 [2024-07-24 23:18:13.424010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.149 [2024-07-24 23:18:13.424027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.149 qpair failed and we were unable to recover it. 00:32:41.149 [2024-07-24 23:18:13.424263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.149 [2024-07-24 23:18:13.424590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.149 [2024-07-24 23:18:13.424606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.149 qpair failed and we were unable to recover it. 00:32:41.149 [2024-07-24 23:18:13.424873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.149 [2024-07-24 23:18:13.425135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.149 [2024-07-24 23:18:13.425152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.149 qpair failed and we were unable to recover it. 00:32:41.149 [2024-07-24 23:18:13.425405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.149 [2024-07-24 23:18:13.425631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.149 [2024-07-24 23:18:13.425647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.149 qpair failed and we were unable to recover it. 
00:32:41.149 [2024-07-24 23:18:13.425951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.149 [2024-07-24 23:18:13.426126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.149 [2024-07-24 23:18:13.426142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.149 qpair failed and we were unable to recover it. 00:32:41.149 [2024-07-24 23:18:13.426393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.149 [2024-07-24 23:18:13.426739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.149 [2024-07-24 23:18:13.426755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.149 qpair failed and we were unable to recover it. 00:32:41.149 [2024-07-24 23:18:13.427066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.149 [2024-07-24 23:18:13.427389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.149 [2024-07-24 23:18:13.427405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.149 qpair failed and we were unable to recover it. 00:32:41.149 [2024-07-24 23:18:13.427644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.149 [2024-07-24 23:18:13.427880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.149 [2024-07-24 23:18:13.427897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.149 qpair failed and we were unable to recover it. 
00:32:41.149 [2024-07-24 23:18:13.428148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.149 [2024-07-24 23:18:13.428425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.149 [2024-07-24 23:18:13.428441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.149 qpair failed and we were unable to recover it. 00:32:41.149 [2024-07-24 23:18:13.428687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.149 [2024-07-24 23:18:13.428933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.149 [2024-07-24 23:18:13.428950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.149 qpair failed and we were unable to recover it. 00:32:41.149 [2024-07-24 23:18:13.429181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.149 [2024-07-24 23:18:13.429454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.149 [2024-07-24 23:18:13.429470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.149 qpair failed and we were unable to recover it. 00:32:41.149 [2024-07-24 23:18:13.429726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.149 [2024-07-24 23:18:13.430072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.149 [2024-07-24 23:18:13.430088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.149 qpair failed and we were unable to recover it. 
00:32:41.149 [2024-07-24 23:18:13.430286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.149 [2024-07-24 23:18:13.430471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.149 [2024-07-24 23:18:13.430486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.149 qpair failed and we were unable to recover it. 00:32:41.149 [2024-07-24 23:18:13.430724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.149 [2024-07-24 23:18:13.430958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.149 [2024-07-24 23:18:13.430974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.149 qpair failed and we were unable to recover it. 00:32:41.149 [2024-07-24 23:18:13.431156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.149 [2024-07-24 23:18:13.431342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.149 [2024-07-24 23:18:13.431358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.149 qpair failed and we were unable to recover it. 00:32:41.149 [2024-07-24 23:18:13.431692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.149 [2024-07-24 23:18:13.431943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.149 [2024-07-24 23:18:13.431959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.149 qpair failed and we were unable to recover it. 
00:32:41.149 [2024-07-24 23:18:13.432280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.149 [2024-07-24 23:18:13.432468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.149 [2024-07-24 23:18:13.432484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.149 qpair failed and we were unable to recover it. 00:32:41.149 [2024-07-24 23:18:13.432741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.149 [2024-07-24 23:18:13.433044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.149 [2024-07-24 23:18:13.433060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.149 qpair failed and we were unable to recover it. 00:32:41.149 [2024-07-24 23:18:13.433363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.149 [2024-07-24 23:18:13.433474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.149 [2024-07-24 23:18:13.433490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.149 qpair failed and we were unable to recover it. 00:32:41.149 [2024-07-24 23:18:13.433771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.149 [2024-07-24 23:18:13.433948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.149 [2024-07-24 23:18:13.433964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.149 qpair failed and we were unable to recover it. 
00:32:41.149 [2024-07-24 23:18:13.434148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.149 [2024-07-24 23:18:13.434394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.149 [2024-07-24 23:18:13.434410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.149 qpair failed and we were unable to recover it. 00:32:41.149 [2024-07-24 23:18:13.434742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.149 [2024-07-24 23:18:13.434998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.149 [2024-07-24 23:18:13.435015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.149 qpair failed and we were unable to recover it. 00:32:41.149 [2024-07-24 23:18:13.435251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.149 [2024-07-24 23:18:13.435505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.149 [2024-07-24 23:18:13.435521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.149 qpair failed and we were unable to recover it. 00:32:41.149 [2024-07-24 23:18:13.435832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.150 [2024-07-24 23:18:13.436063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.150 [2024-07-24 23:18:13.436080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.150 qpair failed and we were unable to recover it. 
00:32:41.150 [2024-07-24 23:18:13.436329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.150 [2024-07-24 23:18:13.436627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.150 [2024-07-24 23:18:13.436643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.150 qpair failed and we were unable to recover it. 00:32:41.150 [2024-07-24 23:18:13.436916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.150 [2024-07-24 23:18:13.437157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.150 [2024-07-24 23:18:13.437179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.150 qpair failed and we were unable to recover it. 00:32:41.150 [2024-07-24 23:18:13.437358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.150 [2024-07-24 23:18:13.437587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.150 [2024-07-24 23:18:13.437606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.150 qpair failed and we were unable to recover it. 00:32:41.150 [2024-07-24 23:18:13.437861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.150 [2024-07-24 23:18:13.438095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.150 [2024-07-24 23:18:13.438111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.150 qpair failed and we were unable to recover it. 
00:32:41.150 [2024-07-24 23:18:13.438365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.150 [2024-07-24 23:18:13.438538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.150 [2024-07-24 23:18:13.438554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.150 qpair failed and we were unable to recover it. 00:32:41.150 [2024-07-24 23:18:13.438788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.150 [2024-07-24 23:18:13.439130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.150 [2024-07-24 23:18:13.439147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.150 qpair failed and we were unable to recover it. 00:32:41.150 [2024-07-24 23:18:13.439326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.150 [2024-07-24 23:18:13.439584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.150 [2024-07-24 23:18:13.439600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.150 qpair failed and we were unable to recover it. 00:32:41.150 [2024-07-24 23:18:13.439721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.150 [2024-07-24 23:18:13.439992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.150 [2024-07-24 23:18:13.440008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.150 qpair failed and we were unable to recover it. 
00:32:41.150 [2024-07-24 23:18:13.440197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.150 [2024-07-24 23:18:13.440469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.150 [2024-07-24 23:18:13.440485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.150 qpair failed and we were unable to recover it. 00:32:41.150 [2024-07-24 23:18:13.440785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.150 [2024-07-24 23:18:13.441028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.150 [2024-07-24 23:18:13.441045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.150 qpair failed and we were unable to recover it. 00:32:41.150 [2024-07-24 23:18:13.441377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.150 [2024-07-24 23:18:13.441743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.150 [2024-07-24 23:18:13.441783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.150 qpair failed and we were unable to recover it. 00:32:41.150 [2024-07-24 23:18:13.442085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.150 [2024-07-24 23:18:13.442263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.150 [2024-07-24 23:18:13.442279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.150 qpair failed and we were unable to recover it. 
00:32:41.150 [2024-07-24 23:18:13.442455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.150 [2024-07-24 23:18:13.442642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.150 [2024-07-24 23:18:13.442661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.150 qpair failed and we were unable to recover it. 00:32:41.150 [2024-07-24 23:18:13.442910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.150 [2024-07-24 23:18:13.443157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.150 [2024-07-24 23:18:13.443173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.150 qpair failed and we were unable to recover it. 00:32:41.150 [2024-07-24 23:18:13.443424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.150 [2024-07-24 23:18:13.443691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.150 [2024-07-24 23:18:13.443707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.150 qpair failed and we were unable to recover it. 00:32:41.150 [2024-07-24 23:18:13.443895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.150 [2024-07-24 23:18:13.444129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.150 [2024-07-24 23:18:13.444146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.150 qpair failed and we were unable to recover it. 
00:32:41.150 [2024-07-24 23:18:13.444325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.150 [2024-07-24 23:18:13.444566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.150 [2024-07-24 23:18:13.444582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.150 qpair failed and we were unable to recover it.
00:32:41.150 [2024-07-24 23:18:13.444899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.150 [2024-07-24 23:18:13.445189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.150 [2024-07-24 23:18:13.445205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.150 qpair failed and we were unable to recover it.
00:32:41.150 [2024-07-24 23:18:13.445536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.150 [2024-07-24 23:18:13.445796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.150 [2024-07-24 23:18:13.445812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.150 qpair failed and we were unable to recover it.
00:32:41.150 [2024-07-24 23:18:13.446064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.150 [2024-07-24 23:18:13.446312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.150 [2024-07-24 23:18:13.446328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.150 qpair failed and we were unable to recover it.
00:32:41.150 [2024-07-24 23:18:13.446570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.150 [2024-07-24 23:18:13.446865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.150 [2024-07-24 23:18:13.446881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.150 qpair failed and we were unable to recover it.
00:32:41.150 [2024-07-24 23:18:13.447157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.150 [2024-07-24 23:18:13.447478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.150 [2024-07-24 23:18:13.447494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.150 qpair failed and we were unable to recover it.
00:32:41.150 [2024-07-24 23:18:13.447801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.150 [2024-07-24 23:18:13.448037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.150 [2024-07-24 23:18:13.448053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.150 qpair failed and we were unable to recover it.
00:32:41.150 [2024-07-24 23:18:13.448227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.150 [2024-07-24 23:18:13.448398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.150 [2024-07-24 23:18:13.448414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.150 qpair failed and we were unable to recover it.
00:32:41.150 [2024-07-24 23:18:13.448700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.150 [2024-07-24 23:18:13.448867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.150 [2024-07-24 23:18:13.448883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.150 qpair failed and we were unable to recover it.
00:32:41.150 [2024-07-24 23:18:13.449056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.150 [2024-07-24 23:18:13.449353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.150 [2024-07-24 23:18:13.449369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.150 qpair failed and we were unable to recover it.
00:32:41.150 [2024-07-24 23:18:13.449616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.150 [2024-07-24 23:18:13.449912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.150 [2024-07-24 23:18:13.449929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.150 qpair failed and we were unable to recover it.
00:32:41.150 [2024-07-24 23:18:13.450204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.150 [2024-07-24 23:18:13.450381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.150 [2024-07-24 23:18:13.450397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.150 qpair failed and we were unable to recover it.
00:32:41.150 [2024-07-24 23:18:13.450719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.150 [2024-07-24 23:18:13.451006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.150 [2024-07-24 23:18:13.451022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.150 qpair failed and we were unable to recover it.
00:32:41.150 [2024-07-24 23:18:13.451322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.150 [2024-07-24 23:18:13.451560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.150 [2024-07-24 23:18:13.451576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.150 qpair failed and we were unable to recover it.
00:32:41.150 [2024-07-24 23:18:13.451880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.150 [2024-07-24 23:18:13.452115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.150 [2024-07-24 23:18:13.452131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.150 qpair failed and we were unable to recover it.
00:32:41.150 [2024-07-24 23:18:13.452377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.150 [2024-07-24 23:18:13.452623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.150 [2024-07-24 23:18:13.452639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.150 qpair failed and we were unable to recover it.
00:32:41.150 [2024-07-24 23:18:13.452870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.150 [2024-07-24 23:18:13.453132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.150 [2024-07-24 23:18:13.453148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.150 qpair failed and we were unable to recover it.
00:32:41.150 [2024-07-24 23:18:13.453334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.150 [2024-07-24 23:18:13.453589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.150 [2024-07-24 23:18:13.453605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.150 qpair failed and we were unable to recover it.
00:32:41.150 [2024-07-24 23:18:13.453838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.150 [2024-07-24 23:18:13.454073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.150 [2024-07-24 23:18:13.454089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.150 qpair failed and we were unable to recover it.
00:32:41.150 [2024-07-24 23:18:13.454363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.150 [2024-07-24 23:18:13.454540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.150 [2024-07-24 23:18:13.454556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.150 qpair failed and we were unable to recover it.
00:32:41.150 [2024-07-24 23:18:13.454723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.150 [2024-07-24 23:18:13.455042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.150 [2024-07-24 23:18:13.455059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.150 qpair failed and we were unable to recover it.
00:32:41.150 [2024-07-24 23:18:13.455224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.150 [2024-07-24 23:18:13.455486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.150 [2024-07-24 23:18:13.455502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.150 qpair failed and we were unable to recover it.
00:32:41.150 [2024-07-24 23:18:13.455747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.150 [2024-07-24 23:18:13.456064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.150 [2024-07-24 23:18:13.456080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.150 qpair failed and we were unable to recover it.
00:32:41.150 [2024-07-24 23:18:13.456340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.150 [2024-07-24 23:18:13.456575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.150 [2024-07-24 23:18:13.456591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.150 qpair failed and we were unable to recover it.
00:32:41.150 [2024-07-24 23:18:13.456780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.150 [2024-07-24 23:18:13.457082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.150 [2024-07-24 23:18:13.457098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.150 qpair failed and we were unable to recover it.
00:32:41.150 [2024-07-24 23:18:13.457354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.150 [2024-07-24 23:18:13.457602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.150 [2024-07-24 23:18:13.457618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.150 qpair failed and we were unable to recover it.
00:32:41.150 [2024-07-24 23:18:13.457801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.150 [2024-07-24 23:18:13.457975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.150 [2024-07-24 23:18:13.457992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.150 qpair failed and we were unable to recover it.
00:32:41.150 [2024-07-24 23:18:13.458226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.150 [2024-07-24 23:18:13.458421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.150 [2024-07-24 23:18:13.458437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.150 qpair failed and we were unable to recover it.
00:32:41.150 [2024-07-24 23:18:13.458682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.150 [2024-07-24 23:18:13.458910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.150 [2024-07-24 23:18:13.458927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.150 qpair failed and we were unable to recover it.
00:32:41.150 [2024-07-24 23:18:13.459096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.150 [2024-07-24 23:18:13.459326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.150 [2024-07-24 23:18:13.459342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.150 qpair failed and we were unable to recover it.
00:32:41.150 [2024-07-24 23:18:13.459647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.150 [2024-07-24 23:18:13.459893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.150 [2024-07-24 23:18:13.459909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.150 qpair failed and we were unable to recover it.
00:32:41.150 [2024-07-24 23:18:13.460151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.150 [2024-07-24 23:18:13.460309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.150 [2024-07-24 23:18:13.460325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.150 qpair failed and we were unable to recover it.
00:32:41.150 [2024-07-24 23:18:13.460576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.150 [2024-07-24 23:18:13.460751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.150 [2024-07-24 23:18:13.460768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.150 qpair failed and we were unable to recover it.
00:32:41.151 [2024-07-24 23:18:13.461097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.151 [2024-07-24 23:18:13.461327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.151 [2024-07-24 23:18:13.461343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.151 qpair failed and we were unable to recover it.
00:32:41.151 [2024-07-24 23:18:13.461611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.151 [2024-07-24 23:18:13.461838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.151 [2024-07-24 23:18:13.461855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.151 qpair failed and we were unable to recover it.
00:32:41.151 [2024-07-24 23:18:13.462161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.151 [2024-07-24 23:18:13.462439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.151 [2024-07-24 23:18:13.462455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.151 qpair failed and we were unable to recover it.
00:32:41.151 [2024-07-24 23:18:13.462758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.151 [2024-07-24 23:18:13.463028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.151 [2024-07-24 23:18:13.463044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.151 qpair failed and we were unable to recover it.
00:32:41.151 [2024-07-24 23:18:13.463324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.151 [2024-07-24 23:18:13.463592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.151 [2024-07-24 23:18:13.463609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.151 qpair failed and we were unable to recover it.
00:32:41.151 [2024-07-24 23:18:13.463874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.151 [2024-07-24 23:18:13.464125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.151 [2024-07-24 23:18:13.464141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.151 qpair failed and we were unable to recover it.
00:32:41.151 [2024-07-24 23:18:13.464378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.151 [2024-07-24 23:18:13.464567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.151 [2024-07-24 23:18:13.464583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.151 qpair failed and we were unable to recover it.
00:32:41.151 [2024-07-24 23:18:13.464852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.151 [2024-07-24 23:18:13.465101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.151 [2024-07-24 23:18:13.465117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.151 qpair failed and we were unable to recover it.
00:32:41.151 [2024-07-24 23:18:13.465393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.151 [2024-07-24 23:18:13.465638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.151 [2024-07-24 23:18:13.465654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.151 qpair failed and we were unable to recover it.
00:32:41.151 [2024-07-24 23:18:13.465884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.151 [2024-07-24 23:18:13.466133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.151 [2024-07-24 23:18:13.466149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.151 qpair failed and we were unable to recover it.
00:32:41.151 [2024-07-24 23:18:13.466311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.151 [2024-07-24 23:18:13.466621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.151 [2024-07-24 23:18:13.466637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.151 qpair failed and we were unable to recover it.
00:32:41.151 [2024-07-24 23:18:13.466998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.151 [2024-07-24 23:18:13.467321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.151 [2024-07-24 23:18:13.467337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.151 qpair failed and we were unable to recover it.
00:32:41.151 [2024-07-24 23:18:13.467637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.151 [2024-07-24 23:18:13.467937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.151 [2024-07-24 23:18:13.467953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.151 qpair failed and we were unable to recover it.
00:32:41.151 [2024-07-24 23:18:13.468197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.151 [2024-07-24 23:18:13.468448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.151 [2024-07-24 23:18:13.468464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.151 qpair failed and we were unable to recover it.
00:32:41.151 [2024-07-24 23:18:13.468589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.151 [2024-07-24 23:18:13.468833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.151 [2024-07-24 23:18:13.468852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.151 qpair failed and we were unable to recover it.
00:32:41.151 [2024-07-24 23:18:13.469014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.151 [2024-07-24 23:18:13.469309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.151 [2024-07-24 23:18:13.469325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.151 qpair failed and we were unable to recover it.
00:32:41.151 [2024-07-24 23:18:13.469519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.151 [2024-07-24 23:18:13.469681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.151 [2024-07-24 23:18:13.469696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.151 qpair failed and we were unable to recover it.
00:32:41.151 [2024-07-24 23:18:13.469960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.151 [2024-07-24 23:18:13.470256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.151 [2024-07-24 23:18:13.470272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.151 qpair failed and we were unable to recover it.
00:32:41.151 [2024-07-24 23:18:13.470618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.151 [2024-07-24 23:18:13.470867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.151 [2024-07-24 23:18:13.470883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.151 qpair failed and we were unable to recover it.
00:32:41.151 [2024-07-24 23:18:13.471111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.151 [2024-07-24 23:18:13.471354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.151 [2024-07-24 23:18:13.471370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.151 qpair failed and we were unable to recover it.
00:32:41.151 [2024-07-24 23:18:13.471604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.151 [2024-07-24 23:18:13.471781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.151 [2024-07-24 23:18:13.471798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.151 qpair failed and we were unable to recover it.
00:32:41.151 [2024-07-24 23:18:13.472051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.151 [2024-07-24 23:18:13.472287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.151 [2024-07-24 23:18:13.472304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.151 qpair failed and we were unable to recover it.
00:32:41.151 [2024-07-24 23:18:13.472558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.151 [2024-07-24 23:18:13.472802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.151 [2024-07-24 23:18:13.472819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.151 qpair failed and we were unable to recover it.
00:32:41.151 [2024-07-24 23:18:13.473066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.151 [2024-07-24 23:18:13.473228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.151 [2024-07-24 23:18:13.473244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.151 qpair failed and we were unable to recover it.
00:32:41.151 [2024-07-24 23:18:13.473417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.151 [2024-07-24 23:18:13.473726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.151 [2024-07-24 23:18:13.473743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.151 qpair failed and we were unable to recover it.
00:32:41.151 [2024-07-24 23:18:13.473973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.151 [2024-07-24 23:18:13.474072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.151 [2024-07-24 23:18:13.474088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.151 qpair failed and we were unable to recover it.
00:32:41.151 [2024-07-24 23:18:13.474326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.151 [2024-07-24 23:18:13.474551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.151 [2024-07-24 23:18:13.474567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.151 qpair failed and we were unable to recover it.
00:32:41.151 [2024-07-24 23:18:13.474866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.151 [2024-07-24 23:18:13.475204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.151 [2024-07-24 23:18:13.475220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.151 qpair failed and we were unable to recover it.
00:32:41.151 [2024-07-24 23:18:13.475470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.151 [2024-07-24 23:18:13.475727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.151 [2024-07-24 23:18:13.475744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.151 qpair failed and we were unable to recover it.
00:32:41.151 [2024-07-24 23:18:13.476065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.151 [2024-07-24 23:18:13.476316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.151 [2024-07-24 23:18:13.476332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.151 qpair failed and we were unable to recover it.
00:32:41.151 [2024-07-24 23:18:13.476632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.151 [2024-07-24 23:18:13.476888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.151 [2024-07-24 23:18:13.476904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.151 qpair failed and we were unable to recover it.
00:32:41.151 [2024-07-24 23:18:13.477136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.151 [2024-07-24 23:18:13.477377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.151 [2024-07-24 23:18:13.477393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.151 qpair failed and we were unable to recover it.
00:32:41.151 [2024-07-24 23:18:13.477572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.151 [2024-07-24 23:18:13.477867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.151 [2024-07-24 23:18:13.477883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.151 qpair failed and we were unable to recover it.
00:32:41.151 [2024-07-24 23:18:13.478130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.151 [2024-07-24 23:18:13.478355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.151 [2024-07-24 23:18:13.478371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.151 qpair failed and we were unable to recover it.
00:32:41.151 [2024-07-24 23:18:13.478692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.151 [2024-07-24 23:18:13.479033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.151 [2024-07-24 23:18:13.479049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.151 qpair failed and we were unable to recover it.
00:32:41.151 [2024-07-24 23:18:13.479237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.151 [2024-07-24 23:18:13.479481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.151 [2024-07-24 23:18:13.479497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.151 qpair failed and we were unable to recover it.
00:32:41.151 [2024-07-24 23:18:13.479760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.151 [2024-07-24 23:18:13.480002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.151 [2024-07-24 23:18:13.480018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.151 qpair failed and we were unable to recover it.
00:32:41.151 [2024-07-24 23:18:13.480359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.151 [2024-07-24 23:18:13.480654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.151 [2024-07-24 23:18:13.480670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.151 qpair failed and we were unable to recover it.
00:32:41.151 [2024-07-24 23:18:13.480985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.151 [2024-07-24 23:18:13.481160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.151 [2024-07-24 23:18:13.481177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.151 qpair failed and we were unable to recover it.
00:32:41.151 [2024-07-24 23:18:13.481342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.151 [2024-07-24 23:18:13.481583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.151 [2024-07-24 23:18:13.481599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.151 qpair failed and we were unable to recover it.
00:32:41.151 [2024-07-24 23:18:13.481841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.151 [2024-07-24 23:18:13.482016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.151 [2024-07-24 23:18:13.482032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.151 qpair failed and we were unable to recover it.
00:32:41.151 [2024-07-24 23:18:13.482302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.151 [2024-07-24 23:18:13.482601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.151 [2024-07-24 23:18:13.482617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.151 qpair failed and we were unable to recover it.
00:32:41.151 [2024-07-24 23:18:13.482936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.151 [2024-07-24 23:18:13.483095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.151 [2024-07-24 23:18:13.483111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.151 qpair failed and we were unable to recover it.
00:32:41.151 [2024-07-24 23:18:13.483434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.151 [2024-07-24 23:18:13.483590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.151 [2024-07-24 23:18:13.483606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.151 qpair failed and we were unable to recover it.
00:32:41.151 [2024-07-24 23:18:13.483772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.151 [2024-07-24 23:18:13.484023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.151 [2024-07-24 23:18:13.484039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.151 qpair failed and we were unable to recover it.
00:32:41.151 [2024-07-24 23:18:13.484359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.151 [2024-07-24 23:18:13.484584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.151 [2024-07-24 23:18:13.484599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.151 qpair failed and we were unable to recover it.
00:32:41.151 [2024-07-24 23:18:13.484844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.151 [2024-07-24 23:18:13.485071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.151 [2024-07-24 23:18:13.485088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.151 qpair failed and we were unable to recover it.
00:32:41.151 [2024-07-24 23:18:13.485437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.151 [2024-07-24 23:18:13.485693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.151 [2024-07-24 23:18:13.485709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.151 qpair failed and we were unable to recover it.
00:32:41.151 [2024-07-24 23:18:13.485989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.151 [2024-07-24 23:18:13.486146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.151 [2024-07-24 23:18:13.486162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.152 qpair failed and we were unable to recover it.
00:32:41.152 [2024-07-24 23:18:13.486342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.152 [2024-07-24 23:18:13.486633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.152 [2024-07-24 23:18:13.486649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.152 qpair failed and we were unable to recover it.
00:32:41.152 [2024-07-24 23:18:13.486896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.152 [2024-07-24 23:18:13.487212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.152 [2024-07-24 23:18:13.487228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.152 qpair failed and we were unable to recover it.
00:32:41.152 [2024-07-24 23:18:13.487555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.152 [2024-07-24 23:18:13.487821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.152 [2024-07-24 23:18:13.487837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.152 qpair failed and we were unable to recover it.
00:32:41.152 [2024-07-24 23:18:13.488141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.152 [2024-07-24 23:18:13.488317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.152 [2024-07-24 23:18:13.488333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.152 qpair failed and we were unable to recover it.
00:32:41.152 [2024-07-24 23:18:13.488585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.152 [2024-07-24 23:18:13.488879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.152 [2024-07-24 23:18:13.488895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.152 qpair failed and we were unable to recover it.
00:32:41.152 [2024-07-24 23:18:13.489141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.152 [2024-07-24 23:18:13.489382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.152 [2024-07-24 23:18:13.489397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.152 qpair failed and we were unable to recover it.
00:32:41.152 [2024-07-24 23:18:13.489595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.152 [2024-07-24 23:18:13.489881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.152 [2024-07-24 23:18:13.489922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.152 qpair failed and we were unable to recover it. 00:32:41.152 [2024-07-24 23:18:13.490301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.152 [2024-07-24 23:18:13.490724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.152 [2024-07-24 23:18:13.490765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.152 qpair failed and we were unable to recover it. 00:32:41.152 [2024-07-24 23:18:13.490991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.152 [2024-07-24 23:18:13.491207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.152 [2024-07-24 23:18:13.491248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.152 qpair failed and we were unable to recover it. 00:32:41.152 [2024-07-24 23:18:13.491525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.152 [2024-07-24 23:18:13.491711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.152 [2024-07-24 23:18:13.491762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.152 qpair failed and we were unable to recover it. 
00:32:41.152 [2024-07-24 23:18:13.492097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.152 [2024-07-24 23:18:13.492407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.152 [2024-07-24 23:18:13.492446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.152 qpair failed and we were unable to recover it. 00:32:41.152 [2024-07-24 23:18:13.492747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.152 [2024-07-24 23:18:13.493117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.152 [2024-07-24 23:18:13.493157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.152 qpair failed and we were unable to recover it. 00:32:41.152 [2024-07-24 23:18:13.493474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.152 [2024-07-24 23:18:13.493683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.152 [2024-07-24 23:18:13.493735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.152 qpair failed and we were unable to recover it. 00:32:41.152 [2024-07-24 23:18:13.494045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.152 [2024-07-24 23:18:13.494217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.152 [2024-07-24 23:18:13.494233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.152 qpair failed and we were unable to recover it. 
00:32:41.152 [2024-07-24 23:18:13.494564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.152 [2024-07-24 23:18:13.494890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.152 [2024-07-24 23:18:13.494930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.152 qpair failed and we were unable to recover it. 00:32:41.152 [2024-07-24 23:18:13.495263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.152 [2024-07-24 23:18:13.495478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.152 [2024-07-24 23:18:13.495518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.152 qpair failed and we were unable to recover it. 00:32:41.152 [2024-07-24 23:18:13.495904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.152 [2024-07-24 23:18:13.496174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.152 [2024-07-24 23:18:13.496219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.152 qpair failed and we were unable to recover it. 00:32:41.152 [2024-07-24 23:18:13.496593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.152 [2024-07-24 23:18:13.496938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.152 [2024-07-24 23:18:13.496978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.152 qpair failed and we were unable to recover it. 
00:32:41.152 [2024-07-24 23:18:13.497299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.152 [2024-07-24 23:18:13.497597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.152 [2024-07-24 23:18:13.497635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.152 qpair failed and we were unable to recover it. 00:32:41.152 [2024-07-24 23:18:13.497949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.152 [2024-07-24 23:18:13.498237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.152 [2024-07-24 23:18:13.498253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.152 qpair failed and we were unable to recover it. 00:32:41.152 [2024-07-24 23:18:13.498573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.152 [2024-07-24 23:18:13.498859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.152 [2024-07-24 23:18:13.498898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.152 qpair failed and we were unable to recover it. 00:32:41.152 [2024-07-24 23:18:13.499129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.152 [2024-07-24 23:18:13.499394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.152 [2024-07-24 23:18:13.499424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.152 qpair failed and we were unable to recover it. 
00:32:41.152 [2024-07-24 23:18:13.499729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.152 [2024-07-24 23:18:13.499932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.152 [2024-07-24 23:18:13.499957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.152 qpair failed and we were unable to recover it. 00:32:41.152 [2024-07-24 23:18:13.500238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.152 [2024-07-24 23:18:13.500442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.152 [2024-07-24 23:18:13.500461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.152 qpair failed and we were unable to recover it. 00:32:41.152 [2024-07-24 23:18:13.500650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.152 [2024-07-24 23:18:13.500824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.152 [2024-07-24 23:18:13.500841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.152 qpair failed and we were unable to recover it. 00:32:41.152 [2024-07-24 23:18:13.501095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.152 [2024-07-24 23:18:13.501332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.152 [2024-07-24 23:18:13.501348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.152 qpair failed and we were unable to recover it. 
00:32:41.152 [2024-07-24 23:18:13.501633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.152 [2024-07-24 23:18:13.501876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.152 [2024-07-24 23:18:13.501893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.152 qpair failed and we were unable to recover it. 00:32:41.152 [2024-07-24 23:18:13.502195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.152 [2024-07-24 23:18:13.502510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.152 [2024-07-24 23:18:13.502527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.152 qpair failed and we were unable to recover it. 00:32:41.152 [2024-07-24 23:18:13.502770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.152 [2024-07-24 23:18:13.503017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.152 [2024-07-24 23:18:13.503034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.152 qpair failed and we were unable to recover it. 00:32:41.152 [2024-07-24 23:18:13.503299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.152 [2024-07-24 23:18:13.503507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.152 [2024-07-24 23:18:13.503523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.152 qpair failed and we were unable to recover it. 
00:32:41.152 [2024-07-24 23:18:13.503753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.152 [2024-07-24 23:18:13.503932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.152 [2024-07-24 23:18:13.503949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.152 qpair failed and we were unable to recover it. 00:32:41.152 [2024-07-24 23:18:13.504197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.152 [2024-07-24 23:18:13.504422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.152 [2024-07-24 23:18:13.504438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.152 qpair failed and we were unable to recover it. 00:32:41.152 [2024-07-24 23:18:13.504683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.152 [2024-07-24 23:18:13.504944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.152 [2024-07-24 23:18:13.504961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.152 qpair failed and we were unable to recover it. 00:32:41.152 [2024-07-24 23:18:13.505245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.152 [2024-07-24 23:18:13.505548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.152 [2024-07-24 23:18:13.505564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.152 qpair failed and we were unable to recover it. 
00:32:41.152 [2024-07-24 23:18:13.505741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.152 [2024-07-24 23:18:13.505980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.152 [2024-07-24 23:18:13.505996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.152 qpair failed and we were unable to recover it. 00:32:41.152 [2024-07-24 23:18:13.506248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.152 [2024-07-24 23:18:13.506505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.152 [2024-07-24 23:18:13.506520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.152 qpair failed and we were unable to recover it. 00:32:41.152 [2024-07-24 23:18:13.506760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.152 [2024-07-24 23:18:13.507007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.152 [2024-07-24 23:18:13.507023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.152 qpair failed and we were unable to recover it. 00:32:41.152 [2024-07-24 23:18:13.507198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.152 [2024-07-24 23:18:13.507442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.152 [2024-07-24 23:18:13.507458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.152 qpair failed and we were unable to recover it. 
00:32:41.152 [2024-07-24 23:18:13.507689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.152 [2024-07-24 23:18:13.507883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.152 [2024-07-24 23:18:13.507899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.152 qpair failed and we were unable to recover it. 00:32:41.152 [2024-07-24 23:18:13.508155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.152 [2024-07-24 23:18:13.508400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.152 [2024-07-24 23:18:13.508416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.152 qpair failed and we were unable to recover it. 00:32:41.152 [2024-07-24 23:18:13.508679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.152 [2024-07-24 23:18:13.509037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.152 [2024-07-24 23:18:13.509054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.152 qpair failed and we were unable to recover it. 00:32:41.152 [2024-07-24 23:18:13.509259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.152 [2024-07-24 23:18:13.509554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.152 [2024-07-24 23:18:13.509570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.152 qpair failed and we were unable to recover it. 
00:32:41.152 [2024-07-24 23:18:13.509824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.152 [2024-07-24 23:18:13.510042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.152 [2024-07-24 23:18:13.510058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.152 qpair failed and we were unable to recover it. 00:32:41.152 [2024-07-24 23:18:13.510227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.152 [2024-07-24 23:18:13.510461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.152 [2024-07-24 23:18:13.510477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.152 qpair failed and we were unable to recover it. 00:32:41.152 [2024-07-24 23:18:13.510644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.152 [2024-07-24 23:18:13.510945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.153 [2024-07-24 23:18:13.510962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.153 qpair failed and we were unable to recover it. 00:32:41.153 [2024-07-24 23:18:13.511215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.153 [2024-07-24 23:18:13.511444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.153 [2024-07-24 23:18:13.511460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.153 qpair failed and we were unable to recover it. 
00:32:41.153 [2024-07-24 23:18:13.511712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.153 [2024-07-24 23:18:13.511890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.153 [2024-07-24 23:18:13.511906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.153 qpair failed and we were unable to recover it. 00:32:41.153 [2024-07-24 23:18:13.512140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.153 [2024-07-24 23:18:13.512296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.153 [2024-07-24 23:18:13.512312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.153 qpair failed and we were unable to recover it. 00:32:41.153 [2024-07-24 23:18:13.512569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.153 [2024-07-24 23:18:13.512797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.153 [2024-07-24 23:18:13.512814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.153 qpair failed and we were unable to recover it. 00:32:41.153 [2024-07-24 23:18:13.513047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.153 [2024-07-24 23:18:13.513272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.153 [2024-07-24 23:18:13.513288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.153 qpair failed and we were unable to recover it. 
00:32:41.153 [2024-07-24 23:18:13.513537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.153 [2024-07-24 23:18:13.513782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.153 [2024-07-24 23:18:13.513799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.153 qpair failed and we were unable to recover it. 00:32:41.153 [2024-07-24 23:18:13.513999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.153 [2024-07-24 23:18:13.514176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.153 [2024-07-24 23:18:13.514192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.153 qpair failed and we were unable to recover it. 00:32:41.153 [2024-07-24 23:18:13.514385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.153 [2024-07-24 23:18:13.514626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.153 [2024-07-24 23:18:13.514641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.153 qpair failed and we were unable to recover it. 00:32:41.153 [2024-07-24 23:18:13.514839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.153 [2024-07-24 23:18:13.515001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.153 [2024-07-24 23:18:13.515017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.153 qpair failed and we were unable to recover it. 
00:32:41.153 [2024-07-24 23:18:13.515339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.153 [2024-07-24 23:18:13.515517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.153 [2024-07-24 23:18:13.515533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.153 qpair failed and we were unable to recover it. 00:32:41.153 [2024-07-24 23:18:13.515778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.153 [2024-07-24 23:18:13.515953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.153 [2024-07-24 23:18:13.515968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.153 qpair failed and we were unable to recover it. 00:32:41.153 [2024-07-24 23:18:13.516240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.153 [2024-07-24 23:18:13.516410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.153 [2024-07-24 23:18:13.516425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.153 qpair failed and we were unable to recover it. 00:32:41.153 [2024-07-24 23:18:13.516669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.153 [2024-07-24 23:18:13.516868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.153 [2024-07-24 23:18:13.516885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.153 qpair failed and we were unable to recover it. 
00:32:41.153 [2024-07-24 23:18:13.517132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.153 [2024-07-24 23:18:13.517373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.153 [2024-07-24 23:18:13.517388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.153 qpair failed and we were unable to recover it. 00:32:41.153 [2024-07-24 23:18:13.517634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.153 [2024-07-24 23:18:13.517861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.153 [2024-07-24 23:18:13.517877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.153 qpair failed and we were unable to recover it. 00:32:41.153 [2024-07-24 23:18:13.518226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.153 [2024-07-24 23:18:13.518401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.153 [2024-07-24 23:18:13.518417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.153 qpair failed and we were unable to recover it. 00:32:41.153 [2024-07-24 23:18:13.518658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.153 [2024-07-24 23:18:13.518763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.153 [2024-07-24 23:18:13.518780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.153 qpair failed and we were unable to recover it. 
00:32:41.153 [2024-07-24 23:18:13.519093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.153 [2024-07-24 23:18:13.519346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.153 [2024-07-24 23:18:13.519362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.153 qpair failed and we were unable to recover it.
00:32:41.153 [2024-07-24 23:18:13.519684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.153 [2024-07-24 23:18:13.519942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.153 [2024-07-24 23:18:13.519958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.153 qpair failed and we were unable to recover it.
00:32:41.153 [2024-07-24 23:18:13.520216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.153 [2024-07-24 23:18:13.520463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.153 [2024-07-24 23:18:13.520479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.153 qpair failed and we were unable to recover it.
00:32:41.153 [2024-07-24 23:18:13.520656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.153 [2024-07-24 23:18:13.520879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.153 [2024-07-24 23:18:13.520895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.153 qpair failed and we were unable to recover it.
00:32:41.153 [2024-07-24 23:18:13.521060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.153 [2024-07-24 23:18:13.521287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.153 [2024-07-24 23:18:13.521302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.153 qpair failed and we were unable to recover it.
00:32:41.153 [2024-07-24 23:18:13.521575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.153 [2024-07-24 23:18:13.521865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.153 [2024-07-24 23:18:13.521884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.153 qpair failed and we were unable to recover it.
00:32:41.153 [2024-07-24 23:18:13.522128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.153 [2024-07-24 23:18:13.522380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.153 [2024-07-24 23:18:13.522396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.153 qpair failed and we were unable to recover it.
00:32:41.153 [2024-07-24 23:18:13.522633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.153 [2024-07-24 23:18:13.522874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.153 [2024-07-24 23:18:13.522891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.153 qpair failed and we were unable to recover it.
00:32:41.153 [2024-07-24 23:18:13.523215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.153 [2024-07-24 23:18:13.523533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.153 [2024-07-24 23:18:13.523549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.153 qpair failed and we were unable to recover it.
00:32:41.153 [2024-07-24 23:18:13.523798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.153 [2024-07-24 23:18:13.523954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.153 [2024-07-24 23:18:13.523971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.153 qpair failed and we were unable to recover it.
00:32:41.153 [2024-07-24 23:18:13.524215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.153 [2024-07-24 23:18:13.524528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.153 [2024-07-24 23:18:13.524544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.153 qpair failed and we were unable to recover it.
00:32:41.153 [2024-07-24 23:18:13.524844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.153 [2024-07-24 23:18:13.525088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.153 [2024-07-24 23:18:13.525104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.153 qpair failed and we were unable to recover it.
00:32:41.153 [2024-07-24 23:18:13.525347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.153 [2024-07-24 23:18:13.525665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.153 [2024-07-24 23:18:13.525681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.153 qpair failed and we were unable to recover it.
00:32:41.153 [2024-07-24 23:18:13.525856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.153 [2024-07-24 23:18:13.526102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.153 [2024-07-24 23:18:13.526117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.153 qpair failed and we were unable to recover it.
00:32:41.153 [2024-07-24 23:18:13.526370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.153 [2024-07-24 23:18:13.526622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.153 [2024-07-24 23:18:13.526638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.153 qpair failed and we were unable to recover it.
00:32:41.153 [2024-07-24 23:18:13.526818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.153 [2024-07-24 23:18:13.526997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.153 [2024-07-24 23:18:13.527018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.153 qpair failed and we were unable to recover it.
00:32:41.153 [2024-07-24 23:18:13.527339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.153 [2024-07-24 23:18:13.527633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.153 [2024-07-24 23:18:13.527648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.153 qpair failed and we were unable to recover it.
00:32:41.153 [2024-07-24 23:18:13.527901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.153 [2024-07-24 23:18:13.528195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.153 [2024-07-24 23:18:13.528211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.153 qpair failed and we were unable to recover it.
00:32:41.153 [2024-07-24 23:18:13.528464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.153 [2024-07-24 23:18:13.528575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.153 [2024-07-24 23:18:13.528591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.153 qpair failed and we were unable to recover it.
00:32:41.153 [2024-07-24 23:18:13.528778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.153 [2024-07-24 23:18:13.528943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.153 [2024-07-24 23:18:13.528959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.153 qpair failed and we were unable to recover it.
00:32:41.153 [2024-07-24 23:18:13.529238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.153 [2024-07-24 23:18:13.529506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.153 [2024-07-24 23:18:13.529522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.153 qpair failed and we were unable to recover it.
00:32:41.153 [2024-07-24 23:18:13.529770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.153 [2024-07-24 23:18:13.530121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.153 [2024-07-24 23:18:13.530137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.153 qpair failed and we were unable to recover it.
00:32:41.153 [2024-07-24 23:18:13.530435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.153 [2024-07-24 23:18:13.530752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.153 [2024-07-24 23:18:13.530768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.153 qpair failed and we were unable to recover it.
00:32:41.153 [2024-07-24 23:18:13.531025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.153 [2024-07-24 23:18:13.531270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.153 [2024-07-24 23:18:13.531286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.153 qpair failed and we were unable to recover it.
00:32:41.153 [2024-07-24 23:18:13.531538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.153 [2024-07-24 23:18:13.531838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.153 [2024-07-24 23:18:13.531854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.153 qpair failed and we were unable to recover it.
00:32:41.153 [2024-07-24 23:18:13.532096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.153 [2024-07-24 23:18:13.532273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.153 [2024-07-24 23:18:13.532289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.153 qpair failed and we were unable to recover it.
00:32:41.153 [2024-07-24 23:18:13.532536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.153 [2024-07-24 23:18:13.532693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.153 [2024-07-24 23:18:13.532709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.153 qpair failed and we were unable to recover it.
00:32:41.153 [2024-07-24 23:18:13.532967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.153 [2024-07-24 23:18:13.533205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.153 [2024-07-24 23:18:13.533221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.153 qpair failed and we were unable to recover it.
00:32:41.153 [2024-07-24 23:18:13.533529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.153 [2024-07-24 23:18:13.533713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.153 [2024-07-24 23:18:13.533743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.153 qpair failed and we were unable to recover it.
00:32:41.153 [2024-07-24 23:18:13.534068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.153 [2024-07-24 23:18:13.534310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.154 [2024-07-24 23:18:13.534326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.154 qpair failed and we were unable to recover it.
00:32:41.154 [2024-07-24 23:18:13.534640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.154 [2024-07-24 23:18:13.534868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.154 [2024-07-24 23:18:13.534885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.154 qpair failed and we were unable to recover it.
00:32:41.154 [2024-07-24 23:18:13.535203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.154 [2024-07-24 23:18:13.535461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.154 [2024-07-24 23:18:13.535477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.154 qpair failed and we were unable to recover it.
00:32:41.154 [2024-07-24 23:18:13.535727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.154 [2024-07-24 23:18:13.535971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.154 [2024-07-24 23:18:13.535987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.154 qpair failed and we were unable to recover it.
00:32:41.154 [2024-07-24 23:18:13.536309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.154 [2024-07-24 23:18:13.536467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.154 [2024-07-24 23:18:13.536483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.154 qpair failed and we were unable to recover it.
00:32:41.154 [2024-07-24 23:18:13.536781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.154 [2024-07-24 23:18:13.537065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.154 [2024-07-24 23:18:13.537081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.154 qpair failed and we were unable to recover it.
00:32:41.154 [2024-07-24 23:18:13.537328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.154 [2024-07-24 23:18:13.537572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.154 [2024-07-24 23:18:13.537588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.154 qpair failed and we were unable to recover it.
00:32:41.154 [2024-07-24 23:18:13.537910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.154 [2024-07-24 23:18:13.538205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.154 [2024-07-24 23:18:13.538222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.154 qpair failed and we were unable to recover it.
00:32:41.154 [2024-07-24 23:18:13.538461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.154 [2024-07-24 23:18:13.538686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.154 [2024-07-24 23:18:13.538702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.154 qpair failed and we were unable to recover it.
00:32:41.154 [2024-07-24 23:18:13.538985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.154 [2024-07-24 23:18:13.539210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.154 [2024-07-24 23:18:13.539226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.154 qpair failed and we were unable to recover it.
00:32:41.154 [2024-07-24 23:18:13.539475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.154 [2024-07-24 23:18:13.539712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.154 [2024-07-24 23:18:13.539734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.154 qpair failed and we were unable to recover it.
00:32:41.154 [2024-07-24 23:18:13.540056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.154 [2024-07-24 23:18:13.540291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.154 [2024-07-24 23:18:13.540307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.154 qpair failed and we were unable to recover it.
00:32:41.154 [2024-07-24 23:18:13.540611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.154 [2024-07-24 23:18:13.540855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.154 [2024-07-24 23:18:13.540872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.154 qpair failed and we were unable to recover it.
00:32:41.154 [2024-07-24 23:18:13.541137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.154 [2024-07-24 23:18:13.541316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.154 [2024-07-24 23:18:13.541332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.154 qpair failed and we were unable to recover it.
00:32:41.154 [2024-07-24 23:18:13.541652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.154 [2024-07-24 23:18:13.541840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.154 [2024-07-24 23:18:13.541857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.154 qpair failed and we were unable to recover it.
00:32:41.154 [2024-07-24 23:18:13.542105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.154 [2024-07-24 23:18:13.542333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.154 [2024-07-24 23:18:13.542349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.154 qpair failed and we were unable to recover it.
00:32:41.154 [2024-07-24 23:18:13.542646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.154 [2024-07-24 23:18:13.542984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.154 [2024-07-24 23:18:13.543001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.154 qpair failed and we were unable to recover it.
00:32:41.154 [2024-07-24 23:18:13.543233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.154 [2024-07-24 23:18:13.543463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.154 [2024-07-24 23:18:13.543480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.154 qpair failed and we were unable to recover it.
00:32:41.154 [2024-07-24 23:18:13.543725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.154 [2024-07-24 23:18:13.543907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.154 [2024-07-24 23:18:13.543923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.154 qpair failed and we were unable to recover it.
00:32:41.154 [2024-07-24 23:18:13.544224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.154 [2024-07-24 23:18:13.544473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.154 [2024-07-24 23:18:13.544490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.154 qpair failed and we were unable to recover it.
00:32:41.154 [2024-07-24 23:18:13.544739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.154 [2024-07-24 23:18:13.544991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.154 [2024-07-24 23:18:13.545007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.154 qpair failed and we were unable to recover it.
00:32:41.154 [2024-07-24 23:18:13.545238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.154 [2024-07-24 23:18:13.545484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.154 [2024-07-24 23:18:13.545500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.154 qpair failed and we were unable to recover it.
00:32:41.154 [2024-07-24 23:18:13.545771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.154 [2024-07-24 23:18:13.545998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.154 [2024-07-24 23:18:13.546014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.154 qpair failed and we were unable to recover it.
00:32:41.154 [2024-07-24 23:18:13.546261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.154 [2024-07-24 23:18:13.546504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.154 [2024-07-24 23:18:13.546520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.154 qpair failed and we were unable to recover it.
00:32:41.154 [2024-07-24 23:18:13.546777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.154 [2024-07-24 23:18:13.546938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.154 [2024-07-24 23:18:13.546954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.154 qpair failed and we were unable to recover it.
00:32:41.154 [2024-07-24 23:18:13.547223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.154 [2024-07-24 23:18:13.547453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.154 [2024-07-24 23:18:13.547469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.154 qpair failed and we were unable to recover it.
00:32:41.154 [2024-07-24 23:18:13.547789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.154 [2024-07-24 23:18:13.547985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.154 [2024-07-24 23:18:13.548002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.154 qpair failed and we were unable to recover it.
00:32:41.154 [2024-07-24 23:18:13.548249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.154 [2024-07-24 23:18:13.548415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.154 [2024-07-24 23:18:13.548431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.154 qpair failed and we were unable to recover it.
00:32:41.154 [2024-07-24 23:18:13.548749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.154 [2024-07-24 23:18:13.548973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.154 [2024-07-24 23:18:13.548989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.154 qpair failed and we were unable to recover it.
00:32:41.154 [2024-07-24 23:18:13.549169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.154 [2024-07-24 23:18:13.549347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.154 [2024-07-24 23:18:13.549363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.154 qpair failed and we were unable to recover it.
00:32:41.154 [2024-07-24 23:18:13.549616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.154 [2024-07-24 23:18:13.549923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.154 [2024-07-24 23:18:13.549939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.154 qpair failed and we were unable to recover it.
00:32:41.154 [2024-07-24 23:18:13.550185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.154 [2024-07-24 23:18:13.550480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.154 [2024-07-24 23:18:13.550496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.154 qpair failed and we were unable to recover it.
00:32:41.154 [2024-07-24 23:18:13.550839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.154 [2024-07-24 23:18:13.551010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.154 [2024-07-24 23:18:13.551026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.154 qpair failed and we were unable to recover it.
00:32:41.154 [2024-07-24 23:18:13.551208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.154 [2024-07-24 23:18:13.551504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.154 [2024-07-24 23:18:13.551520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.154 qpair failed and we were unable to recover it.
00:32:41.154 [2024-07-24 23:18:13.551771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.154 [2024-07-24 23:18:13.551948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.154 [2024-07-24 23:18:13.551964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.154 qpair failed and we were unable to recover it.
00:32:41.154 [2024-07-24 23:18:13.552121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.154 [2024-07-24 23:18:13.552436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.154 [2024-07-24 23:18:13.552452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.154 qpair failed and we were unable to recover it.
00:32:41.154 [2024-07-24 23:18:13.552702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.154 [2024-07-24 23:18:13.553002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.154 [2024-07-24 23:18:13.553018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.154 qpair failed and we were unable to recover it.
00:32:41.154 [2024-07-24 23:18:13.553296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.154 [2024-07-24 23:18:13.553611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.154 [2024-07-24 23:18:13.553630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.154 qpair failed and we were unable to recover it.
00:32:41.154 [2024-07-24 23:18:13.553824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.154 [2024-07-24 23:18:13.554156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.154 [2024-07-24 23:18:13.554171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.154 qpair failed and we were unable to recover it.
00:32:41.154 [2024-07-24 23:18:13.554451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.154 [2024-07-24 23:18:13.554698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.154 [2024-07-24 23:18:13.554727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.154 qpair failed and we were unable to recover it.
00:32:41.154 [2024-07-24 23:18:13.554989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.154 [2024-07-24 23:18:13.555219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.154 [2024-07-24 23:18:13.555235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.154 qpair failed and we were unable to recover it.
00:32:41.154 [2024-07-24 23:18:13.555485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.154 [2024-07-24 23:18:13.555796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.154 [2024-07-24 23:18:13.555812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.154 qpair failed and we were unable to recover it.
00:32:41.154 [2024-07-24 23:18:13.556066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.154 [2024-07-24 23:18:13.556327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.154 [2024-07-24 23:18:13.556342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.154 qpair failed and we were unable to recover it.
00:32:41.154 [2024-07-24 23:18:13.556584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.154 [2024-07-24 23:18:13.556898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.154 [2024-07-24 23:18:13.556914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.154 qpair failed and we were unable to recover it.
00:32:41.154 [2024-07-24 23:18:13.557233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.154 [2024-07-24 23:18:13.557555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.154 [2024-07-24 23:18:13.557571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.154 qpair failed and we were unable to recover it.
00:32:41.154 [2024-07-24 23:18:13.557826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.154 [2024-07-24 23:18:13.558145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.154 [2024-07-24 23:18:13.558161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.154 qpair failed and we were unable to recover it.
00:32:41.154 [2024-07-24 23:18:13.558460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.154 [2024-07-24 23:18:13.558702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.154 [2024-07-24 23:18:13.558723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.154 qpair failed and we were unable to recover it.
00:32:41.154 [2024-07-24 23:18:13.558919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.154 [2024-07-24 23:18:13.559177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.154 [2024-07-24 23:18:13.559193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.154 qpair failed and we were unable to recover it.
00:32:41.154 [2024-07-24 23:18:13.559460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.154 [2024-07-24 23:18:13.559802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.154 [2024-07-24 23:18:13.559818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.154 qpair failed and we were unable to recover it.
00:32:41.154 [2024-07-24 23:18:13.560070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.154 [2024-07-24 23:18:13.560231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.155 [2024-07-24 23:18:13.560247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.155 qpair failed and we were unable to recover it.
00:32:41.155 [2024-07-24 23:18:13.560569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.155 [2024-07-24 23:18:13.560765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.155 [2024-07-24 23:18:13.560782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.155 qpair failed and we were unable to recover it.
00:32:41.155 [2024-07-24 23:18:13.561096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.155 [2024-07-24 23:18:13.561324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.155 [2024-07-24 23:18:13.561340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.155 qpair failed and we were unable to recover it.
00:32:41.155 [2024-07-24 23:18:13.561568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.155 [2024-07-24 23:18:13.561879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.155 [2024-07-24 23:18:13.561895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.155 qpair failed and we were unable to recover it.
00:32:41.155 [2024-07-24 23:18:13.562140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.155 [2024-07-24 23:18:13.562334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.155 [2024-07-24 23:18:13.562350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.155 qpair failed and we were unable to recover it.
00:32:41.155 [2024-07-24 23:18:13.562603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.155 [2024-07-24 23:18:13.562827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.155 [2024-07-24 23:18:13.562843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.155 qpair failed and we were unable to recover it.
00:32:41.155 [2024-07-24 23:18:13.563079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.155 [2024-07-24 23:18:13.563330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.155 [2024-07-24 23:18:13.563346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.155 qpair failed and we were unable to recover it.
00:32:41.155 [2024-07-24 23:18:13.563594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.155 [2024-07-24 23:18:13.563850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.155 [2024-07-24 23:18:13.563867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.155 qpair failed and we were unable to recover it.
00:32:41.155 [2024-07-24 23:18:13.564134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.155 [2024-07-24 23:18:13.564464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.155 [2024-07-24 23:18:13.564480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.155 qpair failed and we were unable to recover it.
00:32:41.155 [2024-07-24 23:18:13.564719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.155 [2024-07-24 23:18:13.565062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.155 [2024-07-24 23:18:13.565078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.155 qpair failed and we were unable to recover it.
00:32:41.155 [2024-07-24 23:18:13.565377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.155 [2024-07-24 23:18:13.565691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.155 [2024-07-24 23:18:13.565706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.155 qpair failed and we were unable to recover it. 00:32:41.155 [2024-07-24 23:18:13.566057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.155 [2024-07-24 23:18:13.566247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.155 [2024-07-24 23:18:13.566263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.155 qpair failed and we were unable to recover it. 00:32:41.155 [2024-07-24 23:18:13.566546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.155 [2024-07-24 23:18:13.566788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.155 [2024-07-24 23:18:13.566805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.155 qpair failed and we were unable to recover it. 00:32:41.155 [2024-07-24 23:18:13.567105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.155 [2024-07-24 23:18:13.567350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.155 [2024-07-24 23:18:13.567366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.155 qpair failed and we were unable to recover it. 
00:32:41.155 [2024-07-24 23:18:13.567598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.422 [2024-07-24 23:18:13.567776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.422 [2024-07-24 23:18:13.567792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.422 qpair failed and we were unable to recover it. 00:32:41.422 [2024-07-24 23:18:13.568031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.422 [2024-07-24 23:18:13.568356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.422 [2024-07-24 23:18:13.568371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.422 qpair failed and we were unable to recover it. 00:32:41.422 [2024-07-24 23:18:13.568724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.422 [2024-07-24 23:18:13.568986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.422 [2024-07-24 23:18:13.569003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.422 qpair failed and we were unable to recover it. 00:32:41.422 [2024-07-24 23:18:13.569323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.422 [2024-07-24 23:18:13.569501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.422 [2024-07-24 23:18:13.569517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.422 qpair failed and we were unable to recover it. 
00:32:41.422 [2024-07-24 23:18:13.569678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.422 [2024-07-24 23:18:13.569977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.422 [2024-07-24 23:18:13.569994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.422 qpair failed and we were unable to recover it. 00:32:41.422 [2024-07-24 23:18:13.570297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.422 [2024-07-24 23:18:13.570542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.422 [2024-07-24 23:18:13.570558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.422 qpair failed and we were unable to recover it. 00:32:41.422 [2024-07-24 23:18:13.570750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.422 [2024-07-24 23:18:13.571051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.422 [2024-07-24 23:18:13.571067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.422 qpair failed and we were unable to recover it. 00:32:41.422 [2024-07-24 23:18:13.571297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.422 [2024-07-24 23:18:13.571553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.422 [2024-07-24 23:18:13.571569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.422 qpair failed and we were unable to recover it. 
00:32:41.422 [2024-07-24 23:18:13.571817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.423 [2024-07-24 23:18:13.572053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.423 [2024-07-24 23:18:13.572069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.423 qpair failed and we were unable to recover it. 00:32:41.423 [2024-07-24 23:18:13.572265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.423 [2024-07-24 23:18:13.572561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.423 [2024-07-24 23:18:13.572577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.423 qpair failed and we were unable to recover it. 00:32:41.423 [2024-07-24 23:18:13.572827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.423 [2024-07-24 23:18:13.573088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.423 [2024-07-24 23:18:13.573104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.423 qpair failed and we were unable to recover it. 00:32:41.423 [2024-07-24 23:18:13.573269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.423 [2024-07-24 23:18:13.573574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.423 [2024-07-24 23:18:13.573590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.423 qpair failed and we were unable to recover it. 
00:32:41.423 [2024-07-24 23:18:13.573918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.423 [2024-07-24 23:18:13.574237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.423 [2024-07-24 23:18:13.574253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.423 qpair failed and we were unable to recover it. 00:32:41.423 [2024-07-24 23:18:13.574577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.423 [2024-07-24 23:18:13.574935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.423 [2024-07-24 23:18:13.574951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.423 qpair failed and we were unable to recover it. 00:32:41.423 [2024-07-24 23:18:13.575185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.423 [2024-07-24 23:18:13.575504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.423 [2024-07-24 23:18:13.575520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.423 qpair failed and we were unable to recover it. 00:32:41.423 [2024-07-24 23:18:13.575704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.423 [2024-07-24 23:18:13.576026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.423 [2024-07-24 23:18:13.576042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.423 qpair failed and we were unable to recover it. 
00:32:41.423 [2024-07-24 23:18:13.576338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.423 [2024-07-24 23:18:13.576514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.423 [2024-07-24 23:18:13.576530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.423 qpair failed and we were unable to recover it. 00:32:41.423 [2024-07-24 23:18:13.576824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.423 [2024-07-24 23:18:13.577071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.423 [2024-07-24 23:18:13.577087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.423 qpair failed and we were unable to recover it. 00:32:41.423 [2024-07-24 23:18:13.577333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.423 [2024-07-24 23:18:13.577574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.423 [2024-07-24 23:18:13.577590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.423 qpair failed and we were unable to recover it. 00:32:41.423 [2024-07-24 23:18:13.577844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.423 [2024-07-24 23:18:13.578108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.423 [2024-07-24 23:18:13.578124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.423 qpair failed and we were unable to recover it. 
00:32:41.423 [2024-07-24 23:18:13.578439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.423 [2024-07-24 23:18:13.578757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.423 [2024-07-24 23:18:13.578774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.423 qpair failed and we were unable to recover it. 00:32:41.423 [2024-07-24 23:18:13.579121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.423 [2024-07-24 23:18:13.579420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.423 [2024-07-24 23:18:13.579436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.423 qpair failed and we were unable to recover it. 00:32:41.423 [2024-07-24 23:18:13.579781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.423 [2024-07-24 23:18:13.580046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.423 [2024-07-24 23:18:13.580062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.423 qpair failed and we were unable to recover it. 00:32:41.423 [2024-07-24 23:18:13.580226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.423 [2024-07-24 23:18:13.580400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.423 [2024-07-24 23:18:13.580416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.423 qpair failed and we were unable to recover it. 
00:32:41.423 [2024-07-24 23:18:13.580670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.423 [2024-07-24 23:18:13.580907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.423 [2024-07-24 23:18:13.580924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.423 qpair failed and we were unable to recover it. 00:32:41.423 [2024-07-24 23:18:13.581246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.423 [2024-07-24 23:18:13.581472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.423 [2024-07-24 23:18:13.581490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.423 qpair failed and we were unable to recover it. 00:32:41.423 [2024-07-24 23:18:13.581767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.423 [2024-07-24 23:18:13.582007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.423 [2024-07-24 23:18:13.582023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.423 qpair failed and we were unable to recover it. 00:32:41.423 [2024-07-24 23:18:13.582325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.423 [2024-07-24 23:18:13.582554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.423 [2024-07-24 23:18:13.582570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.423 qpair failed and we were unable to recover it. 
00:32:41.423 [2024-07-24 23:18:13.582895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.423 [2024-07-24 23:18:13.583135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.423 [2024-07-24 23:18:13.583151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.423 qpair failed and we were unable to recover it. 00:32:41.423 [2024-07-24 23:18:13.583396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.423 [2024-07-24 23:18:13.583665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.423 [2024-07-24 23:18:13.583682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.423 qpair failed and we were unable to recover it. 00:32:41.423 [2024-07-24 23:18:13.584038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.423 [2024-07-24 23:18:13.584218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.423 [2024-07-24 23:18:13.584234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.423 qpair failed and we were unable to recover it. 00:32:41.423 [2024-07-24 23:18:13.584558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.423 [2024-07-24 23:18:13.584797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.423 [2024-07-24 23:18:13.584813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.423 qpair failed and we were unable to recover it. 
00:32:41.423 [2024-07-24 23:18:13.585119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.423 [2024-07-24 23:18:13.585346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.423 [2024-07-24 23:18:13.585362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.423 qpair failed and we were unable to recover it. 00:32:41.423 [2024-07-24 23:18:13.585711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.423 [2024-07-24 23:18:13.586017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.423 [2024-07-24 23:18:13.586033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.423 qpair failed and we were unable to recover it. 00:32:41.423 [2024-07-24 23:18:13.586352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.423 [2024-07-24 23:18:13.586677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.423 [2024-07-24 23:18:13.586693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.423 qpair failed and we were unable to recover it. 00:32:41.423 [2024-07-24 23:18:13.586943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.423 [2024-07-24 23:18:13.587142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.423 [2024-07-24 23:18:13.587157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.423 qpair failed and we were unable to recover it. 
00:32:41.423 [2024-07-24 23:18:13.587486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.424 [2024-07-24 23:18:13.587673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.424 [2024-07-24 23:18:13.587689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.424 qpair failed and we were unable to recover it. 00:32:41.424 [2024-07-24 23:18:13.587946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.424 [2024-07-24 23:18:13.588274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.424 [2024-07-24 23:18:13.588290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.424 qpair failed and we were unable to recover it. 00:32:41.424 [2024-07-24 23:18:13.588484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.424 [2024-07-24 23:18:13.588733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.424 [2024-07-24 23:18:13.588750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.424 qpair failed and we were unable to recover it. 00:32:41.424 [2024-07-24 23:18:13.589031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.424 [2024-07-24 23:18:13.589354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.424 [2024-07-24 23:18:13.589370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.424 qpair failed and we were unable to recover it. 
00:32:41.424 [2024-07-24 23:18:13.589604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.424 [2024-07-24 23:18:13.589919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.424 [2024-07-24 23:18:13.589936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.424 qpair failed and we were unable to recover it. 00:32:41.424 [2024-07-24 23:18:13.590261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.424 [2024-07-24 23:18:13.590574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.424 [2024-07-24 23:18:13.590590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.424 qpair failed and we were unable to recover it. 00:32:41.424 [2024-07-24 23:18:13.590844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.424 [2024-07-24 23:18:13.591069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.424 [2024-07-24 23:18:13.591085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.424 qpair failed and we were unable to recover it. 00:32:41.424 [2024-07-24 23:18:13.591199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.424 [2024-07-24 23:18:13.591514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.424 [2024-07-24 23:18:13.591530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.424 qpair failed and we were unable to recover it. 
00:32:41.424 [2024-07-24 23:18:13.591763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.424 [2024-07-24 23:18:13.592030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.424 [2024-07-24 23:18:13.592046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.424 qpair failed and we were unable to recover it. 00:32:41.424 [2024-07-24 23:18:13.592347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.424 [2024-07-24 23:18:13.592524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.424 [2024-07-24 23:18:13.592540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.424 qpair failed and we were unable to recover it. 00:32:41.424 [2024-07-24 23:18:13.592871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.424 [2024-07-24 23:18:13.593028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.424 [2024-07-24 23:18:13.593044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.424 qpair failed and we were unable to recover it. 00:32:41.424 [2024-07-24 23:18:13.593275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.424 [2024-07-24 23:18:13.593578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.424 [2024-07-24 23:18:13.593594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.424 qpair failed and we were unable to recover it. 
00:32:41.424 [2024-07-24 23:18:13.593898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.424 [2024-07-24 23:18:13.594139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.424 [2024-07-24 23:18:13.594155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.424 qpair failed and we were unable to recover it. 00:32:41.424 [2024-07-24 23:18:13.594475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.424 [2024-07-24 23:18:13.594707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.424 [2024-07-24 23:18:13.594735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.424 qpair failed and we were unable to recover it. 00:32:41.424 [2024-07-24 23:18:13.594901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.424 [2024-07-24 23:18:13.595162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.424 [2024-07-24 23:18:13.595178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.424 qpair failed and we were unable to recover it. 00:32:41.424 [2024-07-24 23:18:13.595432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.424 [2024-07-24 23:18:13.595746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.424 [2024-07-24 23:18:13.595763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.424 qpair failed and we were unable to recover it. 
00:32:41.424 [2024-07-24 23:18:13.596088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.424 [2024-07-24 23:18:13.596280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.424 [2024-07-24 23:18:13.596296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.424 qpair failed and we were unable to recover it. 00:32:41.424 [2024-07-24 23:18:13.596463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.424 [2024-07-24 23:18:13.596781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.424 [2024-07-24 23:18:13.596797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.424 qpair failed and we were unable to recover it. 00:32:41.424 [2024-07-24 23:18:13.597112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.424 [2024-07-24 23:18:13.597430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.424 [2024-07-24 23:18:13.597446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.424 qpair failed and we were unable to recover it. 00:32:41.424 [2024-07-24 23:18:13.597694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.424 [2024-07-24 23:18:13.597950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.424 [2024-07-24 23:18:13.597966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.424 qpair failed and we were unable to recover it. 
00:32:41.424 [2024-07-24 23:18:13.598225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.424 [2024-07-24 23:18:13.598475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.424 [2024-07-24 23:18:13.598491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.424 qpair failed and we were unable to recover it. 00:32:41.424 [2024-07-24 23:18:13.598727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.424 [2024-07-24 23:18:13.599039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.424 [2024-07-24 23:18:13.599055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.424 qpair failed and we were unable to recover it. 00:32:41.424 [2024-07-24 23:18:13.599324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.424 [2024-07-24 23:18:13.599565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.424 [2024-07-24 23:18:13.599581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.424 qpair failed and we were unable to recover it. 00:32:41.424 [2024-07-24 23:18:13.599814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.424 [2024-07-24 23:18:13.600005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.424 [2024-07-24 23:18:13.600021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.424 qpair failed and we were unable to recover it. 
00:32:41.424 [2024-07-24 23:18:13.600254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.424 [2024-07-24 23:18:13.600507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.424 [2024-07-24 23:18:13.600523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.424 qpair failed and we were unable to recover it. 00:32:41.424 [2024-07-24 23:18:13.600844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.424 [2024-07-24 23:18:13.601139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.424 [2024-07-24 23:18:13.601156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.424 qpair failed and we were unable to recover it. 00:32:41.424 [2024-07-24 23:18:13.601421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.424 [2024-07-24 23:18:13.601593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.424 [2024-07-24 23:18:13.601609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.424 qpair failed and we were unable to recover it. 00:32:41.424 [2024-07-24 23:18:13.601860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.424 [2024-07-24 23:18:13.602176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.424 [2024-07-24 23:18:13.602192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.424 qpair failed and we were unable to recover it. 
00:32:41.424 [2024-07-24 23:18:13.602438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.424 [2024-07-24 23:18:13.602664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.424 [2024-07-24 23:18:13.602680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.425 qpair failed and we were unable to recover it. 00:32:41.425 [2024-07-24 23:18:13.602983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.425 [2024-07-24 23:18:13.603243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.425 [2024-07-24 23:18:13.603259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.425 qpair failed and we were unable to recover it. 00:32:41.425 [2024-07-24 23:18:13.603559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.425 [2024-07-24 23:18:13.603807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.425 [2024-07-24 23:18:13.603823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.425 qpair failed and we were unable to recover it. 00:32:41.425 [2024-07-24 23:18:13.604087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.425 [2024-07-24 23:18:13.604250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.425 [2024-07-24 23:18:13.604265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.425 qpair failed and we were unable to recover it. 
00:32:41.425 [2024-07-24 23:18:13.604507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.425 [2024-07-24 23:18:13.604682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.425 [2024-07-24 23:18:13.604698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.425 qpair failed and we were unable to recover it. 00:32:41.425 [2024-07-24 23:18:13.605003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.425 [2024-07-24 23:18:13.605316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.425 [2024-07-24 23:18:13.605331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.425 qpair failed and we were unable to recover it. 00:32:41.425 [2024-07-24 23:18:13.605629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.425 [2024-07-24 23:18:13.605932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.425 [2024-07-24 23:18:13.605949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.425 qpair failed and we were unable to recover it. 00:32:41.425 [2024-07-24 23:18:13.606135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.425 [2024-07-24 23:18:13.606322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.425 [2024-07-24 23:18:13.606338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.425 qpair failed and we were unable to recover it. 
00:32:41.425 [2024-07-24 23:18:13.606609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.425 [2024-07-24 23:18:13.606862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.425 [2024-07-24 23:18:13.606878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.425 qpair failed and we were unable to recover it. 00:32:41.425 [2024-07-24 23:18:13.607040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.425 [2024-07-24 23:18:13.607282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.425 [2024-07-24 23:18:13.607298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.425 qpair failed and we were unable to recover it. 00:32:41.425 [2024-07-24 23:18:13.607621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.425 [2024-07-24 23:18:13.607863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.425 [2024-07-24 23:18:13.607880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.425 qpair failed and we were unable to recover it. 00:32:41.425 [2024-07-24 23:18:13.608180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.425 [2024-07-24 23:18:13.608357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.425 [2024-07-24 23:18:13.608373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.425 qpair failed and we were unable to recover it. 
00:32:41.425 [2024-07-24 23:18:13.608621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.425 [2024-07-24 23:18:13.608908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.425 [2024-07-24 23:18:13.608928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.425 qpair failed and we were unable to recover it. 00:32:41.425 [2024-07-24 23:18:13.609231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.425 [2024-07-24 23:18:13.609408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.425 [2024-07-24 23:18:13.609424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.425 qpair failed and we were unable to recover it. 00:32:41.425 [2024-07-24 23:18:13.609676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.425 [2024-07-24 23:18:13.609855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.425 [2024-07-24 23:18:13.609871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.425 qpair failed and we were unable to recover it. 00:32:41.425 [2024-07-24 23:18:13.610121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.425 [2024-07-24 23:18:13.610361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.425 [2024-07-24 23:18:13.610377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.425 qpair failed and we were unable to recover it. 
00:32:41.425 [2024-07-24 23:18:13.610571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.425 [2024-07-24 23:18:13.610888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.425 [2024-07-24 23:18:13.610904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.425 qpair failed and we were unable to recover it. 00:32:41.425 [2024-07-24 23:18:13.611158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.425 [2024-07-24 23:18:13.611343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.425 [2024-07-24 23:18:13.611359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.425 qpair failed and we were unable to recover it. 00:32:41.425 [2024-07-24 23:18:13.611680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.425 [2024-07-24 23:18:13.612004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.425 [2024-07-24 23:18:13.612020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.425 qpair failed and we were unable to recover it. 00:32:41.425 [2024-07-24 23:18:13.612321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.425 [2024-07-24 23:18:13.612492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.425 [2024-07-24 23:18:13.612508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.425 qpair failed and we were unable to recover it. 
00:32:41.425 [2024-07-24 23:18:13.612705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.425 [2024-07-24 23:18:13.612972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.425 [2024-07-24 23:18:13.612988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.425 qpair failed and we were unable to recover it. 00:32:41.425 [2024-07-24 23:18:13.613161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.425 [2024-07-24 23:18:13.613360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.425 [2024-07-24 23:18:13.613376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.425 qpair failed and we were unable to recover it. 00:32:41.425 [2024-07-24 23:18:13.613699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.425 [2024-07-24 23:18:13.614020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.425 [2024-07-24 23:18:13.614037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.425 qpair failed and we were unable to recover it. 00:32:41.425 [2024-07-24 23:18:13.614362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.425 [2024-07-24 23:18:13.614535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.425 [2024-07-24 23:18:13.614551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.425 qpair failed and we were unable to recover it. 
00:32:41.425 [2024-07-24 23:18:13.614912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.425 [2024-07-24 23:18:13.615251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.425 [2024-07-24 23:18:13.615267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.425 qpair failed and we were unable to recover it. 00:32:41.425 [2024-07-24 23:18:13.615463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.425 [2024-07-24 23:18:13.615708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.425 [2024-07-24 23:18:13.615730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.425 qpair failed and we were unable to recover it. 00:32:41.425 [2024-07-24 23:18:13.616058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.426 [2024-07-24 23:18:13.616282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.426 [2024-07-24 23:18:13.616298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.426 qpair failed and we were unable to recover it. 00:32:41.426 [2024-07-24 23:18:13.616540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.426 [2024-07-24 23:18:13.616772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.426 [2024-07-24 23:18:13.616789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.426 qpair failed and we were unable to recover it. 
00:32:41.426 [2024-07-24 23:18:13.617020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.426 [2024-07-24 23:18:13.617280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.426 [2024-07-24 23:18:13.617296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.426 qpair failed and we were unable to recover it. 00:32:41.426 [2024-07-24 23:18:13.617597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.426 [2024-07-24 23:18:13.617824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.426 [2024-07-24 23:18:13.617840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.426 qpair failed and we were unable to recover it. 00:32:41.426 [2024-07-24 23:18:13.618092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.426 [2024-07-24 23:18:13.618333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.426 [2024-07-24 23:18:13.618349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.426 qpair failed and we were unable to recover it. 00:32:41.426 [2024-07-24 23:18:13.618592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.426 [2024-07-24 23:18:13.618854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.426 [2024-07-24 23:18:13.618870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.426 qpair failed and we were unable to recover it. 
00:32:41.426 [2024-07-24 23:18:13.619173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.426 [2024-07-24 23:18:13.619435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.426 [2024-07-24 23:18:13.619451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.426 qpair failed and we were unable to recover it. 00:32:41.426 [2024-07-24 23:18:13.619755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.426 [2024-07-24 23:18:13.619986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.426 [2024-07-24 23:18:13.620001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.426 qpair failed and we were unable to recover it. 00:32:41.426 [2024-07-24 23:18:13.620246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.426 [2024-07-24 23:18:13.620521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.426 [2024-07-24 23:18:13.620537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.426 qpair failed and we were unable to recover it. 00:32:41.426 [2024-07-24 23:18:13.620838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.426 [2024-07-24 23:18:13.621133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.426 [2024-07-24 23:18:13.621149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.426 qpair failed and we were unable to recover it. 
00:32:41.426 [2024-07-24 23:18:13.621402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.426 [2024-07-24 23:18:13.621727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.426 [2024-07-24 23:18:13.621743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.426 qpair failed and we were unable to recover it. 00:32:41.426 [2024-07-24 23:18:13.621994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.426 [2024-07-24 23:18:13.622237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.426 [2024-07-24 23:18:13.622253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.426 qpair failed and we were unable to recover it. 00:32:41.426 [2024-07-24 23:18:13.622567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.426 [2024-07-24 23:18:13.622742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.426 [2024-07-24 23:18:13.622758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.426 qpair failed and we were unable to recover it. 00:32:41.426 [2024-07-24 23:18:13.623058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.426 [2024-07-24 23:18:13.623326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.426 [2024-07-24 23:18:13.623342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.426 qpair failed and we were unable to recover it. 
00:32:41.426 [2024-07-24 23:18:13.623607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.426 [2024-07-24 23:18:13.623947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.426 [2024-07-24 23:18:13.623964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.426 qpair failed and we were unable to recover it. 00:32:41.426 [2024-07-24 23:18:13.624159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.426 [2024-07-24 23:18:13.624435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.426 [2024-07-24 23:18:13.624451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.426 qpair failed and we were unable to recover it. 00:32:41.426 [2024-07-24 23:18:13.624629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.426 [2024-07-24 23:18:13.624900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.426 [2024-07-24 23:18:13.624916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.426 qpair failed and we were unable to recover it. 00:32:41.426 [2024-07-24 23:18:13.625177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.426 [2024-07-24 23:18:13.625338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.426 [2024-07-24 23:18:13.625354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.426 qpair failed and we were unable to recover it. 
00:32:41.426 [2024-07-24 23:18:13.625549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.426 [2024-07-24 23:18:13.625719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.426 [2024-07-24 23:18:13.625736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.426 qpair failed and we were unable to recover it. 00:32:41.426 [2024-07-24 23:18:13.625918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.426 [2024-07-24 23:18:13.626164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.426 [2024-07-24 23:18:13.626180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.426 qpair failed and we were unable to recover it. 00:32:41.426 [2024-07-24 23:18:13.626371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.426 [2024-07-24 23:18:13.626619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.426 [2024-07-24 23:18:13.626634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.426 qpair failed and we were unable to recover it. 00:32:41.426 [2024-07-24 23:18:13.626801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.426 [2024-07-24 23:18:13.627097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.426 [2024-07-24 23:18:13.627113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.426 qpair failed and we were unable to recover it. 
00:32:41.426 [2024-07-24 23:18:13.627456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.426 [2024-07-24 23:18:13.627694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.426 [2024-07-24 23:18:13.627711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.426 qpair failed and we were unable to recover it. 00:32:41.426 [2024-07-24 23:18:13.628024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.426 [2024-07-24 23:18:13.628328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.426 [2024-07-24 23:18:13.628344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.426 qpair failed and we were unable to recover it. 00:32:41.426 [2024-07-24 23:18:13.628531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.426 [2024-07-24 23:18:13.628846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.426 [2024-07-24 23:18:13.628862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.426 qpair failed and we were unable to recover it. 00:32:41.426 [2024-07-24 23:18:13.629050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.426 [2024-07-24 23:18:13.629304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.426 [2024-07-24 23:18:13.629320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.426 qpair failed and we were unable to recover it. 
00:32:41.426 [2024-07-24 23:18:13.629513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.426 [2024-07-24 23:18:13.629768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.426 [2024-07-24 23:18:13.629784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.426 qpair failed and we were unable to recover it. 00:32:41.426 [2024-07-24 23:18:13.630019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.426 [2024-07-24 23:18:13.630258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.426 [2024-07-24 23:18:13.630274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.426 qpair failed and we were unable to recover it. 00:32:41.426 [2024-07-24 23:18:13.630621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.426 [2024-07-24 23:18:13.630862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.427 [2024-07-24 23:18:13.630879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.427 qpair failed and we were unable to recover it. 00:32:41.427 [2024-07-24 23:18:13.631199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.427 [2024-07-24 23:18:13.631437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.427 [2024-07-24 23:18:13.631453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.427 qpair failed and we were unable to recover it. 
00:32:41.427 [2024-07-24 23:18:13.631647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.427 [2024-07-24 23:18:13.631888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.427 [2024-07-24 23:18:13.631904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.427 qpair failed and we were unable to recover it. 00:32:41.427 [2024-07-24 23:18:13.632222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.427 [2024-07-24 23:18:13.632542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.427 [2024-07-24 23:18:13.632558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.427 qpair failed and we were unable to recover it. 00:32:41.427 [2024-07-24 23:18:13.632862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.427 [2024-07-24 23:18:13.633157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.427 [2024-07-24 23:18:13.633173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.427 qpair failed and we were unable to recover it. 00:32:41.427 [2024-07-24 23:18:13.633492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.427 [2024-07-24 23:18:13.633809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.427 [2024-07-24 23:18:13.633826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:41.427 qpair failed and we were unable to recover it. 
00:32:41.427 [2024-07-24 23:18:13.634092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.427 [2024-07-24 23:18:13.634332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.427 [2024-07-24 23:18:13.634348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420
00:32:41.427 qpair failed and we were unable to recover it.
00:32:41.427 [... the same three-record cycle (two posix_sock_create connect() failures, errno = 111, followed by an nvme_tcp_qpair_connect_sock error and "qpair failed and we were unable to recover it.") repeats 87 more times between 23:18:13.634 and 23:18:13.679, against addr=10.0.0.2, port=4420, mostly for tqpair=0x2315f90 and occasionally tqpair=0x7ff8f0000b90 ...]
00:32:41.430 [2024-07-24 23:18:13.679438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.430 [2024-07-24 23:18:13.679721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.430 [2024-07-24 23:18:13.679738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.430 qpair failed and we were unable to recover it. 00:32:41.430 [2024-07-24 23:18:13.679951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.430 [2024-07-24 23:18:13.680114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.430 [2024-07-24 23:18:13.680130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.430 qpair failed and we were unable to recover it. 00:32:41.430 [2024-07-24 23:18:13.680431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.430 [2024-07-24 23:18:13.680607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.430 [2024-07-24 23:18:13.680622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.430 qpair failed and we were unable to recover it. 00:32:41.430 [2024-07-24 23:18:13.680810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.430 [2024-07-24 23:18:13.681063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.430 [2024-07-24 23:18:13.681078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.430 qpair failed and we were unable to recover it. 
00:32:41.430 [2024-07-24 23:18:13.681376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.430 [2024-07-24 23:18:13.681559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.430 [2024-07-24 23:18:13.681575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.430 qpair failed and we were unable to recover it. 00:32:41.430 [2024-07-24 23:18:13.681764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.430 [2024-07-24 23:18:13.682013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.430 [2024-07-24 23:18:13.682029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.430 qpair failed and we were unable to recover it. 00:32:41.430 [2024-07-24 23:18:13.682328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.430 [2024-07-24 23:18:13.682579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.430 [2024-07-24 23:18:13.682618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.430 qpair failed and we were unable to recover it. 00:32:41.431 [2024-07-24 23:18:13.683001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.431 [2024-07-24 23:18:13.683234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.431 [2024-07-24 23:18:13.683273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.431 qpair failed and we were unable to recover it. 
00:32:41.431 [2024-07-24 23:18:13.683576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.431 [2024-07-24 23:18:13.683835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.431 [2024-07-24 23:18:13.683851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.431 qpair failed and we were unable to recover it. 00:32:41.431 [2024-07-24 23:18:13.684107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.431 [2024-07-24 23:18:13.684407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.431 [2024-07-24 23:18:13.684446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.431 qpair failed and we were unable to recover it. 00:32:41.431 [2024-07-24 23:18:13.684807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.431 [2024-07-24 23:18:13.685088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.431 [2024-07-24 23:18:13.685127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.431 qpair failed and we were unable to recover it. 00:32:41.431 [2024-07-24 23:18:13.685411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.431 [2024-07-24 23:18:13.685698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.431 [2024-07-24 23:18:13.685762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.431 qpair failed and we were unable to recover it. 
00:32:41.431 [2024-07-24 23:18:13.686066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.431 [2024-07-24 23:18:13.686346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.431 [2024-07-24 23:18:13.686361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.431 qpair failed and we were unable to recover it. 00:32:41.431 [2024-07-24 23:18:13.686682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.431 [2024-07-24 23:18:13.686863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.431 [2024-07-24 23:18:13.686879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.431 qpair failed and we were unable to recover it. 00:32:41.431 [2024-07-24 23:18:13.687121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.431 [2024-07-24 23:18:13.687408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.431 [2024-07-24 23:18:13.687446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.431 qpair failed and we were unable to recover it. 00:32:41.431 [2024-07-24 23:18:13.687739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.431 [2024-07-24 23:18:13.687995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.431 [2024-07-24 23:18:13.688034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.431 qpair failed and we were unable to recover it. 
00:32:41.431 [2024-07-24 23:18:13.688315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.431 [2024-07-24 23:18:13.688674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.431 [2024-07-24 23:18:13.688690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.431 qpair failed and we were unable to recover it. 00:32:41.431 [2024-07-24 23:18:13.688869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.431 [2024-07-24 23:18:13.689103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.431 [2024-07-24 23:18:13.689119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.431 qpair failed and we were unable to recover it. 00:32:41.431 [2024-07-24 23:18:13.689307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.431 [2024-07-24 23:18:13.689546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.431 [2024-07-24 23:18:13.689561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.431 qpair failed and we were unable to recover it. 00:32:41.431 [2024-07-24 23:18:13.689806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.431 [2024-07-24 23:18:13.689968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.431 [2024-07-24 23:18:13.689984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.431 qpair failed and we were unable to recover it. 
00:32:41.431 [2024-07-24 23:18:13.690245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.431 [2024-07-24 23:18:13.690602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.431 [2024-07-24 23:18:13.690617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.431 qpair failed and we were unable to recover it. 00:32:41.431 [2024-07-24 23:18:13.690867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.431 [2024-07-24 23:18:13.691181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.431 [2024-07-24 23:18:13.691220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.431 qpair failed and we were unable to recover it. 00:32:41.431 [2024-07-24 23:18:13.691572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.431 [2024-07-24 23:18:13.691848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.431 [2024-07-24 23:18:13.691864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.431 qpair failed and we were unable to recover it. 00:32:41.431 [2024-07-24 23:18:13.692193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.431 [2024-07-24 23:18:13.692513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.431 [2024-07-24 23:18:13.692528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.431 qpair failed and we were unable to recover it. 
00:32:41.431 [2024-07-24 23:18:13.692776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.431 [2024-07-24 23:18:13.693072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.431 [2024-07-24 23:18:13.693088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.431 qpair failed and we were unable to recover it. 00:32:41.431 [2024-07-24 23:18:13.693356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.431 [2024-07-24 23:18:13.693671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.431 [2024-07-24 23:18:13.693687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.431 qpair failed and we were unable to recover it. 00:32:41.431 [2024-07-24 23:18:13.694035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.431 [2024-07-24 23:18:13.694334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.431 [2024-07-24 23:18:13.694373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.431 qpair failed and we were unable to recover it. 00:32:41.431 [2024-07-24 23:18:13.694682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.431 [2024-07-24 23:18:13.694905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.431 [2024-07-24 23:18:13.694945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.431 qpair failed and we were unable to recover it. 
00:32:41.431 [2024-07-24 23:18:13.695192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.431 [2024-07-24 23:18:13.695493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.431 [2024-07-24 23:18:13.695509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.431 qpair failed and we were unable to recover it. 00:32:41.431 [2024-07-24 23:18:13.695832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.431 [2024-07-24 23:18:13.696059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.431 [2024-07-24 23:18:13.696075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.431 qpair failed and we were unable to recover it. 00:32:41.431 [2024-07-24 23:18:13.696305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.431 [2024-07-24 23:18:13.696542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.431 [2024-07-24 23:18:13.696558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.431 qpair failed and we were unable to recover it. 00:32:41.431 [2024-07-24 23:18:13.696820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.431 [2024-07-24 23:18:13.697071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.431 [2024-07-24 23:18:13.697110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.431 qpair failed and we were unable to recover it. 
00:32:41.431 [2024-07-24 23:18:13.697404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.431 [2024-07-24 23:18:13.697673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.431 [2024-07-24 23:18:13.697689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.431 qpair failed and we were unable to recover it. 00:32:41.431 [2024-07-24 23:18:13.697933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.431 [2024-07-24 23:18:13.698116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.431 [2024-07-24 23:18:13.698132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.431 qpair failed and we were unable to recover it. 00:32:41.431 [2024-07-24 23:18:13.698373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.431 [2024-07-24 23:18:13.698723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.431 [2024-07-24 23:18:13.698739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.431 qpair failed and we were unable to recover it. 00:32:41.432 [2024-07-24 23:18:13.698936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.432 [2024-07-24 23:18:13.699192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.432 [2024-07-24 23:18:13.699207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.432 qpair failed and we were unable to recover it. 
00:32:41.432 [2024-07-24 23:18:13.699466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.432 [2024-07-24 23:18:13.699765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.432 [2024-07-24 23:18:13.699781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.432 qpair failed and we were unable to recover it. 00:32:41.432 [2024-07-24 23:18:13.700083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.432 [2024-07-24 23:18:13.700379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.432 [2024-07-24 23:18:13.700395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.432 qpair failed and we were unable to recover it. 00:32:41.432 [2024-07-24 23:18:13.700698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.432 [2024-07-24 23:18:13.701019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.432 [2024-07-24 23:18:13.701035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.432 qpair failed and we were unable to recover it. 00:32:41.432 [2024-07-24 23:18:13.701342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.432 [2024-07-24 23:18:13.701644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.432 [2024-07-24 23:18:13.701683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.432 qpair failed and we were unable to recover it. 
00:32:41.432 [2024-07-24 23:18:13.702030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.432 [2024-07-24 23:18:13.702242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.432 [2024-07-24 23:18:13.702280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.432 qpair failed and we were unable to recover it. 00:32:41.432 [2024-07-24 23:18:13.702624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.432 [2024-07-24 23:18:13.702867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.432 [2024-07-24 23:18:13.702883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.432 qpair failed and we were unable to recover it. 00:32:41.432 [2024-07-24 23:18:13.703198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.432 [2024-07-24 23:18:13.703438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.432 [2024-07-24 23:18:13.703454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.432 qpair failed and we were unable to recover it. 00:32:41.432 [2024-07-24 23:18:13.703726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.432 [2024-07-24 23:18:13.704097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.432 [2024-07-24 23:18:13.704112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.432 qpair failed and we were unable to recover it. 
00:32:41.432 [2024-07-24 23:18:13.704452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.432 [2024-07-24 23:18:13.704817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.432 [2024-07-24 23:18:13.704857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.432 qpair failed and we were unable to recover it. 00:32:41.432 [2024-07-24 23:18:13.705113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.432 [2024-07-24 23:18:13.705399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.432 [2024-07-24 23:18:13.705415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.432 qpair failed and we were unable to recover it. 00:32:41.432 [2024-07-24 23:18:13.705662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.432 [2024-07-24 23:18:13.705952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.432 [2024-07-24 23:18:13.705968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.432 qpair failed and we were unable to recover it. 00:32:41.432 [2024-07-24 23:18:13.706230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.432 [2024-07-24 23:18:13.706423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.432 [2024-07-24 23:18:13.706440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.432 qpair failed and we were unable to recover it. 
00:32:41.432 [2024-07-24 23:18:13.706739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.432 [2024-07-24 23:18:13.707066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.432 [2024-07-24 23:18:13.707082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.432 qpair failed and we were unable to recover it. 00:32:41.432 [2024-07-24 23:18:13.707322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.432 [2024-07-24 23:18:13.707557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.432 [2024-07-24 23:18:13.707596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.432 qpair failed and we were unable to recover it. 00:32:41.432 [2024-07-24 23:18:13.707830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.432 [2024-07-24 23:18:13.708024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.432 [2024-07-24 23:18:13.708040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.432 qpair failed and we were unable to recover it. 00:32:41.432 [2024-07-24 23:18:13.708239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.432 [2024-07-24 23:18:13.708412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.432 [2024-07-24 23:18:13.708451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.432 qpair failed and we were unable to recover it. 
00:32:41.432 [2024-07-24 23:18:13.708828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.432 [2024-07-24 23:18:13.709040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.432 [2024-07-24 23:18:13.709080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.432 qpair failed and we were unable to recover it. 00:32:41.432 [2024-07-24 23:18:13.709448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.432 [2024-07-24 23:18:13.709610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.432 [2024-07-24 23:18:13.709626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.432 qpair failed and we were unable to recover it. 00:32:41.432 [2024-07-24 23:18:13.709818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.432 [2024-07-24 23:18:13.710074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.432 [2024-07-24 23:18:13.710090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.432 qpair failed and we were unable to recover it. 00:32:41.432 [2024-07-24 23:18:13.710333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.432 [2024-07-24 23:18:13.710582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.432 [2024-07-24 23:18:13.710598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.432 qpair failed and we were unable to recover it. 
00:32:41.432 [2024-07-24 23:18:13.710844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.432 [2024-07-24 23:18:13.711077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.432 [2024-07-24 23:18:13.711093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.432 qpair failed and we were unable to recover it. 00:32:41.432 [2024-07-24 23:18:13.711298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.432 [2024-07-24 23:18:13.711553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.432 [2024-07-24 23:18:13.711569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.432 qpair failed and we were unable to recover it. 00:32:41.432 [2024-07-24 23:18:13.711747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.432 [2024-07-24 23:18:13.711958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.433 [2024-07-24 23:18:13.711998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.433 qpair failed and we were unable to recover it. 00:32:41.433 [2024-07-24 23:18:13.712348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.433 [2024-07-24 23:18:13.712635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.433 [2024-07-24 23:18:13.712674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.433 qpair failed and we were unable to recover it. 
00:32:41.433 [... the same failure cycle (posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111, then nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats without variation from 23:18:13.713056 through 23:18:13.763190 ...]
00:32:41.436 [2024-07-24 23:18:13.763497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.436 [2024-07-24 23:18:13.763789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.436 [2024-07-24 23:18:13.763829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.436 qpair failed and we were unable to recover it. 00:32:41.436 [2024-07-24 23:18:13.764180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.436 [2024-07-24 23:18:13.764547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.436 [2024-07-24 23:18:13.764586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.436 qpair failed and we were unable to recover it. 00:32:41.436 [2024-07-24 23:18:13.764815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.436 [2024-07-24 23:18:13.765186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.436 [2024-07-24 23:18:13.765225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.436 qpair failed and we were unable to recover it. 00:32:41.436 [2024-07-24 23:18:13.765557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.436 [2024-07-24 23:18:13.765700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.436 [2024-07-24 23:18:13.765753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.436 qpair failed and we were unable to recover it. 
00:32:41.436 [2024-07-24 23:18:13.766073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.436 [2024-07-24 23:18:13.766417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.436 [2024-07-24 23:18:13.766456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.436 qpair failed and we were unable to recover it. 00:32:41.436 [2024-07-24 23:18:13.766702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.436 [2024-07-24 23:18:13.766936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.436 [2024-07-24 23:18:13.766975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.436 qpair failed and we were unable to recover it. 00:32:41.436 [2024-07-24 23:18:13.767356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.436 [2024-07-24 23:18:13.767638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.436 [2024-07-24 23:18:13.767677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.436 qpair failed and we were unable to recover it. 00:32:41.436 [2024-07-24 23:18:13.768101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.436 [2024-07-24 23:18:13.768441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.436 [2024-07-24 23:18:13.768480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.436 qpair failed and we were unable to recover it. 
00:32:41.436 [2024-07-24 23:18:13.768829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.436 [2024-07-24 23:18:13.769179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.436 [2024-07-24 23:18:13.769217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.436 qpair failed and we were unable to recover it. 00:32:41.436 [2024-07-24 23:18:13.769505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.436 [2024-07-24 23:18:13.769786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.436 [2024-07-24 23:18:13.769825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.436 qpair failed and we were unable to recover it. 00:32:41.436 [2024-07-24 23:18:13.770118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.436 [2024-07-24 23:18:13.770477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.436 [2024-07-24 23:18:13.770516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.436 qpair failed and we were unable to recover it. 00:32:41.436 [2024-07-24 23:18:13.770800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.436 [2024-07-24 23:18:13.771047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.436 [2024-07-24 23:18:13.771087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.436 qpair failed and we were unable to recover it. 
00:32:41.436 [2024-07-24 23:18:13.771409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.436 [2024-07-24 23:18:13.771612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.436 [2024-07-24 23:18:13.771651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.436 qpair failed and we were unable to recover it. 00:32:41.436 [2024-07-24 23:18:13.771934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.436 [2024-07-24 23:18:13.772262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.436 [2024-07-24 23:18:13.772301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.436 qpair failed and we were unable to recover it. 00:32:41.436 [2024-07-24 23:18:13.772611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.436 [2024-07-24 23:18:13.772951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.436 [2024-07-24 23:18:13.772991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.436 qpair failed and we were unable to recover it. 00:32:41.436 [2024-07-24 23:18:13.773377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.436 [2024-07-24 23:18:13.773645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.436 [2024-07-24 23:18:13.773662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.436 qpair failed and we were unable to recover it. 
00:32:41.436 [2024-07-24 23:18:13.773856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.436 [2024-07-24 23:18:13.774182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.436 [2024-07-24 23:18:13.774221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.436 qpair failed and we were unable to recover it. 00:32:41.436 [2024-07-24 23:18:13.774608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.436 [2024-07-24 23:18:13.774946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.436 [2024-07-24 23:18:13.774986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.436 qpair failed and we were unable to recover it. 00:32:41.436 [2024-07-24 23:18:13.775395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.436 [2024-07-24 23:18:13.775639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.436 [2024-07-24 23:18:13.775678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.436 qpair failed and we were unable to recover it. 00:32:41.436 [2024-07-24 23:18:13.776017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.436 [2024-07-24 23:18:13.776246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.436 [2024-07-24 23:18:13.776285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.436 qpair failed and we were unable to recover it. 
00:32:41.436 [2024-07-24 23:18:13.776661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.436 [2024-07-24 23:18:13.777043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.436 [2024-07-24 23:18:13.777083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.436 qpair failed and we were unable to recover it. 00:32:41.436 [2024-07-24 23:18:13.777462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.436 [2024-07-24 23:18:13.777805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.436 [2024-07-24 23:18:13.777845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.436 qpair failed and we were unable to recover it. 00:32:41.436 [2024-07-24 23:18:13.778149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.436 [2024-07-24 23:18:13.778366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.436 [2024-07-24 23:18:13.778405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.436 qpair failed and we were unable to recover it. 00:32:41.436 [2024-07-24 23:18:13.778779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.436 [2024-07-24 23:18:13.779145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.436 [2024-07-24 23:18:13.779184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.436 qpair failed and we were unable to recover it. 
00:32:41.436 [2024-07-24 23:18:13.779544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.436 [2024-07-24 23:18:13.779910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.436 [2024-07-24 23:18:13.779949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.436 qpair failed and we were unable to recover it. 00:32:41.436 [2024-07-24 23:18:13.780334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.436 [2024-07-24 23:18:13.780573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.436 [2024-07-24 23:18:13.780611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.436 qpair failed and we were unable to recover it. 00:32:41.436 [2024-07-24 23:18:13.780937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.436 [2024-07-24 23:18:13.781221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.436 [2024-07-24 23:18:13.781259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.437 qpair failed and we were unable to recover it. 00:32:41.437 [2024-07-24 23:18:13.781561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.437 [2024-07-24 23:18:13.781850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.437 [2024-07-24 23:18:13.781891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.437 qpair failed and we were unable to recover it. 
00:32:41.437 [2024-07-24 23:18:13.782235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.437 [2024-07-24 23:18:13.782508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.437 [2024-07-24 23:18:13.782547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.437 qpair failed and we were unable to recover it. 00:32:41.437 [2024-07-24 23:18:13.782957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.437 [2024-07-24 23:18:13.783200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.437 [2024-07-24 23:18:13.783239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.437 qpair failed and we were unable to recover it. 00:32:41.437 [2024-07-24 23:18:13.783556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.437 [2024-07-24 23:18:13.783792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.437 [2024-07-24 23:18:13.783832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.437 qpair failed and we were unable to recover it. 00:32:41.437 [2024-07-24 23:18:13.784116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.437 [2024-07-24 23:18:13.784324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.437 [2024-07-24 23:18:13.784363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.437 qpair failed and we were unable to recover it. 
00:32:41.437 [2024-07-24 23:18:13.784683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.437 [2024-07-24 23:18:13.784956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.437 [2024-07-24 23:18:13.784973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.437 qpair failed and we were unable to recover it. 00:32:41.437 [2024-07-24 23:18:13.785138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.437 [2024-07-24 23:18:13.785443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.437 [2024-07-24 23:18:13.785482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.437 qpair failed and we were unable to recover it. 00:32:41.437 [2024-07-24 23:18:13.785762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.437 [2024-07-24 23:18:13.785972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.437 [2024-07-24 23:18:13.785988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.437 qpair failed and we were unable to recover it. 00:32:41.437 [2024-07-24 23:18:13.786292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.437 [2024-07-24 23:18:13.786567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.437 [2024-07-24 23:18:13.786607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.437 qpair failed and we were unable to recover it. 
00:32:41.437 [2024-07-24 23:18:13.786940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.437 [2024-07-24 23:18:13.787104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.437 [2024-07-24 23:18:13.787120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.437 qpair failed and we were unable to recover it. 00:32:41.437 [2024-07-24 23:18:13.787385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.437 [2024-07-24 23:18:13.787620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.437 [2024-07-24 23:18:13.787660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.437 qpair failed and we were unable to recover it. 00:32:41.437 [2024-07-24 23:18:13.787924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.437 [2024-07-24 23:18:13.788217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.437 [2024-07-24 23:18:13.788256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.437 qpair failed and we were unable to recover it. 00:32:41.437 [2024-07-24 23:18:13.788614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.437 [2024-07-24 23:18:13.788927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.437 [2024-07-24 23:18:13.788967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.437 qpair failed and we were unable to recover it. 
00:32:41.437 [2024-07-24 23:18:13.789314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.437 [2024-07-24 23:18:13.789681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.437 [2024-07-24 23:18:13.789729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.437 qpair failed and we were unable to recover it. 00:32:41.437 [2024-07-24 23:18:13.789978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.437 [2024-07-24 23:18:13.790253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.437 [2024-07-24 23:18:13.790292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.437 qpair failed and we were unable to recover it. 00:32:41.437 [2024-07-24 23:18:13.790609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.437 [2024-07-24 23:18:13.790879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.437 [2024-07-24 23:18:13.790919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.437 qpair failed and we were unable to recover it. 00:32:41.437 [2024-07-24 23:18:13.791217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.437 [2024-07-24 23:18:13.791505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.437 [2024-07-24 23:18:13.791544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.437 qpair failed and we were unable to recover it. 
00:32:41.437 [2024-07-24 23:18:13.791851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.437 [2024-07-24 23:18:13.792076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.437 [2024-07-24 23:18:13.792115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.437 qpair failed and we were unable to recover it. 00:32:41.437 [2024-07-24 23:18:13.792470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.437 [2024-07-24 23:18:13.792697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.437 [2024-07-24 23:18:13.792745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.437 qpair failed and we were unable to recover it. 00:32:41.437 [2024-07-24 23:18:13.793096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.437 [2024-07-24 23:18:13.793309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.437 [2024-07-24 23:18:13.793348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.437 qpair failed and we were unable to recover it. 00:32:41.437 [2024-07-24 23:18:13.793748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.437 [2024-07-24 23:18:13.794093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.437 [2024-07-24 23:18:13.794132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.437 qpair failed and we were unable to recover it. 
00:32:41.437 [2024-07-24 23:18:13.794511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.437 [2024-07-24 23:18:13.794893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.437 [2024-07-24 23:18:13.794909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.437 qpair failed and we were unable to recover it. 00:32:41.437 [2024-07-24 23:18:13.795192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.437 [2024-07-24 23:18:13.795478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.437 [2024-07-24 23:18:13.795516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.437 qpair failed and we were unable to recover it. 00:32:41.437 [2024-07-24 23:18:13.795897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.438 [2024-07-24 23:18:13.796055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.438 [2024-07-24 23:18:13.796094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.438 qpair failed and we were unable to recover it. 00:32:41.438 [2024-07-24 23:18:13.796470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.438 [2024-07-24 23:18:13.796759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.438 [2024-07-24 23:18:13.796811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.438 qpair failed and we were unable to recover it. 
00:32:41.438 [2024-07-24 23:18:13.797054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.438 [2024-07-24 23:18:13.797357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.438 [2024-07-24 23:18:13.797396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.438 qpair failed and we were unable to recover it. 00:32:41.438 [2024-07-24 23:18:13.797703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.438 [2024-07-24 23:18:13.798051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.438 [2024-07-24 23:18:13.798090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.438 qpair failed and we were unable to recover it. 00:32:41.438 [2024-07-24 23:18:13.798317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.438 [2024-07-24 23:18:13.798683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.438 [2024-07-24 23:18:13.798731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.438 qpair failed and we were unable to recover it. 00:32:41.438 [2024-07-24 23:18:13.799061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.438 [2024-07-24 23:18:13.799293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.438 [2024-07-24 23:18:13.799331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.438 qpair failed and we were unable to recover it. 
00:32:41.438 [2024-07-24 23:18:13.799612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.438 [2024-07-24 23:18:13.799932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.438 [2024-07-24 23:18:13.799972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.438 qpair failed and we were unable to recover it. 00:32:41.438 [2024-07-24 23:18:13.800331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.438 [2024-07-24 23:18:13.800670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.438 [2024-07-24 23:18:13.800709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.438 qpair failed and we were unable to recover it. 00:32:41.438 [2024-07-24 23:18:13.801099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.438 [2024-07-24 23:18:13.801378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.438 [2024-07-24 23:18:13.801416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.438 qpair failed and we were unable to recover it. 00:32:41.438 [2024-07-24 23:18:13.801789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.438 [2024-07-24 23:18:13.802075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.438 [2024-07-24 23:18:13.802113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.438 qpair failed and we were unable to recover it. 
00:32:41.438 [2024-07-24 23:18:13.802470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.438 [2024-07-24 23:18:13.802828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.438 [2024-07-24 23:18:13.802867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.438 qpair failed and we were unable to recover it. 00:32:41.438 [2024-07-24 23:18:13.803192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.438 [2024-07-24 23:18:13.803349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.438 [2024-07-24 23:18:13.803388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.438 qpair failed and we were unable to recover it. 00:32:41.438 [2024-07-24 23:18:13.803631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.438 [2024-07-24 23:18:13.803931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.438 [2024-07-24 23:18:13.803948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.438 qpair failed and we were unable to recover it. 00:32:41.438 [2024-07-24 23:18:13.804190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.438 [2024-07-24 23:18:13.804535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.438 [2024-07-24 23:18:13.804573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.438 qpair failed and we were unable to recover it. 
00:32:41.438 [2024-07-24 23:18:13.804874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.438 [2024-07-24 23:18:13.805039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.438 [2024-07-24 23:18:13.805055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.438 qpair failed and we were unable to recover it. 00:32:41.438 [2024-07-24 23:18:13.805325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.438 [2024-07-24 23:18:13.805572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.438 [2024-07-24 23:18:13.805589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.438 qpair failed and we were unable to recover it. 00:32:41.438 [2024-07-24 23:18:13.805844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.438 [2024-07-24 23:18:13.806089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.438 [2024-07-24 23:18:13.806123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.438 qpair failed and we were unable to recover it. 00:32:41.438 [2024-07-24 23:18:13.806363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.438 [2024-07-24 23:18:13.806655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.438 [2024-07-24 23:18:13.806694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.438 qpair failed and we were unable to recover it. 
00:32:41.438 [2024-07-24 23:18:13.806943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.438 [2024-07-24 23:18:13.807219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.438 [2024-07-24 23:18:13.807258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.438 qpair failed and we were unable to recover it. 00:32:41.438 [2024-07-24 23:18:13.807634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.438 [2024-07-24 23:18:13.807971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.438 [2024-07-24 23:18:13.808001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.438 qpair failed and we were unable to recover it. 00:32:41.438 [2024-07-24 23:18:13.808255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.438 [2024-07-24 23:18:13.808528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.438 [2024-07-24 23:18:13.808567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.438 qpair failed and we were unable to recover it. 00:32:41.438 [2024-07-24 23:18:13.808832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.438 [2024-07-24 23:18:13.809136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.438 [2024-07-24 23:18:13.809175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.438 qpair failed and we were unable to recover it. 
00:32:41.438 [2024-07-24 23:18:13.809551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.438 [2024-07-24 23:18:13.809839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.438 [2024-07-24 23:18:13.809855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.438 qpair failed and we were unable to recover it. 00:32:41.438 [2024-07-24 23:18:13.810086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.438 [2024-07-24 23:18:13.810409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.438 [2024-07-24 23:18:13.810430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.438 qpair failed and we were unable to recover it. 00:32:41.438 [2024-07-24 23:18:13.810705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.438 [2024-07-24 23:18:13.810951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.438 [2024-07-24 23:18:13.810990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.438 qpair failed and we were unable to recover it. 00:32:41.438 [2024-07-24 23:18:13.811282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.438 [2024-07-24 23:18:13.811645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.438 [2024-07-24 23:18:13.811684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.438 qpair failed and we were unable to recover it. 
00:32:41.438 [2024-07-24 23:18:13.812070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.438 [2024-07-24 23:18:13.812389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.438 [2024-07-24 23:18:13.812428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.438 qpair failed and we were unable to recover it. 00:32:41.438 [2024-07-24 23:18:13.812758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.438 [2024-07-24 23:18:13.813113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.438 [2024-07-24 23:18:13.813129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.438 qpair failed and we were unable to recover it. 00:32:41.438 [2024-07-24 23:18:13.813378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.438 [2024-07-24 23:18:13.813548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.439 [2024-07-24 23:18:13.813564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.439 qpair failed and we were unable to recover it. 00:32:41.439 [2024-07-24 23:18:13.813834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.439 [2024-07-24 23:18:13.814125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.439 [2024-07-24 23:18:13.814164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.439 qpair failed and we were unable to recover it. 
00:32:41.439 [2024-07-24 23:18:13.814445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.439 [2024-07-24 23:18:13.814733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.439 [2024-07-24 23:18:13.814772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.439 qpair failed and we were unable to recover it. 00:32:41.439 [2024-07-24 23:18:13.815057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.439 [2024-07-24 23:18:13.815397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.439 [2024-07-24 23:18:13.815436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.439 qpair failed and we were unable to recover it. 00:32:41.439 [2024-07-24 23:18:13.815796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.439 [2024-07-24 23:18:13.816012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.439 [2024-07-24 23:18:13.816051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.439 qpair failed and we were unable to recover it. 00:32:41.439 [2024-07-24 23:18:13.816374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.439 [2024-07-24 23:18:13.816674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.439 [2024-07-24 23:18:13.816712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.439 qpair failed and we were unable to recover it. 
00:32:41.439 [2024-07-24 23:18:13.816901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.439 [2024-07-24 23:18:13.817085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.439 [2024-07-24 23:18:13.817120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.439 qpair failed and we were unable to recover it. 00:32:41.439 [2024-07-24 23:18:13.817351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.439 [2024-07-24 23:18:13.817643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.439 [2024-07-24 23:18:13.817684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.439 qpair failed and we were unable to recover it. 00:32:41.439 [2024-07-24 23:18:13.817956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.439 [2024-07-24 23:18:13.818275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.439 [2024-07-24 23:18:13.818313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.439 qpair failed and we were unable to recover it. 00:32:41.439 [2024-07-24 23:18:13.818728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.439 [2024-07-24 23:18:13.818955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.439 [2024-07-24 23:18:13.818994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.439 qpair failed and we were unable to recover it. 
00:32:41.439 [2024-07-24 23:18:13.819314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.439 [2024-07-24 23:18:13.819601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.439 [2024-07-24 23:18:13.819640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.439 qpair failed and we were unable to recover it. 00:32:41.439 [2024-07-24 23:18:13.819887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.439 [2024-07-24 23:18:13.820250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.439 [2024-07-24 23:18:13.820288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.439 qpair failed and we were unable to recover it. 00:32:41.439 [2024-07-24 23:18:13.820598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.439 [2024-07-24 23:18:13.820994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.439 [2024-07-24 23:18:13.821034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.439 qpair failed and we were unable to recover it. 00:32:41.439 [2024-07-24 23:18:13.821340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.439 [2024-07-24 23:18:13.821562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.439 [2024-07-24 23:18:13.821600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.439 qpair failed and we were unable to recover it. 
00:32:41.439 [2024-07-24 23:18:13.821829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.439 [2024-07-24 23:18:13.822121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.439 [2024-07-24 23:18:13.822160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.439 qpair failed and we were unable to recover it. 00:32:41.439 [2024-07-24 23:18:13.822536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.439 [2024-07-24 23:18:13.822898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.439 [2024-07-24 23:18:13.822937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.439 qpair failed and we were unable to recover it. 00:32:41.439 [2024-07-24 23:18:13.823263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.439 [2024-07-24 23:18:13.823537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.439 [2024-07-24 23:18:13.823577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.439 qpair failed and we were unable to recover it. 00:32:41.439 [2024-07-24 23:18:13.823775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.439 [2024-07-24 23:18:13.823949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.439 [2024-07-24 23:18:13.823988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.439 qpair failed and we were unable to recover it. 
00:32:41.439 [2024-07-24 23:18:13.824290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.439 [2024-07-24 23:18:13.824560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.439 [2024-07-24 23:18:13.824599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.439 qpair failed and we were unable to recover it. 00:32:41.439 [2024-07-24 23:18:13.824938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.439 [2024-07-24 23:18:13.825223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.439 [2024-07-24 23:18:13.825263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.439 qpair failed and we were unable to recover it. 00:32:41.439 [2024-07-24 23:18:13.825560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.439 [2024-07-24 23:18:13.825897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.439 [2024-07-24 23:18:13.825914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.439 qpair failed and we were unable to recover it. 00:32:41.439 [2024-07-24 23:18:13.826241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.439 [2024-07-24 23:18:13.826534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.439 [2024-07-24 23:18:13.826581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.439 qpair failed and we were unable to recover it. 
00:32:41.439 [2024-07-24 23:18:13.826827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.439 [2024-07-24 23:18:13.827074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.439 [2024-07-24 23:18:13.827089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.439 qpair failed and we were unable to recover it. 00:32:41.439 [2024-07-24 23:18:13.827332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.439 [2024-07-24 23:18:13.827630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.439 [2024-07-24 23:18:13.827668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.439 qpair failed and we were unable to recover it. 00:32:41.439 [2024-07-24 23:18:13.828033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.439 [2024-07-24 23:18:13.828440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.439 [2024-07-24 23:18:13.828478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.439 qpair failed and we were unable to recover it. 00:32:41.439 [2024-07-24 23:18:13.828737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.439 [2024-07-24 23:18:13.829108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.439 [2024-07-24 23:18:13.829149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.439 qpair failed and we were unable to recover it. 
00:32:41.439 [2024-07-24 23:18:13.829494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.439 [2024-07-24 23:18:13.829665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.439 [2024-07-24 23:18:13.829711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.439 qpair failed and we were unable to recover it. 00:32:41.439 [2024-07-24 23:18:13.829979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.439 [2024-07-24 23:18:13.830273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.439 [2024-07-24 23:18:13.830311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.439 qpair failed and we were unable to recover it. 00:32:41.439 [2024-07-24 23:18:13.830683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.439 [2024-07-24 23:18:13.830992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.440 [2024-07-24 23:18:13.831031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.440 qpair failed and we were unable to recover it. 00:32:41.440 [2024-07-24 23:18:13.831401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.440 [2024-07-24 23:18:13.831679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.440 [2024-07-24 23:18:13.831699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.440 qpair failed and we were unable to recover it. 
00:32:41.440 [2024-07-24 23:18:13.832033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.440 [2024-07-24 23:18:13.832318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.440 [2024-07-24 23:18:13.832356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.440 qpair failed and we were unable to recover it. 00:32:41.440 [2024-07-24 23:18:13.832654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.440 [2024-07-24 23:18:13.833006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.440 [2024-07-24 23:18:13.833045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.440 qpair failed and we were unable to recover it. 00:32:41.440 [2024-07-24 23:18:13.833327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.440 [2024-07-24 23:18:13.833669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.440 [2024-07-24 23:18:13.833708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.440 qpair failed and we were unable to recover it. 00:32:41.440 [2024-07-24 23:18:13.834014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.440 [2024-07-24 23:18:13.834205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.440 [2024-07-24 23:18:13.834221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.440 qpair failed and we were unable to recover it. 
00:32:41.440 [2024-07-24 23:18:13.834580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.440 [2024-07-24 23:18:13.834942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.440 [2024-07-24 23:18:13.834982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.440 qpair failed and we were unable to recover it. 00:32:41.440 [2024-07-24 23:18:13.835360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.440 [2024-07-24 23:18:13.835643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.440 [2024-07-24 23:18:13.835682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.440 qpair failed and we were unable to recover it. 00:32:41.440 [2024-07-24 23:18:13.835986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.440 [2024-07-24 23:18:13.836350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.440 [2024-07-24 23:18:13.836394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.440 qpair failed and we were unable to recover it. 00:32:41.440 [2024-07-24 23:18:13.836781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.440 [2024-07-24 23:18:13.837052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.440 [2024-07-24 23:18:13.837091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.440 qpair failed and we were unable to recover it. 
00:32:41.440 [2024-07-24 23:18:13.837243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.440 [2024-07-24 23:18:13.837585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.440 [2024-07-24 23:18:13.837624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.440 qpair failed and we were unable to recover it. 00:32:41.440 [2024-07-24 23:18:13.837953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.440 [2024-07-24 23:18:13.838318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.440 [2024-07-24 23:18:13.838358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.440 qpair failed and we were unable to recover it. 00:32:41.440 [2024-07-24 23:18:13.838752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.440 [2024-07-24 23:18:13.839060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.440 [2024-07-24 23:18:13.839076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.440 qpair failed and we were unable to recover it. 00:32:41.440 [2024-07-24 23:18:13.839402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.440 [2024-07-24 23:18:13.840257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.440 [2024-07-24 23:18:13.840289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.440 qpair failed and we were unable to recover it. 
00:32:41.440 [2024-07-24 23:18:13.840510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.440 [2024-07-24 23:18:13.840756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.440 [2024-07-24 23:18:13.840773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.440 qpair failed and we were unable to recover it. 00:32:41.440 [2024-07-24 23:18:13.841008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.440 [2024-07-24 23:18:13.841252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.440 [2024-07-24 23:18:13.841269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.440 qpair failed and we were unable to recover it. 00:32:41.440 [2024-07-24 23:18:13.841571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.440 [2024-07-24 23:18:13.841869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.440 [2024-07-24 23:18:13.841908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.440 qpair failed and we were unable to recover it. 00:32:41.440 [2024-07-24 23:18:13.842238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.440 [2024-07-24 23:18:13.842525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.440 [2024-07-24 23:18:13.842563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.440 qpair failed and we were unable to recover it. 
00:32:41.440 [2024-07-24 23:18:13.842930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.440 [2024-07-24 23:18:13.843204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.440 [2024-07-24 23:18:13.843223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.440 qpair failed and we were unable to recover it. 00:32:41.708 [2024-07-24 23:18:13.843550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.708 [2024-07-24 23:18:13.843736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.708 [2024-07-24 23:18:13.843759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.708 qpair failed and we were unable to recover it. 00:32:41.708 [2024-07-24 23:18:13.844056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.708 [2024-07-24 23:18:13.844294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.708 [2024-07-24 23:18:13.844309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.708 qpair failed and we were unable to recover it. 00:32:41.708 [2024-07-24 23:18:13.844634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.708 [2024-07-24 23:18:13.844961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.708 [2024-07-24 23:18:13.844978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.708 qpair failed and we were unable to recover it. 
00:32:41.708 [2024-07-24 23:18:13.845150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.708 [2024-07-24 23:18:13.845327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.708 [2024-07-24 23:18:13.845343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.708 qpair failed and we were unable to recover it. 00:32:41.708 [2024-07-24 23:18:13.845575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.708 [2024-07-24 23:18:13.845819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.708 [2024-07-24 23:18:13.845835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.708 qpair failed and we were unable to recover it. 00:32:41.708 [2024-07-24 23:18:13.846154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.708 [2024-07-24 23:18:13.846448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.708 [2024-07-24 23:18:13.846486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.708 qpair failed and we were unable to recover it. 00:32:41.708 [2024-07-24 23:18:13.846839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.708 [2024-07-24 23:18:13.847152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.708 [2024-07-24 23:18:13.847192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.708 qpair failed and we were unable to recover it. 
00:32:41.708 [2024-07-24 23:18:13.847472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.708 [2024-07-24 23:18:13.847663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.708 [2024-07-24 23:18:13.847678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.708 qpair failed and we were unable to recover it. 00:32:41.708 [2024-07-24 23:18:13.848028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.708 [2024-07-24 23:18:13.848384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.708 [2024-07-24 23:18:13.848423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.708 qpair failed and we were unable to recover it. 00:32:41.708 [2024-07-24 23:18:13.848708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.708 [2024-07-24 23:18:13.848950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.708 [2024-07-24 23:18:13.848995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.708 qpair failed and we were unable to recover it. 00:32:41.708 [2024-07-24 23:18:13.849320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.708 [2024-07-24 23:18:13.849478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.708 [2024-07-24 23:18:13.849516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.708 qpair failed and we were unable to recover it. 
00:32:41.708 [2024-07-24 23:18:13.849802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.708 [2024-07-24 23:18:13.850116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.708 [2024-07-24 23:18:13.850155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.708 qpair failed and we were unable to recover it.
00:32:41.708 [2024-07-24 23:18:13.850436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.708 [2024-07-24 23:18:13.850731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.708 [2024-07-24 23:18:13.850772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.708 qpair failed and we were unable to recover it.
00:32:41.708 [2024-07-24 23:18:13.851087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.708 [2024-07-24 23:18:13.851317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.708 [2024-07-24 23:18:13.851355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.708 qpair failed and we were unable to recover it.
00:32:41.708 [2024-07-24 23:18:13.851662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.708 [2024-07-24 23:18:13.852053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.708 [2024-07-24 23:18:13.852093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.708 qpair failed and we were unable to recover it.
00:32:41.708 [2024-07-24 23:18:13.852338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.708 [2024-07-24 23:18:13.852614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.709 [2024-07-24 23:18:13.852653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.709 qpair failed and we were unable to recover it.
00:32:41.709 [2024-07-24 23:18:13.853020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.709 [2024-07-24 23:18:13.853303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.709 [2024-07-24 23:18:13.853342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.709 qpair failed and we were unable to recover it.
00:32:41.709 [2024-07-24 23:18:13.853574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.709 [2024-07-24 23:18:13.853845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.709 [2024-07-24 23:18:13.853895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.709 qpair failed and we were unable to recover it.
00:32:41.709 [2024-07-24 23:18:13.854083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.709 [2024-07-24 23:18:13.854313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.709 [2024-07-24 23:18:13.854351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.709 qpair failed and we were unable to recover it.
00:32:41.709 [2024-07-24 23:18:13.854661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.709 [2024-07-24 23:18:13.854977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.709 [2024-07-24 23:18:13.855023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.709 qpair failed and we were unable to recover it.
00:32:41.709 [2024-07-24 23:18:13.855306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.709 [2024-07-24 23:18:13.855583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.709 [2024-07-24 23:18:13.855622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.709 qpair failed and we were unable to recover it.
00:32:41.709 [2024-07-24 23:18:13.856000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.709 [2024-07-24 23:18:13.856290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.709 [2024-07-24 23:18:13.856329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.709 qpair failed and we were unable to recover it.
00:32:41.709 [2024-07-24 23:18:13.856636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.709 [2024-07-24 23:18:13.856912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.709 [2024-07-24 23:18:13.856951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.709 qpair failed and we were unable to recover it.
00:32:41.709 [2024-07-24 23:18:13.857236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.709 [2024-07-24 23:18:13.857491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.709 [2024-07-24 23:18:13.857507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.709 qpair failed and we were unable to recover it.
00:32:41.709 [2024-07-24 23:18:13.857696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.709 [2024-07-24 23:18:13.857863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.709 [2024-07-24 23:18:13.857904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.709 qpair failed and we were unable to recover it.
00:32:41.709 [2024-07-24 23:18:13.858159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.709 [2024-07-24 23:18:13.858465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.709 [2024-07-24 23:18:13.858504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.709 qpair failed and we were unable to recover it.
00:32:41.709 [2024-07-24 23:18:13.858751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.709 [2024-07-24 23:18:13.858949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.709 [2024-07-24 23:18:13.858988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.709 qpair failed and we were unable to recover it.
00:32:41.709 [2024-07-24 23:18:13.859262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.709 [2024-07-24 23:18:13.859580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.709 [2024-07-24 23:18:13.859618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.709 qpair failed and we were unable to recover it.
00:32:41.709 [2024-07-24 23:18:13.859977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.709 [2024-07-24 23:18:13.860259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.709 [2024-07-24 23:18:13.860275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.709 qpair failed and we were unable to recover it.
00:32:41.709 [2024-07-24 23:18:13.860578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.709 [2024-07-24 23:18:13.860866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.709 [2024-07-24 23:18:13.860906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.709 qpair failed and we were unable to recover it.
00:32:41.709 [2024-07-24 23:18:13.861233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.709 [2024-07-24 23:18:13.861505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.709 [2024-07-24 23:18:13.861543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.709 qpair failed and we were unable to recover it.
00:32:41.709 [2024-07-24 23:18:13.861894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.709 [2024-07-24 23:18:13.862128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.709 [2024-07-24 23:18:13.862144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.709 qpair failed and we were unable to recover it.
00:32:41.709 [2024-07-24 23:18:13.862441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.709 [2024-07-24 23:18:13.862700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.709 [2024-07-24 23:18:13.862765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.709 qpair failed and we were unable to recover it.
00:32:41.709 [2024-07-24 23:18:13.863069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.709 [2024-07-24 23:18:13.863433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.709 [2024-07-24 23:18:13.863471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.709 qpair failed and we were unable to recover it.
00:32:41.709 [2024-07-24 23:18:13.863766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.709 [2024-07-24 23:18:13.863977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.709 [2024-07-24 23:18:13.864016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.709 qpair failed and we were unable to recover it.
00:32:41.709 [2024-07-24 23:18:13.864410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.709 [2024-07-24 23:18:13.864699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.709 [2024-07-24 23:18:13.864749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.709 qpair failed and we were unable to recover it.
00:32:41.709 [2024-07-24 23:18:13.864993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.709 [2024-07-24 23:18:13.865266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.709 [2024-07-24 23:18:13.865304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.709 qpair failed and we were unable to recover it.
00:32:41.709 [2024-07-24 23:18:13.865656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.709 [2024-07-24 23:18:13.866032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.709 [2024-07-24 23:18:13.866072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.709 qpair failed and we were unable to recover it.
00:32:41.709 [2024-07-24 23:18:13.866421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.709 [2024-07-24 23:18:13.866784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.709 [2024-07-24 23:18:13.866823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.709 qpair failed and we were unable to recover it.
00:32:41.709 [2024-07-24 23:18:13.867049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.709 [2024-07-24 23:18:13.867339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.709 [2024-07-24 23:18:13.867377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.709 qpair failed and we were unable to recover it.
00:32:41.709 [2024-07-24 23:18:13.867689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.709 [2024-07-24 23:18:13.867935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.709 [2024-07-24 23:18:13.867975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.709 qpair failed and we were unable to recover it.
00:32:41.709 [2024-07-24 23:18:13.868352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.709 [2024-07-24 23:18:13.868587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.709 [2024-07-24 23:18:13.868626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.709 qpair failed and we were unable to recover it.
00:32:41.709 [2024-07-24 23:18:13.869024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.709 [2024-07-24 23:18:13.869380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.709 [2024-07-24 23:18:13.869420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.709 qpair failed and we were unable to recover it.
00:32:41.709 [2024-07-24 23:18:13.869728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.709 [2024-07-24 23:18:13.870020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.709 [2024-07-24 23:18:13.870058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.710 qpair failed and we were unable to recover it.
00:32:41.710 [2024-07-24 23:18:13.870356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.710 [2024-07-24 23:18:13.870534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.710 [2024-07-24 23:18:13.870550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.710 qpair failed and we were unable to recover it.
00:32:41.710 [2024-07-24 23:18:13.870853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.710 [2024-07-24 23:18:13.871125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.710 [2024-07-24 23:18:13.871164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.710 qpair failed and we were unable to recover it.
00:32:41.710 [2024-07-24 23:18:13.871485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.710 [2024-07-24 23:18:13.871797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.710 [2024-07-24 23:18:13.871837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.710 qpair failed and we were unable to recover it.
00:32:41.710 [2024-07-24 23:18:13.872176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.710 [2024-07-24 23:18:13.872537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.710 [2024-07-24 23:18:13.872576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.710 qpair failed and we were unable to recover it.
00:32:41.710 [2024-07-24 23:18:13.872944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.710 [2024-07-24 23:18:13.873239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.710 [2024-07-24 23:18:13.873255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.710 qpair failed and we were unable to recover it.
00:32:41.710 [2024-07-24 23:18:13.873509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.710 [2024-07-24 23:18:13.873760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.710 [2024-07-24 23:18:13.873777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.710 qpair failed and we were unable to recover it.
00:32:41.710 [2024-07-24 23:18:13.873985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.710 [2024-07-24 23:18:13.874152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.710 [2024-07-24 23:18:13.874167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.710 qpair failed and we were unable to recover it.
00:32:41.710 [2024-07-24 23:18:13.874361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.710 [2024-07-24 23:18:13.874667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.710 [2024-07-24 23:18:13.874706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.710 qpair failed and we were unable to recover it.
00:32:41.710 [2024-07-24 23:18:13.875139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.710 [2024-07-24 23:18:13.875316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.710 [2024-07-24 23:18:13.875355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.710 qpair failed and we were unable to recover it.
00:32:41.710 [2024-07-24 23:18:13.875651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.710 [2024-07-24 23:18:13.875935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.710 [2024-07-24 23:18:13.875975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.710 qpair failed and we were unable to recover it.
00:32:41.710 [2024-07-24 23:18:13.876357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.710 [2024-07-24 23:18:13.876630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.710 [2024-07-24 23:18:13.876669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.710 qpair failed and we were unable to recover it.
00:32:41.710 [2024-07-24 23:18:13.877038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.710 [2024-07-24 23:18:13.877310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.710 [2024-07-24 23:18:13.877348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.710 qpair failed and we were unable to recover it.
00:32:41.710 [2024-07-24 23:18:13.877655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.710 [2024-07-24 23:18:13.878009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.710 [2024-07-24 23:18:13.878049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.710 qpair failed and we were unable to recover it.
00:32:41.710 [2024-07-24 23:18:13.878342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.710 [2024-07-24 23:18:13.878643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.710 [2024-07-24 23:18:13.878682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.710 qpair failed and we were unable to recover it.
00:32:41.710 [2024-07-24 23:18:13.879044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.710 [2024-07-24 23:18:13.879339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.710 [2024-07-24 23:18:13.879355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.710 qpair failed and we were unable to recover it.
00:32:41.710 [2024-07-24 23:18:13.879531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.710 [2024-07-24 23:18:13.879784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.710 [2024-07-24 23:18:13.879824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.710 qpair failed and we were unable to recover it.
00:32:41.710 [2024-07-24 23:18:13.880117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.710 [2024-07-24 23:18:13.880399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.710 [2024-07-24 23:18:13.880415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.710 qpair failed and we were unable to recover it.
00:32:41.710 [2024-07-24 23:18:13.880668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.710 [2024-07-24 23:18:13.880851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.710 [2024-07-24 23:18:13.880867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.710 qpair failed and we were unable to recover it.
00:32:41.710 [2024-07-24 23:18:13.881196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.710 [2024-07-24 23:18:13.881477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.710 [2024-07-24 23:18:13.881516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.710 qpair failed and we were unable to recover it.
00:32:41.710 [2024-07-24 23:18:13.881763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.710 [2024-07-24 23:18:13.882036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.710 [2024-07-24 23:18:13.882074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.710 qpair failed and we were unable to recover it.
00:32:41.710 [2024-07-24 23:18:13.882461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.710 [2024-07-24 23:18:13.882752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.710 [2024-07-24 23:18:13.882793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.710 qpair failed and we were unable to recover it.
00:32:41.710 [2024-07-24 23:18:13.883170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.710 [2024-07-24 23:18:13.883538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.710 [2024-07-24 23:18:13.883577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.710 qpair failed and we were unable to recover it.
00:32:41.710 [2024-07-24 23:18:13.883919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.710 [2024-07-24 23:18:13.884094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.710 [2024-07-24 23:18:13.884133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.710 qpair failed and we were unable to recover it.
00:32:41.710 [2024-07-24 23:18:13.884512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.710 [2024-07-24 23:18:13.884876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.710 [2024-07-24 23:18:13.884916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.710 qpair failed and we were unable to recover it.
00:32:41.710 [2024-07-24 23:18:13.885312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.710 [2024-07-24 23:18:13.885527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.710 [2024-07-24 23:18:13.885565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.710 qpair failed and we were unable to recover it.
00:32:41.710 [2024-07-24 23:18:13.885917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.710 [2024-07-24 23:18:13.886121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.710 [2024-07-24 23:18:13.886137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.710 qpair failed and we were unable to recover it.
00:32:41.710 [2024-07-24 23:18:13.886447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.711 [2024-07-24 23:18:13.886748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.711 [2024-07-24 23:18:13.886787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.711 qpair failed and we were unable to recover it.
00:32:41.711 [2024-07-24 23:18:13.887065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.711 [2024-07-24 23:18:13.887305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.711 [2024-07-24 23:18:13.887321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.711 qpair failed and we were unable to recover it.
00:32:41.711 [2024-07-24 23:18:13.887641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.711 [2024-07-24 23:18:13.887813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.711 [2024-07-24 23:18:13.887830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.711 qpair failed and we were unable to recover it.
00:32:41.711 [2024-07-24 23:18:13.888006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.711 [2024-07-24 23:18:13.888172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.711 [2024-07-24 23:18:13.888188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.711 qpair failed and we were unable to recover it.
00:32:41.711 [2024-07-24 23:18:13.888443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.711 [2024-07-24 23:18:13.888704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.711 [2024-07-24 23:18:13.888768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.711 qpair failed and we were unable to recover it.
00:32:41.711 [2024-07-24 23:18:13.889021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.711 [2024-07-24 23:18:13.889336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.711 [2024-07-24 23:18:13.889376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.711 qpair failed and we were unable to recover it.
00:32:41.711 [2024-07-24 23:18:13.889739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.711 [2024-07-24 23:18:13.890099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.711 [2024-07-24 23:18:13.890118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.711 qpair failed and we were unable to recover it.
00:32:41.711 [2024-07-24 23:18:13.890390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.711 [2024-07-24 23:18:13.890647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.711 [2024-07-24 23:18:13.890666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.711 qpair failed and we were unable to recover it.
00:32:41.711 [2024-07-24 23:18:13.890966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.711 [2024-07-24 23:18:13.891234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.711 [2024-07-24 23:18:13.891251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.711 qpair failed and we were unable to recover it.
00:32:41.711 [2024-07-24 23:18:13.891476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.711 [2024-07-24 23:18:13.891697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.711 [2024-07-24 23:18:13.891719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.711 qpair failed and we were unable to recover it.
00:32:41.711 [2024-07-24 23:18:13.891959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.711 [2024-07-24 23:18:13.892284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.711 [2024-07-24 23:18:13.892325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.711 qpair failed and we were unable to recover it.
00:32:41.711 [2024-07-24 23:18:13.892606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.711 [2024-07-24 23:18:13.892986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.711 [2024-07-24 23:18:13.893005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.711 qpair failed and we were unable to recover it.
00:32:41.711 [2024-07-24 23:18:13.893182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.711 [2024-07-24 23:18:13.893435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.711 [2024-07-24 23:18:13.893474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.711 qpair failed and we were unable to recover it.
00:32:41.711 [2024-07-24 23:18:13.893649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.711 [2024-07-24 23:18:13.893796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.711 [2024-07-24 23:18:13.893853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.711 qpair failed and we were unable to recover it.
00:32:41.711 [2024-07-24 23:18:13.894141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.711 [2024-07-24 23:18:13.894379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.711 [2024-07-24 23:18:13.894395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.711 qpair failed and we were unable to recover it.
00:32:41.711 [2024-07-24 23:18:13.894632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.711 [2024-07-24 23:18:13.894895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.711 [2024-07-24 23:18:13.894912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.711 qpair failed and we were unable to recover it.
00:32:41.711 [2024-07-24 23:18:13.895014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.711 [2024-07-24 23:18:13.895187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.711 [2024-07-24 23:18:13.895203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.711 qpair failed and we were unable to recover it.
00:32:41.711 [2024-07-24 23:18:13.895382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.711 [2024-07-24 23:18:13.895560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.711 [2024-07-24 23:18:13.895578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.711 qpair failed and we were unable to recover it.
00:32:41.711 [2024-07-24 23:18:13.895895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.711 [2024-07-24 23:18:13.896070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.711 [2024-07-24 23:18:13.896108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.711 qpair failed and we were unable to recover it.
00:32:41.711 [2024-07-24 23:18:13.896421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.711 [2024-07-24 23:18:13.896778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.711 [2024-07-24 23:18:13.896824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.711 qpair failed and we were unable to recover it.
00:32:41.711 [2024-07-24 23:18:13.897232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.711 [2024-07-24 23:18:13.897583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.711 [2024-07-24 23:18:13.897599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.711 qpair failed and we were unable to recover it.
00:32:41.711 [2024-07-24 23:18:13.897838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.711 [2024-07-24 23:18:13.898153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.711 [2024-07-24 23:18:13.898195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.711 qpair failed and we were unable to recover it.
00:32:41.711 [2024-07-24 23:18:13.898494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.711 [2024-07-24 23:18:13.898783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.711 [2024-07-24 23:18:13.898823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.711 qpair failed and we were unable to recover it.
00:32:41.711 [2024-07-24 23:18:13.899122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.711 [2024-07-24 23:18:13.899346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.711 [2024-07-24 23:18:13.899362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.711 qpair failed and we were unable to recover it.
00:32:41.711 [2024-07-24 23:18:13.899630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.711 [2024-07-24 23:18:13.899794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.711 [2024-07-24 23:18:13.899810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.711 qpair failed and we were unable to recover it.
00:32:41.711 [2024-07-24 23:18:13.900101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.711 [2024-07-24 23:18:13.900427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.711 [2024-07-24 23:18:13.900466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.711 qpair failed and we were unable to recover it.
00:32:41.711 [2024-07-24 23:18:13.900784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.711 [2024-07-24 23:18:13.901002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.711 [2024-07-24 23:18:13.901050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.711 qpair failed and we were unable to recover it.
00:32:41.711 [2024-07-24 23:18:13.901400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.711 [2024-07-24 23:18:13.901697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.711 [2024-07-24 23:18:13.901713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.711 qpair failed and we were unable to recover it.
00:32:41.711 [2024-07-24 23:18:13.902017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.711 [2024-07-24 23:18:13.902275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.711 [2024-07-24 23:18:13.902314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.712 qpair failed and we were unable to recover it.
00:32:41.712 [2024-07-24 23:18:13.902672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.712 [2024-07-24 23:18:13.903011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.712 [2024-07-24 23:18:13.903052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.712 qpair failed and we were unable to recover it.
00:32:41.712 [2024-07-24 23:18:13.903355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.712 [2024-07-24 23:18:13.903650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.712 [2024-07-24 23:18:13.903688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.712 qpair failed and we were unable to recover it.
00:32:41.712 [2024-07-24 23:18:13.903944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.712 [2024-07-24 23:18:13.904300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.712 [2024-07-24 23:18:13.904316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.712 qpair failed and we were unable to recover it.
00:32:41.712 [2024-07-24 23:18:13.904482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.712 [2024-07-24 23:18:13.904752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.712 [2024-07-24 23:18:13.904792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.712 qpair failed and we were unable to recover it.
00:32:41.712 [2024-07-24 23:18:13.905118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.712 [2024-07-24 23:18:13.905383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.712 [2024-07-24 23:18:13.905399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.712 qpair failed and we were unable to recover it.
00:32:41.712 [2024-07-24 23:18:13.905726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.712 [2024-07-24 23:18:13.905894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.712 [2024-07-24 23:18:13.905910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.712 qpair failed and we were unable to recover it.
00:32:41.712 [2024-07-24 23:18:13.906138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.712 [2024-07-24 23:18:13.906365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.712 [2024-07-24 23:18:13.906380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.712 qpair failed and we were unable to recover it.
00:32:41.712 [2024-07-24 23:18:13.906695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.712 [2024-07-24 23:18:13.906998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.712 [2024-07-24 23:18:13.907038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.712 qpair failed and we were unable to recover it.
00:32:41.712 [2024-07-24 23:18:13.907387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.712 [2024-07-24 23:18:13.907603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.712 [2024-07-24 23:18:13.907642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.712 qpair failed and we were unable to recover it.
00:32:41.712 [2024-07-24 23:18:13.907966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.712 [2024-07-24 23:18:13.908235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.712 [2024-07-24 23:18:13.908251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.712 qpair failed and we were unable to recover it.
00:32:41.712 [2024-07-24 23:18:13.908598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.712 [2024-07-24 23:18:13.908865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.712 [2024-07-24 23:18:13.908905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.712 qpair failed and we were unable to recover it.
00:32:41.712 [2024-07-24 23:18:13.909155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.712 [2024-07-24 23:18:13.909531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.712 [2024-07-24 23:18:13.909569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.712 qpair failed and we were unable to recover it.
00:32:41.712 [2024-07-24 23:18:13.909876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.712 [2024-07-24 23:18:13.910168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.712 [2024-07-24 23:18:13.910185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.712 qpair failed and we were unable to recover it.
00:32:41.712 [2024-07-24 23:18:13.910561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.712 [2024-07-24 23:18:13.910799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.712 [2024-07-24 23:18:13.910815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.712 qpair failed and we were unable to recover it.
00:32:41.712 [2024-07-24 23:18:13.911044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.712 [2024-07-24 23:18:13.911306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.712 [2024-07-24 23:18:13.911322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.712 qpair failed and we were unable to recover it.
00:32:41.712 [2024-07-24 23:18:13.911574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.712 [2024-07-24 23:18:13.911676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.712 [2024-07-24 23:18:13.911692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.712 qpair failed and we were unable to recover it.
00:32:41.712 [2024-07-24 23:18:13.911874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.712 [2024-07-24 23:18:13.912098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.712 [2024-07-24 23:18:13.912114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.712 qpair failed and we were unable to recover it.
00:32:41.712 [2024-07-24 23:18:13.912422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.712 [2024-07-24 23:18:13.912604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.712 [2024-07-24 23:18:13.912619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.712 qpair failed and we were unable to recover it.
00:32:41.712 [2024-07-24 23:18:13.912872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.712 [2024-07-24 23:18:13.913140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.712 [2024-07-24 23:18:13.913156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.712 qpair failed and we were unable to recover it.
00:32:41.712 [2024-07-24 23:18:13.913390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.712 [2024-07-24 23:18:13.913513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.712 [2024-07-24 23:18:13.913551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.712 qpair failed and we were unable to recover it.
00:32:41.712 [2024-07-24 23:18:13.913878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.712 [2024-07-24 23:18:13.914174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.712 [2024-07-24 23:18:13.914216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.712 qpair failed and we were unable to recover it.
00:32:41.712 [2024-07-24 23:18:13.914519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.712 [2024-07-24 23:18:13.914734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.712 [2024-07-24 23:18:13.914774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.712 qpair failed and we were unable to recover it.
00:32:41.712 [2024-07-24 23:18:13.915166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.712 [2024-07-24 23:18:13.915397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.712 [2024-07-24 23:18:13.915414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.712 qpair failed and we were unable to recover it.
00:32:41.712 [2024-07-24 23:18:13.915658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.712 [2024-07-24 23:18:13.915972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.712 [2024-07-24 23:18:13.915989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.712 qpair failed and we were unable to recover it.
00:32:41.712 [2024-07-24 23:18:13.916225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.712 [2024-07-24 23:18:13.916475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.712 [2024-07-24 23:18:13.916492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.712 qpair failed and we were unable to recover it.
00:32:41.712 [2024-07-24 23:18:13.916668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.712 [2024-07-24 23:18:13.916902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.712 [2024-07-24 23:18:13.916943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.713 qpair failed and we were unable to recover it.
00:32:41.713 [2024-07-24 23:18:13.917318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.713 [2024-07-24 23:18:13.917595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.713 [2024-07-24 23:18:13.917635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.713 qpair failed and we were unable to recover it.
00:32:41.713 [2024-07-24 23:18:13.918032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.713 [2024-07-24 23:18:13.918343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.713 [2024-07-24 23:18:13.918358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.713 qpair failed and we were unable to recover it.
00:32:41.713 [2024-07-24 23:18:13.918638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.713 [2024-07-24 23:18:13.918883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.713 [2024-07-24 23:18:13.918899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.713 qpair failed and we were unable to recover it.
00:32:41.713 [2024-07-24 23:18:13.919143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.713 [2024-07-24 23:18:13.919322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.713 [2024-07-24 23:18:13.919337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.713 qpair failed and we were unable to recover it.
00:32:41.713 [2024-07-24 23:18:13.919528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.713 [2024-07-24 23:18:13.919794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.713 [2024-07-24 23:18:13.919811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.713 qpair failed and we were unable to recover it.
00:32:41.713 [2024-07-24 23:18:13.920127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.713 [2024-07-24 23:18:13.920496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.713 [2024-07-24 23:18:13.920536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.713 qpair failed and we were unable to recover it.
00:32:41.713 [2024-07-24 23:18:13.920913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.713 [2024-07-24 23:18:13.921216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.713 [2024-07-24 23:18:13.921256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.713 qpair failed and we were unable to recover it.
00:32:41.713 [2024-07-24 23:18:13.921628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.713 [2024-07-24 23:18:13.921920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.713 [2024-07-24 23:18:13.921952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.713 qpair failed and we were unable to recover it.
00:32:41.713 [2024-07-24 23:18:13.922203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.713 [2024-07-24 23:18:13.922442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.713 [2024-07-24 23:18:13.922460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.713 qpair failed and we were unable to recover it.
00:32:41.713 [2024-07-24 23:18:13.922692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.713 [2024-07-24 23:18:13.922923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.713 [2024-07-24 23:18:13.922939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.713 qpair failed and we were unable to recover it.
00:32:41.713 [2024-07-24 23:18:13.923119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.713 [2024-07-24 23:18:13.923418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.713 [2024-07-24 23:18:13.923456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.713 qpair failed and we were unable to recover it.
00:32:41.713 [2024-07-24 23:18:13.923744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.713 [2024-07-24 23:18:13.924063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.713 [2024-07-24 23:18:13.924103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.713 qpair failed and we were unable to recover it.
00:32:41.713 [2024-07-24 23:18:13.924435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.713 [2024-07-24 23:18:13.924773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.713 [2024-07-24 23:18:13.924813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.713 qpair failed and we were unable to recover it.
00:32:41.713 [2024-07-24 23:18:13.925136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.713 [2024-07-24 23:18:13.925392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.713 [2024-07-24 23:18:13.925408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.713 qpair failed and we were unable to recover it.
00:32:41.713 [2024-07-24 23:18:13.925673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.713 [2024-07-24 23:18:13.925989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.713 [2024-07-24 23:18:13.926031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.713 qpair failed and we were unable to recover it.
00:32:41.713 [2024-07-24 23:18:13.926338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.713 [2024-07-24 23:18:13.926682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.713 [2024-07-24 23:18:13.926754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.713 qpair failed and we were unable to recover it.
00:32:41.713 [2024-07-24 23:18:13.927128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.713 [2024-07-24 23:18:13.927420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.713 [2024-07-24 23:18:13.927436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.714 qpair failed and we were unable to recover it.
00:32:41.714 [2024-07-24 23:18:13.927735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.714 [2024-07-24 23:18:13.928051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.714 [2024-07-24 23:18:13.928089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.714 qpair failed and we were unable to recover it.
00:32:41.714 [2024-07-24 23:18:13.928441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.714 [2024-07-24 23:18:13.928742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.714 [2024-07-24 23:18:13.928785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.714 qpair failed and we were unable to recover it.
00:32:41.714 [2024-07-24 23:18:13.929083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.714 [2024-07-24 23:18:13.929314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.714 [2024-07-24 23:18:13.929330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.714 qpair failed and we were unable to recover it.
00:32:41.714 [2024-07-24 23:18:13.929582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.714 [2024-07-24 23:18:13.929824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.714 [2024-07-24 23:18:13.929840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.714 qpair failed and we were unable to recover it.
00:32:41.714 [2024-07-24 23:18:13.930081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.714 [2024-07-24 23:18:13.930332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.714 [2024-07-24 23:18:13.930348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.714 qpair failed and we were unable to recover it.
00:32:41.714 [2024-07-24 23:18:13.930664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.714 [2024-07-24 23:18:13.930843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.714 [2024-07-24 23:18:13.930864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.714 qpair failed and we were unable to recover it.
00:32:41.714 [2024-07-24 23:18:13.931043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.714 [2024-07-24 23:18:13.931226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.714 [2024-07-24 23:18:13.931243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.714 qpair failed and we were unable to recover it.
00:32:41.714 [2024-07-24 23:18:13.931566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.714 [2024-07-24 23:18:13.931747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.714 [2024-07-24 23:18:13.931763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.714 qpair failed and we were unable to recover it.
00:32:41.714 [2024-07-24 23:18:13.932023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.714 [2024-07-24 23:18:13.932197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.714 [2024-07-24 23:18:13.932216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.714 qpair failed and we were unable to recover it.
00:32:41.714 [2024-07-24 23:18:13.932528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.714 [2024-07-24 23:18:13.932789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.714 [2024-07-24 23:18:13.932805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.714 qpair failed and we were unable to recover it.
00:32:41.714 [2024-07-24 23:18:13.932996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.714 [2024-07-24 23:18:13.933167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.714 [2024-07-24 23:18:13.933184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.714 qpair failed and we were unable to recover it.
00:32:41.714 [2024-07-24 23:18:13.933383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.714 [2024-07-24 23:18:13.933540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.714 [2024-07-24 23:18:13.933556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.714 qpair failed and we were unable to recover it.
00:32:41.714 [2024-07-24 23:18:13.933814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.714 [2024-07-24 23:18:13.934037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.714 [2024-07-24 23:18:13.934053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.714 qpair failed and we were unable to recover it.
00:32:41.714 [2024-07-24 23:18:13.934284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.714 [2024-07-24 23:18:13.934618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.714 [2024-07-24 23:18:13.934659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.714 qpair failed and we were unable to recover it. 00:32:41.714 [2024-07-24 23:18:13.934924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.714 [2024-07-24 23:18:13.935227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.714 [2024-07-24 23:18:13.935268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.714 qpair failed and we were unable to recover it. 00:32:41.714 [2024-07-24 23:18:13.935556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.714 [2024-07-24 23:18:13.935843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.714 [2024-07-24 23:18:13.935882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.714 qpair failed and we were unable to recover it. 00:32:41.714 [2024-07-24 23:18:13.936133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.714 [2024-07-24 23:18:13.936422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.714 [2024-07-24 23:18:13.936463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.714 qpair failed and we were unable to recover it. 
00:32:41.714 [2024-07-24 23:18:13.936696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.714 [2024-07-24 23:18:13.936934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.714 [2024-07-24 23:18:13.936989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.714 qpair failed and we were unable to recover it. 00:32:41.714 [2024-07-24 23:18:13.937277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.714 [2024-07-24 23:18:13.937511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.714 [2024-07-24 23:18:13.937530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.714 qpair failed and we were unable to recover it. 00:32:41.714 [2024-07-24 23:18:13.937778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.714 [2024-07-24 23:18:13.938018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.714 [2024-07-24 23:18:13.938035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.714 qpair failed and we were unable to recover it. 00:32:41.714 [2024-07-24 23:18:13.938284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.714 [2024-07-24 23:18:13.938529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.714 [2024-07-24 23:18:13.938545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.714 qpair failed and we were unable to recover it. 
00:32:41.714 [2024-07-24 23:18:13.938851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.714 [2024-07-24 23:18:13.939100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.714 [2024-07-24 23:18:13.939116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.714 qpair failed and we were unable to recover it. 00:32:41.714 [2024-07-24 23:18:13.939435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.714 [2024-07-24 23:18:13.939733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.714 [2024-07-24 23:18:13.939781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.714 qpair failed and we were unable to recover it. 00:32:41.714 [2024-07-24 23:18:13.940159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.714 [2024-07-24 23:18:13.940353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.714 [2024-07-24 23:18:13.940369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.714 qpair failed and we were unable to recover it. 00:32:41.714 [2024-07-24 23:18:13.940614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.714 [2024-07-24 23:18:13.940878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.714 [2024-07-24 23:18:13.940894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.714 qpair failed and we were unable to recover it. 
00:32:41.714 [2024-07-24 23:18:13.941136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.714 [2024-07-24 23:18:13.941380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.714 [2024-07-24 23:18:13.941419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.714 qpair failed and we were unable to recover it. 00:32:41.714 [2024-07-24 23:18:13.941789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.714 [2024-07-24 23:18:13.942060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.714 [2024-07-24 23:18:13.942076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.714 qpair failed and we were unable to recover it. 00:32:41.714 [2024-07-24 23:18:13.942336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.714 [2024-07-24 23:18:13.942499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.714 [2024-07-24 23:18:13.942515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.715 qpair failed and we were unable to recover it. 00:32:41.715 [2024-07-24 23:18:13.942828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.715 [2024-07-24 23:18:13.943018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.715 [2024-07-24 23:18:13.943037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.715 qpair failed and we were unable to recover it. 
00:32:41.715 [2024-07-24 23:18:13.943280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.715 [2024-07-24 23:18:13.943468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.715 [2024-07-24 23:18:13.943484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.715 qpair failed and we were unable to recover it. 00:32:41.715 [2024-07-24 23:18:13.943808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.715 [2024-07-24 23:18:13.944138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.715 [2024-07-24 23:18:13.944177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.715 qpair failed and we were unable to recover it. 00:32:41.715 [2024-07-24 23:18:13.944483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.715 [2024-07-24 23:18:13.944845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.715 [2024-07-24 23:18:13.944884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.715 qpair failed and we were unable to recover it. 00:32:41.715 [2024-07-24 23:18:13.945135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.715 [2024-07-24 23:18:13.945450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.715 [2024-07-24 23:18:13.945465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.715 qpair failed and we were unable to recover it. 
00:32:41.715 [2024-07-24 23:18:13.945734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.715 [2024-07-24 23:18:13.945999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.715 [2024-07-24 23:18:13.946015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.715 qpair failed and we were unable to recover it. 00:32:41.715 [2024-07-24 23:18:13.946192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.715 [2024-07-24 23:18:13.946424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.715 [2024-07-24 23:18:13.946463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.715 qpair failed and we were unable to recover it. 00:32:41.715 [2024-07-24 23:18:13.946768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.715 [2024-07-24 23:18:13.947124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.715 [2024-07-24 23:18:13.947140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.715 qpair failed and we were unable to recover it. 00:32:41.715 [2024-07-24 23:18:13.947372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.715 [2024-07-24 23:18:13.947604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.715 [2024-07-24 23:18:13.947620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.715 qpair failed and we were unable to recover it. 
00:32:41.715 [2024-07-24 23:18:13.947870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.715 [2024-07-24 23:18:13.948111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.715 [2024-07-24 23:18:13.948127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.715 qpair failed and we were unable to recover it. 00:32:41.715 [2024-07-24 23:18:13.948316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.715 [2024-07-24 23:18:13.948596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.715 [2024-07-24 23:18:13.948634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.715 qpair failed and we were unable to recover it. 00:32:41.715 [2024-07-24 23:18:13.948907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.715 [2024-07-24 23:18:13.949215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.715 [2024-07-24 23:18:13.949232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.715 qpair failed and we were unable to recover it. 00:32:41.715 [2024-07-24 23:18:13.949541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.715 [2024-07-24 23:18:13.949711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.715 [2024-07-24 23:18:13.949733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.715 qpair failed and we were unable to recover it. 
00:32:41.715 [2024-07-24 23:18:13.950049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.715 [2024-07-24 23:18:13.950279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.715 [2024-07-24 23:18:13.950295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.715 qpair failed and we were unable to recover it. 00:32:41.715 [2024-07-24 23:18:13.950612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.715 [2024-07-24 23:18:13.950912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.715 [2024-07-24 23:18:13.950953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.715 qpair failed and we were unable to recover it. 00:32:41.715 [2024-07-24 23:18:13.951268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.715 [2024-07-24 23:18:13.951509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.715 [2024-07-24 23:18:13.951525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.715 qpair failed and we were unable to recover it. 00:32:41.715 [2024-07-24 23:18:13.951833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.715 [2024-07-24 23:18:13.952105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.715 [2024-07-24 23:18:13.952121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.715 qpair failed and we were unable to recover it. 
00:32:41.715 [2024-07-24 23:18:13.952380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.715 [2024-07-24 23:18:13.952543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.715 [2024-07-24 23:18:13.952558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.715 qpair failed and we were unable to recover it. 00:32:41.715 [2024-07-24 23:18:13.952877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.715 [2024-07-24 23:18:13.953123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.715 [2024-07-24 23:18:13.953140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.715 qpair failed and we were unable to recover it. 00:32:41.715 [2024-07-24 23:18:13.953390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.715 [2024-07-24 23:18:13.953735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.715 [2024-07-24 23:18:13.953778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.715 qpair failed and we were unable to recover it. 00:32:41.715 [2024-07-24 23:18:13.954157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.715 [2024-07-24 23:18:13.954405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.715 [2024-07-24 23:18:13.954421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.715 qpair failed and we were unable to recover it. 
00:32:41.715 [2024-07-24 23:18:13.954758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.715 [2024-07-24 23:18:13.955056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.715 [2024-07-24 23:18:13.955096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.715 qpair failed and we were unable to recover it. 00:32:41.715 [2024-07-24 23:18:13.955465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.715 [2024-07-24 23:18:13.955742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.715 [2024-07-24 23:18:13.955782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.715 qpair failed and we were unable to recover it. 00:32:41.715 [2024-07-24 23:18:13.956103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.715 [2024-07-24 23:18:13.956377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.715 [2024-07-24 23:18:13.956415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.715 qpair failed and we were unable to recover it. 00:32:41.715 [2024-07-24 23:18:13.956799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.715 [2024-07-24 23:18:13.957037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.715 [2024-07-24 23:18:13.957075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.715 qpair failed and we were unable to recover it. 
00:32:41.715 [2024-07-24 23:18:13.957342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.715 [2024-07-24 23:18:13.957726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.715 [2024-07-24 23:18:13.957766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.715 qpair failed and we were unable to recover it. 00:32:41.715 [2024-07-24 23:18:13.958114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.715 [2024-07-24 23:18:13.958341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.715 [2024-07-24 23:18:13.958357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.715 qpair failed and we were unable to recover it. 00:32:41.715 [2024-07-24 23:18:13.958583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.715 [2024-07-24 23:18:13.958824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.715 [2024-07-24 23:18:13.958840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.715 qpair failed and we were unable to recover it. 00:32:41.715 [2024-07-24 23:18:13.959070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.716 [2024-07-24 23:18:13.959245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.716 [2024-07-24 23:18:13.959261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.716 qpair failed and we were unable to recover it. 
00:32:41.716 [2024-07-24 23:18:13.959592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.716 [2024-07-24 23:18:13.959820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.716 [2024-07-24 23:18:13.959837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.716 qpair failed and we were unable to recover it. 00:32:41.716 [2024-07-24 23:18:13.960078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.716 [2024-07-24 23:18:13.960382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.716 [2024-07-24 23:18:13.960421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.716 qpair failed and we were unable to recover it. 00:32:41.716 [2024-07-24 23:18:13.960646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.716 [2024-07-24 23:18:13.961003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.716 [2024-07-24 23:18:13.961043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.716 qpair failed and we were unable to recover it. 00:32:41.716 [2024-07-24 23:18:13.961277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.716 [2024-07-24 23:18:13.961641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.716 [2024-07-24 23:18:13.961680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.716 qpair failed and we were unable to recover it. 
00:32:41.716 [2024-07-24 23:18:13.961996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.716 [2024-07-24 23:18:13.962282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.716 [2024-07-24 23:18:13.962321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.716 qpair failed and we were unable to recover it. 00:32:41.716 [2024-07-24 23:18:13.962615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.716 [2024-07-24 23:18:13.962906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.716 [2024-07-24 23:18:13.962945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.716 qpair failed and we were unable to recover it. 00:32:41.716 [2024-07-24 23:18:13.963235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.716 [2024-07-24 23:18:13.963527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.716 [2024-07-24 23:18:13.963566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.716 qpair failed and we were unable to recover it. 00:32:41.716 [2024-07-24 23:18:13.963879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.716 [2024-07-24 23:18:13.964171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.716 [2024-07-24 23:18:13.964209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.716 qpair failed and we were unable to recover it. 
00:32:41.716 [2024-07-24 23:18:13.964509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.716 [2024-07-24 23:18:13.964783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.716 [2024-07-24 23:18:13.964822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.716 qpair failed and we were unable to recover it. 00:32:41.716 [2024-07-24 23:18:13.965134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.716 [2024-07-24 23:18:13.965402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.716 [2024-07-24 23:18:13.965441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.716 qpair failed and we were unable to recover it. 00:32:41.716 [2024-07-24 23:18:13.965756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.716 [2024-07-24 23:18:13.966041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.716 [2024-07-24 23:18:13.966080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.716 qpair failed and we were unable to recover it. 00:32:41.716 [2024-07-24 23:18:13.966360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.716 [2024-07-24 23:18:13.966514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.716 [2024-07-24 23:18:13.966553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.716 qpair failed and we were unable to recover it. 
00:32:41.716 [2024-07-24 23:18:13.966941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.716 [2024-07-24 23:18:13.967217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.716 [2024-07-24 23:18:13.967256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.716 qpair failed and we were unable to recover it. 00:32:41.716 [2024-07-24 23:18:13.967540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.716 [2024-07-24 23:18:13.967812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.716 [2024-07-24 23:18:13.967853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.716 qpair failed and we were unable to recover it. 00:32:41.716 [2024-07-24 23:18:13.968202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.716 [2024-07-24 23:18:13.968541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.716 [2024-07-24 23:18:13.968580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.716 qpair failed and we were unable to recover it. 00:32:41.716 [2024-07-24 23:18:13.968871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.716 [2024-07-24 23:18:13.969096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.716 [2024-07-24 23:18:13.969134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.716 qpair failed and we were unable to recover it. 
00:32:41.716 [2024-07-24 23:18:13.969454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.716 [2024-07-24 23:18:13.969793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.716 [2024-07-24 23:18:13.969831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.716 qpair failed and we were unable to recover it. 00:32:41.716 [2024-07-24 23:18:13.970180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.716 [2024-07-24 23:18:13.970530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.716 [2024-07-24 23:18:13.970569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.716 qpair failed and we were unable to recover it. 00:32:41.716 [2024-07-24 23:18:13.970872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.716 [2024-07-24 23:18:13.971236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.716 [2024-07-24 23:18:13.971274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.716 qpair failed and we were unable to recover it. 00:32:41.716 [2024-07-24 23:18:13.971648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.716 [2024-07-24 23:18:13.972020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.716 [2024-07-24 23:18:13.972059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.716 qpair failed and we were unable to recover it. 
00:32:41.716 [2024-07-24 23:18:13.972384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.716 [2024-07-24 23:18:13.972676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.716 [2024-07-24 23:18:13.972726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.716 qpair failed and we were unable to recover it. 00:32:41.716 [2024-07-24 23:18:13.973010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.716 [2024-07-24 23:18:13.973351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.716 [2024-07-24 23:18:13.973389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.716 qpair failed and we were unable to recover it. 00:32:41.716 [2024-07-24 23:18:13.973626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.716 [2024-07-24 23:18:13.973956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.716 [2024-07-24 23:18:13.973996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.716 qpair failed and we were unable to recover it. 00:32:41.716 [2024-07-24 23:18:13.974214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.716 [2024-07-24 23:18:13.974464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.716 [2024-07-24 23:18:13.974503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.716 qpair failed and we were unable to recover it. 
00:32:41.716 [2024-07-24 23:18:13.974791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.716 [2024-07-24 23:18:13.975135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.716 [2024-07-24 23:18:13.975174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.716 qpair failed and we were unable to recover it. 00:32:41.716 [2024-07-24 23:18:13.975552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.716 [2024-07-24 23:18:13.975842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.716 [2024-07-24 23:18:13.975882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.717 qpair failed and we were unable to recover it. 00:32:41.717 [2024-07-24 23:18:13.976239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.717 [2024-07-24 23:18:13.976628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.717 [2024-07-24 23:18:13.976666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.717 qpair failed and we were unable to recover it. 00:32:41.717 [2024-07-24 23:18:13.977028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.717 [2024-07-24 23:18:13.977372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.717 [2024-07-24 23:18:13.977411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.717 qpair failed and we were unable to recover it. 
00:32:41.717 [2024-07-24 23:18:13.977794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.717 [2024-07-24 23:18:13.978084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.717 [2024-07-24 23:18:13.978123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.717 qpair failed and we were unable to recover it.
00:32:41.717 [2024-07-24 23:18:13.978469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.717 [2024-07-24 23:18:13.978816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.717 [2024-07-24 23:18:13.978855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.717 qpair failed and we were unable to recover it.
00:32:41.717 [2024-07-24 23:18:13.979222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.717 [2024-07-24 23:18:13.979521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.717 [2024-07-24 23:18:13.979537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.717 qpair failed and we were unable to recover it.
00:32:41.717 [2024-07-24 23:18:13.979767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.717 [2024-07-24 23:18:13.980007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.717 [2024-07-24 23:18:13.980022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.717 qpair failed and we were unable to recover it.
00:32:41.717 [2024-07-24 23:18:13.980342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.717 [2024-07-24 23:18:13.980523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.717 [2024-07-24 23:18:13.980538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.717 qpair failed and we were unable to recover it.
00:32:41.717 [2024-07-24 23:18:13.980842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.717 [2024-07-24 23:18:13.981124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.717 [2024-07-24 23:18:13.981154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.717 qpair failed and we were unable to recover it.
00:32:41.717 [2024-07-24 23:18:13.981464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.717 [2024-07-24 23:18:13.981747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.717 [2024-07-24 23:18:13.981787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.717 qpair failed and we were unable to recover it.
00:32:41.717 [2024-07-24 23:18:13.982106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.717 [2024-07-24 23:18:13.982391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.717 [2024-07-24 23:18:13.982430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.717 qpair failed and we were unable to recover it.
00:32:41.717 [2024-07-24 23:18:13.982662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.717 [2024-07-24 23:18:13.983050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.717 [2024-07-24 23:18:13.983089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.717 qpair failed and we were unable to recover it.
00:32:41.717 [2024-07-24 23:18:13.983441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.717 [2024-07-24 23:18:13.983737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.717 [2024-07-24 23:18:13.983776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.717 qpair failed and we were unable to recover it.
00:32:41.717 [2024-07-24 23:18:13.984137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.717 [2024-07-24 23:18:13.984346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.717 [2024-07-24 23:18:13.984385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.717 qpair failed and we were unable to recover it.
00:32:41.717 [2024-07-24 23:18:13.984702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.717 [2024-07-24 23:18:13.984937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.717 [2024-07-24 23:18:13.984976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.717 qpair failed and we were unable to recover it.
00:32:41.717 [2024-07-24 23:18:13.985328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.717 [2024-07-24 23:18:13.985618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.717 [2024-07-24 23:18:13.985657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.717 qpair failed and we were unable to recover it.
00:32:41.717 [2024-07-24 23:18:13.986049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.717 [2024-07-24 23:18:13.986414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.717 [2024-07-24 23:18:13.986452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.717 qpair failed and we were unable to recover it.
00:32:41.717 [2024-07-24 23:18:13.986744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.717 [2024-07-24 23:18:13.987087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.717 [2024-07-24 23:18:13.987126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.717 qpair failed and we were unable to recover it.
00:32:41.717 [2024-07-24 23:18:13.987446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.717 [2024-07-24 23:18:13.987785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.717 [2024-07-24 23:18:13.987824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.717 qpair failed and we were unable to recover it.
00:32:41.717 [2024-07-24 23:18:13.988120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.717 [2024-07-24 23:18:13.988362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.717 [2024-07-24 23:18:13.988401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.717 qpair failed and we were unable to recover it.
00:32:41.717 [2024-07-24 23:18:13.988682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.717 [2024-07-24 23:18:13.988977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.717 [2024-07-24 23:18:13.989017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.717 qpair failed and we were unable to recover it.
00:32:41.717 [2024-07-24 23:18:13.989239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.717 [2024-07-24 23:18:13.989600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.717 [2024-07-24 23:18:13.989639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.717 qpair failed and we were unable to recover it.
00:32:41.717 [2024-07-24 23:18:13.989891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.717 [2024-07-24 23:18:13.990181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.717 [2024-07-24 23:18:13.990220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.717 qpair failed and we were unable to recover it.
00:32:41.717 [2024-07-24 23:18:13.990578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.717 [2024-07-24 23:18:13.990942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.717 [2024-07-24 23:18:13.990982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.717 qpair failed and we were unable to recover it.
00:32:41.717 [2024-07-24 23:18:13.991284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.717 [2024-07-24 23:18:13.991558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.717 [2024-07-24 23:18:13.991596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.717 qpair failed and we were unable to recover it.
00:32:41.717 [2024-07-24 23:18:13.991976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.717 [2024-07-24 23:18:13.992332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.717 [2024-07-24 23:18:13.992371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.717 qpair failed and we were unable to recover it.
00:32:41.717 [2024-07-24 23:18:13.992686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.717 [2024-07-24 23:18:13.992923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.717 [2024-07-24 23:18:13.992962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.717 qpair failed and we were unable to recover it.
00:32:41.717 [2024-07-24 23:18:13.993250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.717 [2024-07-24 23:18:13.993565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.717 [2024-07-24 23:18:13.993604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.717 qpair failed and we were unable to recover it.
00:32:41.717 [2024-07-24 23:18:13.993920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.717 [2024-07-24 23:18:13.994147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.717 [2024-07-24 23:18:13.994186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.717 qpair failed and we were unable to recover it.
00:32:41.717 [2024-07-24 23:18:13.994540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.718 [2024-07-24 23:18:13.994763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.718 [2024-07-24 23:18:13.994803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.718 qpair failed and we were unable to recover it.
00:32:41.718 [2024-07-24 23:18:13.995126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.718 [2024-07-24 23:18:13.995481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.718 [2024-07-24 23:18:13.995497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.718 qpair failed and we were unable to recover it.
00:32:41.718 [2024-07-24 23:18:13.995740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.718 [2024-07-24 23:18:13.995965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.718 [2024-07-24 23:18:13.995980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.718 qpair failed and we were unable to recover it.
00:32:41.718 [2024-07-24 23:18:13.996292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.718 [2024-07-24 23:18:13.996665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.718 [2024-07-24 23:18:13.996704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.718 qpair failed and we were unable to recover it.
00:32:41.718 [2024-07-24 23:18:13.996950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.718 [2024-07-24 23:18:13.997338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.718 [2024-07-24 23:18:13.997376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.718 qpair failed and we were unable to recover it.
00:32:41.718 [2024-07-24 23:18:13.997745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.718 [2024-07-24 23:18:13.998102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.718 [2024-07-24 23:18:13.998141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.718 qpair failed and we were unable to recover it.
00:32:41.718 [2024-07-24 23:18:13.998388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.718 [2024-07-24 23:18:13.998706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.718 [2024-07-24 23:18:13.998772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.718 qpair failed and we were unable to recover it.
00:32:41.718 [2024-07-24 23:18:13.999149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.718 [2024-07-24 23:18:13.999489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.718 [2024-07-24 23:18:13.999528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.718 qpair failed and we were unable to recover it.
00:32:41.718 [2024-07-24 23:18:13.999851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.718 [2024-07-24 23:18:14.000136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.718 [2024-07-24 23:18:14.000175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.718 qpair failed and we were unable to recover it.
00:32:41.718 [2024-07-24 23:18:14.000529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.718 [2024-07-24 23:18:14.000902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.718 [2024-07-24 23:18:14.000941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.718 qpair failed and we were unable to recover it.
00:32:41.718 [2024-07-24 23:18:14.001243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.718 [2024-07-24 23:18:14.001480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.718 [2024-07-24 23:18:14.001519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.718 qpair failed and we were unable to recover it.
00:32:41.718 [2024-07-24 23:18:14.001889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.718 [2024-07-24 23:18:14.002189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.718 [2024-07-24 23:18:14.002205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.718 qpair failed and we were unable to recover it.
00:32:41.718 [2024-07-24 23:18:14.002414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.718 [2024-07-24 23:18:14.002726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.718 [2024-07-24 23:18:14.002765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.718 qpair failed and we were unable to recover it.
00:32:41.718 [2024-07-24 23:18:14.003067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.718 [2024-07-24 23:18:14.003404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.718 [2024-07-24 23:18:14.003443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.718 qpair failed and we were unable to recover it.
00:32:41.718 [2024-07-24 23:18:14.003805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.718 [2024-07-24 23:18:14.004090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.718 [2024-07-24 23:18:14.004129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.718 qpair failed and we were unable to recover it.
00:32:41.718 [2024-07-24 23:18:14.004364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.718 [2024-07-24 23:18:14.004662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.718 [2024-07-24 23:18:14.004701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.718 qpair failed and we were unable to recover it.
00:32:41.718 [2024-07-24 23:18:14.005016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.718 [2024-07-24 23:18:14.005291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.718 [2024-07-24 23:18:14.005333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.718 qpair failed and we were unable to recover it.
00:32:41.718 [2024-07-24 23:18:14.005659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.718 [2024-07-24 23:18:14.005985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.718 [2024-07-24 23:18:14.006024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.718 qpair failed and we were unable to recover it.
00:32:41.718 [2024-07-24 23:18:14.006371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.718 [2024-07-24 23:18:14.006743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.718 [2024-07-24 23:18:14.006783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.718 qpair failed and we were unable to recover it.
00:32:41.718 [2024-07-24 23:18:14.006962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.718 [2024-07-24 23:18:14.007325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.718 [2024-07-24 23:18:14.007364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.718 qpair failed and we were unable to recover it.
00:32:41.718 [2024-07-24 23:18:14.007730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.718 [2024-07-24 23:18:14.008072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.718 [2024-07-24 23:18:14.008111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.718 qpair failed and we were unable to recover it.
00:32:41.718 [2024-07-24 23:18:14.008483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.718 [2024-07-24 23:18:14.008777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.718 [2024-07-24 23:18:14.008817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.718 qpair failed and we were unable to recover it.
00:32:41.718 [2024-07-24 23:18:14.009124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.718 [2024-07-24 23:18:14.009494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.718 [2024-07-24 23:18:14.009532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.718 qpair failed and we were unable to recover it.
00:32:41.718 [2024-07-24 23:18:14.009921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.718 [2024-07-24 23:18:14.010194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.718 [2024-07-24 23:18:14.010232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.718 qpair failed and we were unable to recover it.
00:32:41.718 [2024-07-24 23:18:14.010626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.718 [2024-07-24 23:18:14.010987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.719 [2024-07-24 23:18:14.011027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.719 qpair failed and we were unable to recover it.
00:32:41.719 [2024-07-24 23:18:14.011387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.719 [2024-07-24 23:18:14.011703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.719 [2024-07-24 23:18:14.011749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.719 qpair failed and we were unable to recover it.
00:32:41.719 [2024-07-24 23:18:14.012101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.719 [2024-07-24 23:18:14.012395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.719 [2024-07-24 23:18:14.012434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.719 qpair failed and we were unable to recover it.
00:32:41.719 [2024-07-24 23:18:14.012680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.719 [2024-07-24 23:18:14.013002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.719 [2024-07-24 23:18:14.013043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.719 qpair failed and we were unable to recover it.
00:32:41.719 [2024-07-24 23:18:14.013277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.719 [2024-07-24 23:18:14.013664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.719 [2024-07-24 23:18:14.013703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.719 qpair failed and we were unable to recover it.
00:32:41.719 [2024-07-24 23:18:14.013991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.719 [2024-07-24 23:18:14.014289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.719 [2024-07-24 23:18:14.014327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.719 qpair failed and we were unable to recover it.
00:32:41.719 [2024-07-24 23:18:14.014703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.719 [2024-07-24 23:18:14.015072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.719 [2024-07-24 23:18:14.015111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.719 qpair failed and we were unable to recover it.
00:32:41.719 [2024-07-24 23:18:14.015452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.719 [2024-07-24 23:18:14.015841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.719 [2024-07-24 23:18:14.015880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.719 qpair failed and we were unable to recover it.
00:32:41.719 [2024-07-24 23:18:14.016120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.719 [2024-07-24 23:18:14.016443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.719 [2024-07-24 23:18:14.016482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.719 qpair failed and we were unable to recover it.
00:32:41.719 [2024-07-24 23:18:14.016733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.719 [2024-07-24 23:18:14.017098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.719 [2024-07-24 23:18:14.017136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.719 qpair failed and we were unable to recover it.
00:32:41.719 [2024-07-24 23:18:14.017382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.719 [2024-07-24 23:18:14.017612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.719 [2024-07-24 23:18:14.017650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.719 qpair failed and we were unable to recover it.
00:32:41.719 [2024-07-24 23:18:14.017963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.719 [2024-07-24 23:18:14.018242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.719 [2024-07-24 23:18:14.018258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.719 qpair failed and we were unable to recover it.
00:32:41.719 [2024-07-24 23:18:14.018570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.719 [2024-07-24 23:18:14.018890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.719 [2024-07-24 23:18:14.018907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.719 qpair failed and we were unable to recover it.
00:32:41.719 [2024-07-24 23:18:14.019092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.719 [2024-07-24 23:18:14.019326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.719 [2024-07-24 23:18:14.019365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.719 qpair failed and we were unable to recover it.
00:32:41.719 [2024-07-24 23:18:14.019739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.719 [2024-07-24 23:18:14.019988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.719 [2024-07-24 23:18:14.020027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.719 qpair failed and we were unable to recover it.
00:32:41.719 [2024-07-24 23:18:14.020421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.719 [2024-07-24 23:18:14.020769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.719 [2024-07-24 23:18:14.020809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.719 qpair failed and we were unable to recover it.
00:32:41.719 [2024-07-24 23:18:14.021129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.719 [2024-07-24 23:18:14.021420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.719 [2024-07-24 23:18:14.021436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.719 qpair failed and we were unable to recover it.
00:32:41.719 [2024-07-24 23:18:14.021739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.719 [2024-07-24 23:18:14.022013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.719 [2024-07-24 23:18:14.022052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.719 qpair failed and we were unable to recover it.
00:32:41.719 [2024-07-24 23:18:14.022320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.719 [2024-07-24 23:18:14.022542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.719 [2024-07-24 23:18:14.022558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.719 qpair failed and we were unable to recover it.
00:32:41.719 [2024-07-24 23:18:14.022815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.719 [2024-07-24 23:18:14.023164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.719 [2024-07-24 23:18:14.023203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.719 qpair failed and we were unable to recover it.
00:32:41.719 [2024-07-24 23:18:14.023595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.719 [2024-07-24 23:18:14.023812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.719 [2024-07-24 23:18:14.023852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.719 qpair failed and we were unable to recover it.
00:32:41.719 [2024-07-24 23:18:14.024145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.719 [2024-07-24 23:18:14.024366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.719 [2024-07-24 23:18:14.024405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.719 qpair failed and we were unable to recover it.
00:32:41.719 [2024-07-24 23:18:14.024704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.719 [2024-07-24 23:18:14.024952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.719 [2024-07-24 23:18:14.024968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.719 qpair failed and we were unable to recover it.
00:32:41.719 [2024-07-24 23:18:14.025294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.719 [2024-07-24 23:18:14.025505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.719 [2024-07-24 23:18:14.025544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.719 qpair failed and we were unable to recover it.
00:32:41.719 [2024-07-24 23:18:14.025947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.719 [2024-07-24 23:18:14.026297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.719 [2024-07-24 23:18:14.026341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.719 qpair failed and we were unable to recover it.
00:32:41.719 [2024-07-24 23:18:14.026743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.719 [2024-07-24 23:18:14.027107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.719 [2024-07-24 23:18:14.027146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.719 qpair failed and we were unable to recover it.
00:32:41.719 [2024-07-24 23:18:14.027449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.719 [2024-07-24 23:18:14.027750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.719 [2024-07-24 23:18:14.027790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.719 qpair failed and we were unable to recover it.
00:32:41.719 [2024-07-24 23:18:14.028013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.719 [2024-07-24 23:18:14.028376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.719 [2024-07-24 23:18:14.028414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.719 qpair failed and we were unable to recover it.
00:32:41.719 [2024-07-24 23:18:14.028817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.719 [2024-07-24 23:18:14.029106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.719 [2024-07-24 23:18:14.029144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.719 qpair failed and we were unable to recover it.
00:32:41.720 [2024-07-24 23:18:14.029438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.720 [2024-07-24 23:18:14.029803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.720 [2024-07-24 23:18:14.029842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.720 qpair failed and we were unable to recover it.
00:32:41.720 [2024-07-24 23:18:14.030159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.720 [2024-07-24 23:18:14.030520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.720 [2024-07-24 23:18:14.030536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.720 qpair failed and we were unable to recover it.
00:32:41.720 [2024-07-24 23:18:14.030779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.720 [2024-07-24 23:18:14.031035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.720 [2024-07-24 23:18:14.031079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.720 qpair failed and we were unable to recover it.
00:32:41.720 [2024-07-24 23:18:14.031379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.720 [2024-07-24 23:18:14.031756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.720 [2024-07-24 23:18:14.031796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.720 qpair failed and we were unable to recover it.
00:32:41.720 [2024-07-24 23:18:14.032098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.720 [2024-07-24 23:18:14.032461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.720 [2024-07-24 23:18:14.032500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.720 qpair failed and we were unable to recover it.
00:32:41.720 [2024-07-24 23:18:14.032722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.720 [2024-07-24 23:18:14.033024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.720 [2024-07-24 23:18:14.033069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.720 qpair failed and we were unable to recover it.
00:32:41.720 [2024-07-24 23:18:14.033420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.720 [2024-07-24 23:18:14.033701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.720 [2024-07-24 23:18:14.033728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.720 qpair failed and we were unable to recover it.
00:32:41.720 [2024-07-24 23:18:14.034051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.720 [2024-07-24 23:18:14.034299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.720 [2024-07-24 23:18:14.034314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.720 qpair failed and we were unable to recover it.
00:32:41.720 [2024-07-24 23:18:14.034596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.720 [2024-07-24 23:18:14.034871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.720 [2024-07-24 23:18:14.034911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.720 qpair failed and we were unable to recover it.
00:32:41.720 [2024-07-24 23:18:14.035289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.720 [2024-07-24 23:18:14.035604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.720 [2024-07-24 23:18:14.035643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.720 qpair failed and we were unable to recover it.
00:32:41.720 [2024-07-24 23:18:14.036016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.720 [2024-07-24 23:18:14.036305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.720 [2024-07-24 23:18:14.036354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.720 qpair failed and we were unable to recover it.
00:32:41.720 [2024-07-24 23:18:14.036625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.720 [2024-07-24 23:18:14.036974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.720 [2024-07-24 23:18:14.037014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.720 qpair failed and we were unable to recover it.
00:32:41.720 [2024-07-24 23:18:14.037321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.720 [2024-07-24 23:18:14.037543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.720 [2024-07-24 23:18:14.037581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.720 qpair failed and we were unable to recover it.
00:32:41.720 [2024-07-24 23:18:14.037957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.720 [2024-07-24 23:18:14.038250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.720 [2024-07-24 23:18:14.038288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.720 qpair failed and we were unable to recover it.
00:32:41.720 [2024-07-24 23:18:14.038590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.720 [2024-07-24 23:18:14.038959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.720 [2024-07-24 23:18:14.038999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.720 qpair failed and we were unable to recover it.
00:32:41.720 [2024-07-24 23:18:14.039294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.720 [2024-07-24 23:18:14.039560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.720 [2024-07-24 23:18:14.039604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.720 qpair failed and we were unable to recover it.
00:32:41.720 [2024-07-24 23:18:14.039984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.720 [2024-07-24 23:18:14.040232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.720 [2024-07-24 23:18:14.040271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.720 qpair failed and we were unable to recover it.
00:32:41.720 [2024-07-24 23:18:14.040566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.720 [2024-07-24 23:18:14.040907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.720 [2024-07-24 23:18:14.040947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.720 qpair failed and we were unable to recover it.
00:32:41.720 [2024-07-24 23:18:14.041177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.720 [2024-07-24 23:18:14.041562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.720 [2024-07-24 23:18:14.041600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.720 qpair failed and we were unable to recover it.
00:32:41.720 [2024-07-24 23:18:14.041965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.720 [2024-07-24 23:18:14.042334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.720 [2024-07-24 23:18:14.042373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.720 qpair failed and we were unable to recover it.
00:32:41.720 [2024-07-24 23:18:14.042737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.720 [2024-07-24 23:18:14.043118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.720 [2024-07-24 23:18:14.043157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.720 qpair failed and we were unable to recover it.
00:32:41.720 [2024-07-24 23:18:14.043474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.720 [2024-07-24 23:18:14.043786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.720 [2024-07-24 23:18:14.043827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.720 qpair failed and we were unable to recover it.
00:32:41.720 [2024-07-24 23:18:14.044044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.720 [2024-07-24 23:18:14.044337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.720 [2024-07-24 23:18:14.044382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.720 qpair failed and we were unable to recover it.
00:32:41.720 [2024-07-24 23:18:14.044683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.720 [2024-07-24 23:18:14.045001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.720 [2024-07-24 23:18:14.045041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.720 qpair failed and we were unable to recover it.
00:32:41.720 [2024-07-24 23:18:14.045284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.720 [2024-07-24 23:18:14.045570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.720 [2024-07-24 23:18:14.045608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.720 qpair failed and we were unable to recover it.
00:32:41.720 [2024-07-24 23:18:14.045899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.720 [2024-07-24 23:18:14.046256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.720 [2024-07-24 23:18:14.046276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.720 qpair failed and we were unable to recover it.
00:32:41.720 [2024-07-24 23:18:14.046590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.720 [2024-07-24 23:18:14.046859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.720 [2024-07-24 23:18:14.046898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.720 qpair failed and we were unable to recover it.
00:32:41.720 [2024-07-24 23:18:14.047064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.720 [2024-07-24 23:18:14.047427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.720 [2024-07-24 23:18:14.047465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.720 qpair failed and we were unable to recover it.
00:32:41.720 [2024-07-24 23:18:14.047822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.720 [2024-07-24 23:18:14.048189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.721 [2024-07-24 23:18:14.048228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.721 qpair failed and we were unable to recover it.
00:32:41.721 [2024-07-24 23:18:14.048643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.721 [2024-07-24 23:18:14.048916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.721 [2024-07-24 23:18:14.048955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.721 qpair failed and we were unable to recover it.
00:32:41.721 [2024-07-24 23:18:14.049203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.721 [2024-07-24 23:18:14.049568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.721 [2024-07-24 23:18:14.049607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.721 qpair failed and we were unable to recover it.
00:32:41.721 [2024-07-24 23:18:14.049930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.721 [2024-07-24 23:18:14.050226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.721 [2024-07-24 23:18:14.050265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.721 qpair failed and we were unable to recover it.
00:32:41.721 [2024-07-24 23:18:14.050662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.721 [2024-07-24 23:18:14.051016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.721 [2024-07-24 23:18:14.051055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.721 qpair failed and we were unable to recover it.
00:32:41.721 [2024-07-24 23:18:14.051402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.721 [2024-07-24 23:18:14.051746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.721 [2024-07-24 23:18:14.051786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.721 qpair failed and we were unable to recover it.
00:32:41.721 [2024-07-24 23:18:14.052088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.721 [2024-07-24 23:18:14.052454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.721 [2024-07-24 23:18:14.052492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.721 qpair failed and we were unable to recover it.
00:32:41.721 [2024-07-24 23:18:14.052870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.721 [2024-07-24 23:18:14.053235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.721 [2024-07-24 23:18:14.053274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.721 qpair failed and we were unable to recover it.
00:32:41.721 [2024-07-24 23:18:14.053580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.721 [2024-07-24 23:18:14.053976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.721 [2024-07-24 23:18:14.054015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.721 qpair failed and we were unable to recover it.
00:32:41.721 [2024-07-24 23:18:14.054364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.721 [2024-07-24 23:18:14.054691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.721 [2024-07-24 23:18:14.054740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.721 qpair failed and we were unable to recover it.
00:32:41.721 [2024-07-24 23:18:14.055047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.721 [2024-07-24 23:18:14.055332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.721 [2024-07-24 23:18:14.055370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.721 qpair failed and we were unable to recover it.
00:32:41.721 [2024-07-24 23:18:14.055599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.721 [2024-07-24 23:18:14.055962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.721 [2024-07-24 23:18:14.056002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.721 qpair failed and we were unable to recover it.
00:32:41.721 [2024-07-24 23:18:14.056300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.721 [2024-07-24 23:18:14.056595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.721 [2024-07-24 23:18:14.056634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.721 qpair failed and we were unable to recover it.
00:32:41.721 [2024-07-24 23:18:14.056931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.721 [2024-07-24 23:18:14.057210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.721 [2024-07-24 23:18:14.057226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.721 qpair failed and we were unable to recover it.
00:32:41.721 [2024-07-24 23:18:14.057457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.721 [2024-07-24 23:18:14.057744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.721 [2024-07-24 23:18:14.057784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.721 qpair failed and we were unable to recover it.
00:32:41.721 [2024-07-24 23:18:14.058085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.721 [2024-07-24 23:18:14.058452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.721 [2024-07-24 23:18:14.058491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.721 qpair failed and we were unable to recover it.
00:32:41.721 [2024-07-24 23:18:14.058662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.721 [2024-07-24 23:18:14.059032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.721 [2024-07-24 23:18:14.059071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.721 qpair failed and we were unable to recover it.
00:32:41.721 [2024-07-24 23:18:14.059421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.721 [2024-07-24 23:18:14.059790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.721 [2024-07-24 23:18:14.059831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.721 qpair failed and we were unable to recover it.
00:32:41.721 [2024-07-24 23:18:14.060147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.721 [2024-07-24 23:18:14.060493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.721 [2024-07-24 23:18:14.060532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.721 qpair failed and we were unable to recover it.
00:32:41.721 [2024-07-24 23:18:14.060861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.721 [2024-07-24 23:18:14.061179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.721 [2024-07-24 23:18:14.061218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.721 qpair failed and we were unable to recover it.
00:32:41.721 [2024-07-24 23:18:14.061568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.721 [2024-07-24 23:18:14.061906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.721 [2024-07-24 23:18:14.061945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.721 qpair failed and we were unable to recover it.
00:32:41.721 [2024-07-24 23:18:14.062177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.721 [2024-07-24 23:18:14.062448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.721 [2024-07-24 23:18:14.062487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.721 qpair failed and we were unable to recover it.
00:32:41.721 [2024-07-24 23:18:14.062786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.721 [2024-07-24 23:18:14.063154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.721 [2024-07-24 23:18:14.063169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.721 qpair failed and we were unable to recover it.
00:32:41.721 [2024-07-24 23:18:14.063360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.721 [2024-07-24 23:18:14.063599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.721 [2024-07-24 23:18:14.063615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.721 qpair failed and we were unable to recover it.
00:32:41.721 [2024-07-24 23:18:14.063799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.721 [2024-07-24 23:18:14.064113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.721 [2024-07-24 23:18:14.064129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.721 qpair failed and we were unable to recover it.
00:32:41.721 [2024-07-24 23:18:14.064481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.721 [2024-07-24 23:18:14.064666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.721 [2024-07-24 23:18:14.064705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.721 qpair failed and we were unable to recover it.
00:32:41.721 [2024-07-24 23:18:14.065102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.721 [2024-07-24 23:18:14.065392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.721 [2024-07-24 23:18:14.065431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.721 qpair failed and we were unable to recover it.
00:32:41.721 [2024-07-24 23:18:14.065778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.721 [2024-07-24 23:18:14.065999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.721 [2024-07-24 23:18:14.066037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.721 qpair failed and we were unable to recover it.
00:32:41.721 [2024-07-24 23:18:14.066333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.721 [2024-07-24 23:18:14.066612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.721 [2024-07-24 23:18:14.066651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.721 qpair failed and we were unable to recover it.
00:32:41.722 [2024-07-24 23:18:14.066955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.722 [2024-07-24 23:18:14.067296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.722 [2024-07-24 23:18:14.067334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.722 qpair failed and we were unable to recover it.
00:32:41.722 [2024-07-24 23:18:14.067582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.722 [2024-07-24 23:18:14.067945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.722 [2024-07-24 23:18:14.067984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.722 qpair failed and we were unable to recover it.
00:32:41.722 [2024-07-24 23:18:14.068283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.722 [2024-07-24 23:18:14.068598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.722 [2024-07-24 23:18:14.068637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.722 qpair failed and we were unable to recover it.
00:32:41.722 [2024-07-24 23:18:14.068884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.722 [2024-07-24 23:18:14.069099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.722 [2024-07-24 23:18:14.069138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.722 qpair failed and we were unable to recover it.
00:32:41.722 [2024-07-24 23:18:14.069507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.722 [2024-07-24 23:18:14.069815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.722 [2024-07-24 23:18:14.069854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.722 qpair failed and we were unable to recover it.
00:32:41.722 [2024-07-24 23:18:14.070171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.722 [2024-07-24 23:18:14.070510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.722 [2024-07-24 23:18:14.070548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.722 qpair failed and we were unable to recover it.
00:32:41.722 [2024-07-24 23:18:14.070868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.722 [2024-07-24 23:18:14.071212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.722 [2024-07-24 23:18:14.071252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.722 qpair failed and we were unable to recover it.
00:32:41.722 [2024-07-24 23:18:14.071403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.722 [2024-07-24 23:18:14.071755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.722 [2024-07-24 23:18:14.071796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.722 qpair failed and we were unable to recover it.
00:32:41.722 [2024-07-24 23:18:14.072147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.722 [2024-07-24 23:18:14.072428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.722 [2024-07-24 23:18:14.072467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.722 qpair failed and we were unable to recover it. 00:32:41.722 [2024-07-24 23:18:14.072823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.722 [2024-07-24 23:18:14.073039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.722 [2024-07-24 23:18:14.073077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.722 qpair failed and we were unable to recover it. 00:32:41.722 [2024-07-24 23:18:14.073307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.722 [2024-07-24 23:18:14.073534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.722 [2024-07-24 23:18:14.073575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.722 qpair failed and we were unable to recover it. 00:32:41.722 [2024-07-24 23:18:14.073807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.722 [2024-07-24 23:18:14.074050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.722 [2024-07-24 23:18:14.074066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.722 qpair failed and we were unable to recover it. 
00:32:41.722 [2024-07-24 23:18:14.074306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.722 [2024-07-24 23:18:14.074493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.722 [2024-07-24 23:18:14.074509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.722 qpair failed and we were unable to recover it. 00:32:41.722 [2024-07-24 23:18:14.074684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.722 [2024-07-24 23:18:14.074875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.722 [2024-07-24 23:18:14.074914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.722 qpair failed and we were unable to recover it. 00:32:41.722 [2024-07-24 23:18:14.075282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.722 [2024-07-24 23:18:14.075646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.722 [2024-07-24 23:18:14.075685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.722 qpair failed and we were unable to recover it. 00:32:41.722 [2024-07-24 23:18:14.076004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.722 [2024-07-24 23:18:14.076364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.722 [2024-07-24 23:18:14.076402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.722 qpair failed and we were unable to recover it. 
00:32:41.722 [2024-07-24 23:18:14.076731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.722 [2024-07-24 23:18:14.076962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.722 [2024-07-24 23:18:14.077000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.722 qpair failed and we were unable to recover it. 00:32:41.722 [2024-07-24 23:18:14.077320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.722 [2024-07-24 23:18:14.077519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.722 [2024-07-24 23:18:14.077535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.722 qpair failed and we were unable to recover it. 00:32:41.722 [2024-07-24 23:18:14.077835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.722 [2024-07-24 23:18:14.078139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.722 [2024-07-24 23:18:14.078178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.722 qpair failed and we were unable to recover it. 00:32:41.722 [2024-07-24 23:18:14.078418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.722 [2024-07-24 23:18:14.078768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.722 [2024-07-24 23:18:14.078784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.722 qpair failed and we were unable to recover it. 
00:32:41.722 [2024-07-24 23:18:14.079095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.722 [2024-07-24 23:18:14.079319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.722 [2024-07-24 23:18:14.079359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.722 qpair failed and we were unable to recover it. 00:32:41.722 [2024-07-24 23:18:14.079735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.722 [2024-07-24 23:18:14.080040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.722 [2024-07-24 23:18:14.080078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.722 qpair failed and we were unable to recover it. 00:32:41.722 [2024-07-24 23:18:14.080478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.722 [2024-07-24 23:18:14.080843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.722 [2024-07-24 23:18:14.080882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.722 qpair failed and we were unable to recover it. 00:32:41.722 [2024-07-24 23:18:14.081264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.722 [2024-07-24 23:18:14.081552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.722 [2024-07-24 23:18:14.081568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.722 qpair failed and we were unable to recover it. 
00:32:41.722 [2024-07-24 23:18:14.081862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.722 [2024-07-24 23:18:14.082228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.723 [2024-07-24 23:18:14.082266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.723 qpair failed and we were unable to recover it. 00:32:41.723 [2024-07-24 23:18:14.082566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.723 [2024-07-24 23:18:14.082818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.723 [2024-07-24 23:18:14.082834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.723 qpair failed and we were unable to recover it. 00:32:41.723 [2024-07-24 23:18:14.083147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.723 [2024-07-24 23:18:14.083458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.723 [2024-07-24 23:18:14.083497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.723 qpair failed and we were unable to recover it. 00:32:41.723 [2024-07-24 23:18:14.083868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.723 [2024-07-24 23:18:14.084231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.723 [2024-07-24 23:18:14.084271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.723 qpair failed and we were unable to recover it. 
00:32:41.723 [2024-07-24 23:18:14.084588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.723 [2024-07-24 23:18:14.084928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.723 [2024-07-24 23:18:14.084968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.723 qpair failed and we were unable to recover it. 00:32:41.723 [2024-07-24 23:18:14.085209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.723 [2024-07-24 23:18:14.085372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.723 [2024-07-24 23:18:14.085410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.723 qpair failed and we were unable to recover it. 00:32:41.723 [2024-07-24 23:18:14.085723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.723 [2024-07-24 23:18:14.085882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.723 [2024-07-24 23:18:14.085897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.723 qpair failed and we were unable to recover it. 00:32:41.723 [2024-07-24 23:18:14.086224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.723 [2024-07-24 23:18:14.086581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.723 [2024-07-24 23:18:14.086620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.723 qpair failed and we were unable to recover it. 
00:32:41.723 [2024-07-24 23:18:14.086835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.723 [2024-07-24 23:18:14.087158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.723 [2024-07-24 23:18:14.087197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.723 qpair failed and we were unable to recover it. 00:32:41.723 [2024-07-24 23:18:14.087418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.723 [2024-07-24 23:18:14.087697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.723 [2024-07-24 23:18:14.087713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.723 qpair failed and we were unable to recover it. 00:32:41.723 [2024-07-24 23:18:14.088053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.723 [2024-07-24 23:18:14.088270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.723 [2024-07-24 23:18:14.088309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.723 qpair failed and we were unable to recover it. 00:32:41.723 [2024-07-24 23:18:14.088604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.723 [2024-07-24 23:18:14.088839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.723 [2024-07-24 23:18:14.088878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.723 qpair failed and we were unable to recover it. 
00:32:41.723 [2024-07-24 23:18:14.089204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.723 [2024-07-24 23:18:14.089542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.723 [2024-07-24 23:18:14.089580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.723 qpair failed and we were unable to recover it. 00:32:41.723 [2024-07-24 23:18:14.089927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.723 [2024-07-24 23:18:14.090293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.723 [2024-07-24 23:18:14.090331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.723 qpair failed and we were unable to recover it. 00:32:41.723 [2024-07-24 23:18:14.090567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.723 [2024-07-24 23:18:14.090792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.723 [2024-07-24 23:18:14.090832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.723 qpair failed and we were unable to recover it. 00:32:41.723 [2024-07-24 23:18:14.091153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.723 [2024-07-24 23:18:14.091373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.723 [2024-07-24 23:18:14.091412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.723 qpair failed and we were unable to recover it. 
00:32:41.723 [2024-07-24 23:18:14.091694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.723 [2024-07-24 23:18:14.092012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.723 [2024-07-24 23:18:14.092051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.723 qpair failed and we were unable to recover it. 00:32:41.723 [2024-07-24 23:18:14.092367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.723 [2024-07-24 23:18:14.092656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.723 [2024-07-24 23:18:14.092695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.723 qpair failed and we were unable to recover it. 00:32:41.723 [2024-07-24 23:18:14.093082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.723 [2024-07-24 23:18:14.093395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.723 [2024-07-24 23:18:14.093434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.723 qpair failed and we were unable to recover it. 00:32:41.723 [2024-07-24 23:18:14.093751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.723 [2024-07-24 23:18:14.094130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.723 [2024-07-24 23:18:14.094169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.723 qpair failed and we were unable to recover it. 
00:32:41.723 [2024-07-24 23:18:14.094534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.723 [2024-07-24 23:18:14.094675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.723 [2024-07-24 23:18:14.094713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.723 qpair failed and we were unable to recover it. 00:32:41.723 [2024-07-24 23:18:14.095121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.723 [2024-07-24 23:18:14.095484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.723 [2024-07-24 23:18:14.095523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.723 qpair failed and we were unable to recover it. 00:32:41.723 [2024-07-24 23:18:14.095826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.723 [2024-07-24 23:18:14.096112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.723 [2024-07-24 23:18:14.096151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.723 qpair failed and we were unable to recover it. 00:32:41.723 [2024-07-24 23:18:14.096434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.723 [2024-07-24 23:18:14.096746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.723 [2024-07-24 23:18:14.096785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.723 qpair failed and we were unable to recover it. 
00:32:41.723 [2024-07-24 23:18:14.097109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.723 [2024-07-24 23:18:14.097468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.723 [2024-07-24 23:18:14.097507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.723 qpair failed and we were unable to recover it. 00:32:41.723 [2024-07-24 23:18:14.097834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.723 [2024-07-24 23:18:14.098138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.723 [2024-07-24 23:18:14.098176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.723 qpair failed and we were unable to recover it. 00:32:41.723 [2024-07-24 23:18:14.098430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.723 [2024-07-24 23:18:14.098750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.723 [2024-07-24 23:18:14.098791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.723 qpair failed and we were unable to recover it. 00:32:41.723 [2024-07-24 23:18:14.099159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.723 [2024-07-24 23:18:14.099542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.723 [2024-07-24 23:18:14.099581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.723 qpair failed and we were unable to recover it. 
00:32:41.723 [2024-07-24 23:18:14.099897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.723 [2024-07-24 23:18:14.100250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.723 [2024-07-24 23:18:14.100288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.723 qpair failed and we were unable to recover it. 00:32:41.724 [2024-07-24 23:18:14.100661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.724 [2024-07-24 23:18:14.100906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.724 [2024-07-24 23:18:14.100946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.724 qpair failed and we were unable to recover it. 00:32:41.724 [2024-07-24 23:18:14.101207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.724 [2024-07-24 23:18:14.101508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.724 [2024-07-24 23:18:14.101524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.724 qpair failed and we were unable to recover it. 00:32:41.724 [2024-07-24 23:18:14.101776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.724 [2024-07-24 23:18:14.102057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.724 [2024-07-24 23:18:14.102095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.724 qpair failed and we were unable to recover it. 
00:32:41.724 [2024-07-24 23:18:14.102389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.724 [2024-07-24 23:18:14.102539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.724 [2024-07-24 23:18:14.102577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.724 qpair failed and we were unable to recover it. 00:32:41.724 [2024-07-24 23:18:14.102858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.724 [2024-07-24 23:18:14.103171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.724 [2024-07-24 23:18:14.103187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.724 qpair failed and we were unable to recover it. 00:32:41.724 [2024-07-24 23:18:14.103446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.724 [2024-07-24 23:18:14.103766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.724 [2024-07-24 23:18:14.103806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.724 qpair failed and we were unable to recover it. 00:32:41.724 [2024-07-24 23:18:14.104204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.724 [2024-07-24 23:18:14.104507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.724 [2024-07-24 23:18:14.104546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.724 qpair failed and we were unable to recover it. 
00:32:41.724 [2024-07-24 23:18:14.104892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.724 [2024-07-24 23:18:14.105160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.724 [2024-07-24 23:18:14.105199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.724 qpair failed and we were unable to recover it. 00:32:41.724 [2024-07-24 23:18:14.105574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.724 [2024-07-24 23:18:14.105867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.724 [2024-07-24 23:18:14.105907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.724 qpair failed and we were unable to recover it. 00:32:41.724 [2024-07-24 23:18:14.106087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.724 [2024-07-24 23:18:14.106431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.724 [2024-07-24 23:18:14.106470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.724 qpair failed and we were unable to recover it. 00:32:41.724 [2024-07-24 23:18:14.106849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.724 [2024-07-24 23:18:14.107198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.724 [2024-07-24 23:18:14.107236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.724 qpair failed and we were unable to recover it. 
00:32:41.724 [2024-07-24 23:18:14.107537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.724 [2024-07-24 23:18:14.107824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.724 [2024-07-24 23:18:14.107863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.724 qpair failed and we were unable to recover it. 00:32:41.724 [2024-07-24 23:18:14.108159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.724 [2024-07-24 23:18:14.108536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.724 [2024-07-24 23:18:14.108575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.724 qpair failed and we were unable to recover it. 00:32:41.724 [2024-07-24 23:18:14.108877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.724 [2024-07-24 23:18:14.109271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.724 [2024-07-24 23:18:14.109309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.724 qpair failed and we were unable to recover it. 00:32:41.724 [2024-07-24 23:18:14.109619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.724 [2024-07-24 23:18:14.109983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.724 [2024-07-24 23:18:14.110023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.724 qpair failed and we were unable to recover it. 
00:32:41.724 [2024-07-24 23:18:14.110313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.724 [2024-07-24 23:18:14.110520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.724 [2024-07-24 23:18:14.110559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.724 qpair failed and we were unable to recover it. 00:32:41.724 [2024-07-24 23:18:14.110803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.724 [2024-07-24 23:18:14.111083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.724 [2024-07-24 23:18:14.111099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.724 qpair failed and we were unable to recover it. 00:32:41.724 [2024-07-24 23:18:14.111428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.724 [2024-07-24 23:18:14.111751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.724 [2024-07-24 23:18:14.111791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.724 qpair failed and we were unable to recover it. 00:32:41.724 [2024-07-24 23:18:14.112142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.724 [2024-07-24 23:18:14.112488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.724 [2024-07-24 23:18:14.112528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.724 qpair failed and we were unable to recover it. 
00:32:41.724 [2024-07-24 23:18:14.112798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.724 [2024-07-24 23:18:14.113135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.724 [2024-07-24 23:18:14.113175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:41.724 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock / "qpair failed and we were unable to recover it." cycle repeats continuously for tqpair=0x7ff8f0000b90 (addr=10.0.0.2, port=4420) from 23:18:14.113405 through 23:18:14.164184 ...]
00:32:41.995 [2024-07-24 23:18:14.164491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.995 [2024-07-24 23:18:14.164709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.995 [2024-07-24 23:18:14.164730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.995 qpair failed and we were unable to recover it. 00:32:41.995 [2024-07-24 23:18:14.164962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.995 [2024-07-24 23:18:14.165231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.995 [2024-07-24 23:18:14.165272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.995 qpair failed and we were unable to recover it. 00:32:41.995 [2024-07-24 23:18:14.165534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.995 [2024-07-24 23:18:14.165767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.995 [2024-07-24 23:18:14.165809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.995 qpair failed and we were unable to recover it. 00:32:41.995 [2024-07-24 23:18:14.165988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.995 [2024-07-24 23:18:14.166328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.995 [2024-07-24 23:18:14.166344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.995 qpair failed and we were unable to recover it. 
00:32:41.995 [2024-07-24 23:18:14.166594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.995 [2024-07-24 23:18:14.166786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.995 [2024-07-24 23:18:14.166802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.995 qpair failed and we were unable to recover it. 00:32:41.995 [2024-07-24 23:18:14.167076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.995 [2024-07-24 23:18:14.167300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.995 [2024-07-24 23:18:14.167317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.995 qpair failed and we were unable to recover it. 00:32:41.995 [2024-07-24 23:18:14.167558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.995 [2024-07-24 23:18:14.167792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.995 [2024-07-24 23:18:14.167808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.995 qpair failed and we were unable to recover it. 00:32:41.995 [2024-07-24 23:18:14.168117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.995 [2024-07-24 23:18:14.168292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.995 [2024-07-24 23:18:14.168308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.995 qpair failed and we were unable to recover it. 
00:32:41.995 [2024-07-24 23:18:14.168575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.995 [2024-07-24 23:18:14.168753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.995 [2024-07-24 23:18:14.168809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.995 qpair failed and we were unable to recover it. 00:32:41.996 [2024-07-24 23:18:14.169052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.996 [2024-07-24 23:18:14.169361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.996 [2024-07-24 23:18:14.169400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.996 qpair failed and we were unable to recover it. 00:32:41.996 [2024-07-24 23:18:14.169720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.996 [2024-07-24 23:18:14.170017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.996 [2024-07-24 23:18:14.170034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.996 qpair failed and we were unable to recover it. 00:32:41.996 [2024-07-24 23:18:14.170212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.996 [2024-07-24 23:18:14.170442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.996 [2024-07-24 23:18:14.170461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.996 qpair failed and we were unable to recover it. 
00:32:41.996 [2024-07-24 23:18:14.170760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.996 [2024-07-24 23:18:14.170933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.996 [2024-07-24 23:18:14.170949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.996 qpair failed and we were unable to recover it. 00:32:41.996 [2024-07-24 23:18:14.171247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.996 [2024-07-24 23:18:14.171409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.996 [2024-07-24 23:18:14.171425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.996 qpair failed and we were unable to recover it. 00:32:41.996 [2024-07-24 23:18:14.171671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.996 [2024-07-24 23:18:14.171844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.996 [2024-07-24 23:18:14.171861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.996 qpair failed and we were unable to recover it. 00:32:41.996 [2024-07-24 23:18:14.172192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.996 [2024-07-24 23:18:14.172421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.996 [2024-07-24 23:18:14.172460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.996 qpair failed and we were unable to recover it. 
00:32:41.996 [2024-07-24 23:18:14.172748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.996 [2024-07-24 23:18:14.173091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.996 [2024-07-24 23:18:14.173130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.996 qpair failed and we were unable to recover it. 00:32:41.996 [2024-07-24 23:18:14.173426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.996 [2024-07-24 23:18:14.173705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.996 [2024-07-24 23:18:14.173724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.996 qpair failed and we were unable to recover it. 00:32:41.996 [2024-07-24 23:18:14.173971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.996 [2024-07-24 23:18:14.174217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.996 [2024-07-24 23:18:14.174233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.996 qpair failed and we were unable to recover it. 00:32:41.996 [2024-07-24 23:18:14.174531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.996 [2024-07-24 23:18:14.174825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.996 [2024-07-24 23:18:14.174841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.996 qpair failed and we were unable to recover it. 
00:32:41.996 [2024-07-24 23:18:14.175086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.996 [2024-07-24 23:18:14.175357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.996 [2024-07-24 23:18:14.175372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.996 qpair failed and we were unable to recover it. 00:32:41.996 [2024-07-24 23:18:14.175696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.996 [2024-07-24 23:18:14.176081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.996 [2024-07-24 23:18:14.176120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.996 qpair failed and we were unable to recover it. 00:32:41.996 [2024-07-24 23:18:14.176441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.996 [2024-07-24 23:18:14.176658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.996 [2024-07-24 23:18:14.176674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.996 qpair failed and we were unable to recover it. 00:32:41.996 [2024-07-24 23:18:14.176928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.996 [2024-07-24 23:18:14.177152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.996 [2024-07-24 23:18:14.177192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.996 qpair failed and we were unable to recover it. 
00:32:41.996 [2024-07-24 23:18:14.177568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.996 [2024-07-24 23:18:14.177756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.996 [2024-07-24 23:18:14.177772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.996 qpair failed and we were unable to recover it. 00:32:41.996 [2024-07-24 23:18:14.178020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.996 [2024-07-24 23:18:14.178266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.996 [2024-07-24 23:18:14.178282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.996 qpair failed and we were unable to recover it. 00:32:41.996 [2024-07-24 23:18:14.178458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.996 [2024-07-24 23:18:14.178650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.996 [2024-07-24 23:18:14.178665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.996 qpair failed and we were unable to recover it. 00:32:41.996 [2024-07-24 23:18:14.178902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.996 [2024-07-24 23:18:14.179247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.996 [2024-07-24 23:18:14.179285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.996 qpair failed and we were unable to recover it. 
00:32:41.996 [2024-07-24 23:18:14.179512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.996 [2024-07-24 23:18:14.179815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.996 [2024-07-24 23:18:14.179849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.996 qpair failed and we were unable to recover it. 00:32:41.996 [2024-07-24 23:18:14.180016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.996 [2024-07-24 23:18:14.180284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.996 [2024-07-24 23:18:14.180300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.996 qpair failed and we were unable to recover it. 00:32:41.996 [2024-07-24 23:18:14.180627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.996 [2024-07-24 23:18:14.180920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.996 [2024-07-24 23:18:14.180937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.996 qpair failed and we were unable to recover it. 00:32:41.996 [2024-07-24 23:18:14.181250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.996 [2024-07-24 23:18:14.181590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.996 [2024-07-24 23:18:14.181628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.996 qpair failed and we were unable to recover it. 
00:32:41.996 [2024-07-24 23:18:14.181918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.996 [2024-07-24 23:18:14.182154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.996 [2024-07-24 23:18:14.182192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.996 qpair failed and we were unable to recover it. 00:32:41.996 [2024-07-24 23:18:14.182416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.996 [2024-07-24 23:18:14.182704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.996 [2024-07-24 23:18:14.182767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.996 qpair failed and we were unable to recover it. 00:32:41.996 [2024-07-24 23:18:14.183095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.996 [2024-07-24 23:18:14.183385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.996 [2024-07-24 23:18:14.183423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.996 qpair failed and we were unable to recover it. 00:32:41.996 [2024-07-24 23:18:14.183792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.996 [2024-07-24 23:18:14.183989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.996 [2024-07-24 23:18:14.184005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.996 qpair failed and we were unable to recover it. 
00:32:41.996 [2024-07-24 23:18:14.184326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.996 [2024-07-24 23:18:14.184482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.996 [2024-07-24 23:18:14.184520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.996 qpair failed and we were unable to recover it. 00:32:41.996 [2024-07-24 23:18:14.184730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.997 [2024-07-24 23:18:14.184912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.997 [2024-07-24 23:18:14.184928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.997 qpair failed and we were unable to recover it. 00:32:41.997 [2024-07-24 23:18:14.185161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.997 [2024-07-24 23:18:14.185390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.997 [2024-07-24 23:18:14.185406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.997 qpair failed and we were unable to recover it. 00:32:41.997 [2024-07-24 23:18:14.185658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.997 [2024-07-24 23:18:14.185960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.997 [2024-07-24 23:18:14.185999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:41.997 qpair failed and we were unable to recover it. 
00:32:41.997 [2024-07-24 23:18:14.186067] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23239f0 (9): Bad file descriptor 00:32:41.997 [2024-07-24 23:18:14.186595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.997 [2024-07-24 23:18:14.186813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.997 [2024-07-24 23:18:14.186834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:41.997 qpair failed and we were unable to recover it. 00:32:41.997 [2024-07-24 23:18:14.187173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.997 [2024-07-24 23:18:14.187492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.997 [2024-07-24 23:18:14.187509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:41.997 qpair failed and we were unable to recover it. 00:32:41.997 [2024-07-24 23:18:14.187829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.997 [2024-07-24 23:18:14.187992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.997 [2024-07-24 23:18:14.188008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:41.997 qpair failed and we were unable to recover it. 00:32:41.997 [2024-07-24 23:18:14.188259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.997 [2024-07-24 23:18:14.188481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.997 [2024-07-24 23:18:14.188520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:41.997 qpair failed and we were unable to recover it. 
00:32:41.997 [2024-07-24 23:18:14.188855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.997 [2024-07-24 23:18:14.189142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.997 [2024-07-24 23:18:14.189181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:41.997 qpair failed and we were unable to recover it. 00:32:41.997 [2024-07-24 23:18:14.189490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.997 [2024-07-24 23:18:14.189860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.997 [2024-07-24 23:18:14.189903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:41.997 qpair failed and we were unable to recover it. 00:32:41.997 [2024-07-24 23:18:14.190147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.997 [2024-07-24 23:18:14.190261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.997 [2024-07-24 23:18:14.190277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:41.997 qpair failed and we were unable to recover it. 00:32:41.997 [2024-07-24 23:18:14.190534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.997 [2024-07-24 23:18:14.190847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.997 [2024-07-24 23:18:14.190888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:41.997 qpair failed and we were unable to recover it. 
00:32:41.997 [2024-07-24 23:18:14.191208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.997 [2024-07-24 23:18:14.191511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.997 [2024-07-24 23:18:14.191527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:41.997 qpair failed and we were unable to recover it. 00:32:41.997 [2024-07-24 23:18:14.191761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.997 [2024-07-24 23:18:14.191952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.997 [2024-07-24 23:18:14.191968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:41.997 qpair failed and we were unable to recover it. 00:32:41.997 [2024-07-24 23:18:14.192301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.997 [2024-07-24 23:18:14.192577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.997 [2024-07-24 23:18:14.192616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:41.997 qpair failed and we were unable to recover it. 00:32:41.997 [2024-07-24 23:18:14.192925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.997 [2024-07-24 23:18:14.193268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.997 [2024-07-24 23:18:14.193307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:41.997 qpair failed and we were unable to recover it. 
00:32:41.997 [2024-07-24 23:18:14.193680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.997 [2024-07-24 23:18:14.194045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.997 [2024-07-24 23:18:14.194061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:41.997 qpair failed and we were unable to recover it. 00:32:41.997 [2024-07-24 23:18:14.194326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.997 [2024-07-24 23:18:14.194666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.997 [2024-07-24 23:18:14.194723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:41.997 qpair failed and we were unable to recover it. 00:32:41.997 [2024-07-24 23:18:14.194959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.997 [2024-07-24 23:18:14.195148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.997 [2024-07-24 23:18:14.195164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:41.997 qpair failed and we were unable to recover it. 00:32:41.997 [2024-07-24 23:18:14.195466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.997 [2024-07-24 23:18:14.195806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.997 [2024-07-24 23:18:14.195847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:41.997 qpair failed and we were unable to recover it. 
00:32:41.997 [2024-07-24 23:18:14.196148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.997 [2024-07-24 23:18:14.196409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.997 [2024-07-24 23:18:14.196425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:41.997 qpair failed and we were unable to recover it.
[... same three-record pattern — posix.c:1032:posix_sock_create connect() failed (errno = 111), nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it." — repeats with varying timestamps from [2024-07-24 23:18:14.196668] through [2024-07-24 23:18:14.251090] (log clock 00:32:41.997–00:32:42.000) ...]
00:32:42.001 [2024-07-24 23:18:14.251395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.001 [2024-07-24 23:18:14.251703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.001 [2024-07-24 23:18:14.251750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.001 qpair failed and we were unable to recover it. 00:32:42.001 [2024-07-24 23:18:14.252038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.001 [2024-07-24 23:18:14.252382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.001 [2024-07-24 23:18:14.252421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.001 qpair failed and we were unable to recover it. 00:32:42.001 [2024-07-24 23:18:14.252660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.001 [2024-07-24 23:18:14.252942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.001 [2024-07-24 23:18:14.252982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.001 qpair failed and we were unable to recover it. 00:32:42.001 [2024-07-24 23:18:14.253377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.001 [2024-07-24 23:18:14.253763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.001 [2024-07-24 23:18:14.253803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.001 qpair failed and we were unable to recover it. 
00:32:42.001 [2024-07-24 23:18:14.254047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.001 [2024-07-24 23:18:14.254296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.001 [2024-07-24 23:18:14.254334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.001 qpair failed and we were unable to recover it. 00:32:42.001 [2024-07-24 23:18:14.254709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.001 [2024-07-24 23:18:14.255086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.001 [2024-07-24 23:18:14.255125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.001 qpair failed and we were unable to recover it. 00:32:42.001 [2024-07-24 23:18:14.255455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.001 [2024-07-24 23:18:14.255758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.001 [2024-07-24 23:18:14.255803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.001 qpair failed and we were unable to recover it. 00:32:42.001 [2024-07-24 23:18:14.256104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.001 [2024-07-24 23:18:14.256467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.001 [2024-07-24 23:18:14.256505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.001 qpair failed and we were unable to recover it. 
00:32:42.001 [2024-07-24 23:18:14.256871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.001 [2024-07-24 23:18:14.257153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.001 [2024-07-24 23:18:14.257192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.001 qpair failed and we were unable to recover it. 00:32:42.001 [2024-07-24 23:18:14.257556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.001 [2024-07-24 23:18:14.257920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.001 [2024-07-24 23:18:14.257960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.001 qpair failed and we were unable to recover it. 00:32:42.001 [2024-07-24 23:18:14.258335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.001 [2024-07-24 23:18:14.258612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.001 [2024-07-24 23:18:14.258651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.001 qpair failed and we were unable to recover it. 00:32:42.001 [2024-07-24 23:18:14.258908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.001 [2024-07-24 23:18:14.259209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.001 [2024-07-24 23:18:14.259247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.001 qpair failed and we were unable to recover it. 
00:32:42.001 [2024-07-24 23:18:14.259564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.001 [2024-07-24 23:18:14.259865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.001 [2024-07-24 23:18:14.259905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.001 qpair failed and we were unable to recover it. 00:32:42.001 [2024-07-24 23:18:14.260143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.001 [2024-07-24 23:18:14.260455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.001 [2024-07-24 23:18:14.260494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.001 qpair failed and we were unable to recover it. 00:32:42.001 [2024-07-24 23:18:14.260872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.001 [2024-07-24 23:18:14.261248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.001 [2024-07-24 23:18:14.261287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.001 qpair failed and we were unable to recover it. 00:32:42.001 [2024-07-24 23:18:14.261591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.001 [2024-07-24 23:18:14.261868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.001 [2024-07-24 23:18:14.261892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.001 qpair failed and we were unable to recover it. 
00:32:42.001 [2024-07-24 23:18:14.262196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.001 [2024-07-24 23:18:14.262475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.001 [2024-07-24 23:18:14.262520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.001 qpair failed and we were unable to recover it. 00:32:42.001 [2024-07-24 23:18:14.262748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.001 [2024-07-24 23:18:14.263093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.001 [2024-07-24 23:18:14.263131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.001 qpair failed and we were unable to recover it. 00:32:42.001 [2024-07-24 23:18:14.263436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.001 [2024-07-24 23:18:14.263732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.001 [2024-07-24 23:18:14.263772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.001 qpair failed and we were unable to recover it. 00:32:42.001 [2024-07-24 23:18:14.264076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.001 [2024-07-24 23:18:14.264438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.001 [2024-07-24 23:18:14.264477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.001 qpair failed and we were unable to recover it. 
00:32:42.001 [2024-07-24 23:18:14.264784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.001 [2024-07-24 23:18:14.265001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.001 [2024-07-24 23:18:14.265039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.001 qpair failed and we were unable to recover it. 00:32:42.001 [2024-07-24 23:18:14.265390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.001 [2024-07-24 23:18:14.265752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.001 [2024-07-24 23:18:14.265791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.001 qpair failed and we were unable to recover it. 00:32:42.001 [2024-07-24 23:18:14.266067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.001 [2024-07-24 23:18:14.266296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.001 [2024-07-24 23:18:14.266312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.001 qpair failed and we were unable to recover it. 00:32:42.001 [2024-07-24 23:18:14.266564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.001 [2024-07-24 23:18:14.266792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.001 [2024-07-24 23:18:14.266808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.001 qpair failed and we were unable to recover it. 
00:32:42.001 [2024-07-24 23:18:14.267048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.001 [2024-07-24 23:18:14.267409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.001 [2024-07-24 23:18:14.267448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.001 qpair failed and we were unable to recover it. 00:32:42.001 [2024-07-24 23:18:14.267799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.001 [2024-07-24 23:18:14.268161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.001 [2024-07-24 23:18:14.268178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.001 qpair failed and we were unable to recover it. 00:32:42.001 [2024-07-24 23:18:14.268382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.001 [2024-07-24 23:18:14.268752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.001 [2024-07-24 23:18:14.268797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.001 qpair failed and we were unable to recover it. 00:32:42.001 [2024-07-24 23:18:14.269099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.001 [2024-07-24 23:18:14.269304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.001 [2024-07-24 23:18:14.269344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.001 qpair failed and we were unable to recover it. 
00:32:42.002 [2024-07-24 23:18:14.269579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.002 [2024-07-24 23:18:14.269966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.002 [2024-07-24 23:18:14.270006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.002 qpair failed and we were unable to recover it. 00:32:42.002 [2024-07-24 23:18:14.270334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.002 [2024-07-24 23:18:14.270627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.002 [2024-07-24 23:18:14.270665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.002 qpair failed and we were unable to recover it. 00:32:42.002 [2024-07-24 23:18:14.270975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.002 [2024-07-24 23:18:14.271175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.002 [2024-07-24 23:18:14.271191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.002 qpair failed and we were unable to recover it. 00:32:42.002 [2024-07-24 23:18:14.271547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.002 [2024-07-24 23:18:14.271843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.002 [2024-07-24 23:18:14.271882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.002 qpair failed and we were unable to recover it. 
00:32:42.002 [2024-07-24 23:18:14.272146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.002 [2024-07-24 23:18:14.272418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.002 [2024-07-24 23:18:14.272457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.002 qpair failed and we were unable to recover it. 00:32:42.002 [2024-07-24 23:18:14.272759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.002 [2024-07-24 23:18:14.273030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.002 [2024-07-24 23:18:14.273069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.002 qpair failed and we were unable to recover it. 00:32:42.002 [2024-07-24 23:18:14.273397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.002 [2024-07-24 23:18:14.273757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.002 [2024-07-24 23:18:14.273798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.002 qpair failed and we were unable to recover it. 00:32:42.002 [2024-07-24 23:18:14.274154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.002 [2024-07-24 23:18:14.274427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.002 [2024-07-24 23:18:14.274465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.002 qpair failed and we were unable to recover it. 
00:32:42.002 [2024-07-24 23:18:14.274739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.002 [2024-07-24 23:18:14.275050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.002 [2024-07-24 23:18:14.275094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.002 qpair failed and we were unable to recover it. 00:32:42.002 [2024-07-24 23:18:14.275392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.002 [2024-07-24 23:18:14.275760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.002 [2024-07-24 23:18:14.275800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.002 qpair failed and we were unable to recover it. 00:32:42.002 [2024-07-24 23:18:14.276087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.002 [2024-07-24 23:18:14.276387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.002 [2024-07-24 23:18:14.276426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.002 qpair failed and we were unable to recover it. 00:32:42.002 [2024-07-24 23:18:14.276580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.002 [2024-07-24 23:18:14.276817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.002 [2024-07-24 23:18:14.276856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.002 qpair failed and we were unable to recover it. 
00:32:42.002 [2024-07-24 23:18:14.277172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.002 [2024-07-24 23:18:14.277457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.002 [2024-07-24 23:18:14.277495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.002 qpair failed and we were unable to recover it. 00:32:42.002 [2024-07-24 23:18:14.277712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.002 [2024-07-24 23:18:14.278068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.002 [2024-07-24 23:18:14.278107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.002 qpair failed and we were unable to recover it. 00:32:42.002 [2024-07-24 23:18:14.278414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.002 [2024-07-24 23:18:14.278704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.002 [2024-07-24 23:18:14.278753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.002 qpair failed and we were unable to recover it. 00:32:42.002 [2024-07-24 23:18:14.279037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.002 [2024-07-24 23:18:14.279325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.002 [2024-07-24 23:18:14.279341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.002 qpair failed and we were unable to recover it. 
00:32:42.002 [2024-07-24 23:18:14.279641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.002 [2024-07-24 23:18:14.279915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.002 [2024-07-24 23:18:14.279955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.002 qpair failed and we were unable to recover it. 00:32:42.002 [2024-07-24 23:18:14.280259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.002 [2024-07-24 23:18:14.280494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.002 [2024-07-24 23:18:14.280533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.002 qpair failed and we were unable to recover it. 00:32:42.002 [2024-07-24 23:18:14.280883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.002 [2024-07-24 23:18:14.281172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.002 [2024-07-24 23:18:14.281211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.002 qpair failed and we were unable to recover it. 00:32:42.002 [2024-07-24 23:18:14.281451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.002 [2024-07-24 23:18:14.281802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.002 [2024-07-24 23:18:14.281843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.002 qpair failed and we were unable to recover it. 
00:32:42.002 [2024-07-24 23:18:14.282066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.002 [2024-07-24 23:18:14.282420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.002 [2024-07-24 23:18:14.282460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.002 qpair failed and we were unable to recover it. 00:32:42.002 [2024-07-24 23:18:14.282742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.002 [2024-07-24 23:18:14.282981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.002 [2024-07-24 23:18:14.283020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.002 qpair failed and we were unable to recover it. 00:32:42.002 [2024-07-24 23:18:14.283335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.002 [2024-07-24 23:18:14.283509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.002 [2024-07-24 23:18:14.283525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.002 qpair failed and we were unable to recover it. 00:32:42.002 [2024-07-24 23:18:14.283784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.002 [2024-07-24 23:18:14.284161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.002 [2024-07-24 23:18:14.284200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.002 qpair failed and we were unable to recover it. 
00:32:42.003 [2024-07-24 23:18:14.284371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.003 [2024-07-24 23:18:14.284741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.003 [2024-07-24 23:18:14.284781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.003 qpair failed and we were unable to recover it. 00:32:42.003 [2024-07-24 23:18:14.285004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.003 [2024-07-24 23:18:14.285327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.003 [2024-07-24 23:18:14.285366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.003 qpair failed and we were unable to recover it. 00:32:42.003 [2024-07-24 23:18:14.285738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.003 [2024-07-24 23:18:14.286078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.003 [2024-07-24 23:18:14.286116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.003 qpair failed and we were unable to recover it. 00:32:42.003 [2024-07-24 23:18:14.286418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.003 [2024-07-24 23:18:14.286771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.003 [2024-07-24 23:18:14.286810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.003 qpair failed and we were unable to recover it. 
00:32:42.003 [2024-07-24 23:18:14.287107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.003 [2024-07-24 23:18:14.287370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.003 [2024-07-24 23:18:14.287387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:42.003 qpair failed and we were unable to recover it.
[... the same three-message pattern — two posix.c:1032:posix_sock_create "connect() failed, errno = 111" entries, one nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420", then "qpair failed and we were unable to recover it." — repeats continuously from 23:18:14.287637 through 23:18:14.344347, with only the timestamps varying ...]
00:32:42.006 [2024-07-24 23:18:14.344516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.006 [2024-07-24 23:18:14.344880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.006 [2024-07-24 23:18:14.344920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:42.006 qpair failed and we were unable to recover it.
00:32:42.006 [2024-07-24 23:18:14.345295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.006 [2024-07-24 23:18:14.345636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.006 [2024-07-24 23:18:14.345675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.006 qpair failed and we were unable to recover it. 00:32:42.006 [2024-07-24 23:18:14.346103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.006 [2024-07-24 23:18:14.346511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.006 [2024-07-24 23:18:14.346549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.006 qpair failed and we were unable to recover it. 00:32:42.006 [2024-07-24 23:18:14.346836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.006 [2024-07-24 23:18:14.347188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.006 [2024-07-24 23:18:14.347227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.006 qpair failed and we were unable to recover it. 00:32:42.006 [2024-07-24 23:18:14.347620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.006 [2024-07-24 23:18:14.347958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.006 [2024-07-24 23:18:14.347999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.006 qpair failed and we were unable to recover it. 
00:32:42.006 [2024-07-24 23:18:14.348291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.006 [2024-07-24 23:18:14.348468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.006 [2024-07-24 23:18:14.348507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.006 qpair failed and we were unable to recover it. 00:32:42.006 [2024-07-24 23:18:14.348794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.006 [2024-07-24 23:18:14.349084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.006 [2024-07-24 23:18:14.349122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.006 qpair failed and we were unable to recover it. 00:32:42.006 [2024-07-24 23:18:14.349428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.006 [2024-07-24 23:18:14.349651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.006 [2024-07-24 23:18:14.349690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.006 qpair failed and we were unable to recover it. 00:32:42.006 [2024-07-24 23:18:14.349999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.006 [2024-07-24 23:18:14.350353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.006 [2024-07-24 23:18:14.350392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.006 qpair failed and we were unable to recover it. 
00:32:42.006 [2024-07-24 23:18:14.350700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.006 [2024-07-24 23:18:14.350954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.006 [2024-07-24 23:18:14.350993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.006 qpair failed and we were unable to recover it. 00:32:42.006 [2024-07-24 23:18:14.351151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.006 [2024-07-24 23:18:14.351393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.006 [2024-07-24 23:18:14.351432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.006 qpair failed and we were unable to recover it. 00:32:42.006 [2024-07-24 23:18:14.351668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.006 [2024-07-24 23:18:14.352019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.006 [2024-07-24 23:18:14.352059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.006 qpair failed and we were unable to recover it. 00:32:42.006 [2024-07-24 23:18:14.352384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.006 [2024-07-24 23:18:14.352770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.006 [2024-07-24 23:18:14.352810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.006 qpair failed and we were unable to recover it. 
00:32:42.006 [2024-07-24 23:18:14.353176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.006 [2024-07-24 23:18:14.353465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.006 [2024-07-24 23:18:14.353503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.006 qpair failed and we were unable to recover it. 00:32:42.006 [2024-07-24 23:18:14.353861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.006 [2024-07-24 23:18:14.354115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.006 [2024-07-24 23:18:14.354154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.006 qpair failed and we were unable to recover it. 00:32:42.006 [2024-07-24 23:18:14.354491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.006 [2024-07-24 23:18:14.354743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.006 [2024-07-24 23:18:14.354763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.006 qpair failed and we were unable to recover it. 00:32:42.006 [2024-07-24 23:18:14.355066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.006 [2024-07-24 23:18:14.355381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.006 [2024-07-24 23:18:14.355420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.006 qpair failed and we were unable to recover it. 
00:32:42.006 [2024-07-24 23:18:14.355793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.006 [2024-07-24 23:18:14.356068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.006 [2024-07-24 23:18:14.356107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.006 qpair failed and we were unable to recover it. 00:32:42.006 [2024-07-24 23:18:14.356330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.006 [2024-07-24 23:18:14.356712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.006 [2024-07-24 23:18:14.356761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.006 qpair failed and we were unable to recover it. 00:32:42.006 [2024-07-24 23:18:14.357115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.006 [2024-07-24 23:18:14.357454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.006 [2024-07-24 23:18:14.357493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.006 qpair failed and we were unable to recover it. 00:32:42.006 [2024-07-24 23:18:14.357708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.006 [2024-07-24 23:18:14.358008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.006 [2024-07-24 23:18:14.358047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.006 qpair failed and we were unable to recover it. 
00:32:42.006 [2024-07-24 23:18:14.358402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.006 [2024-07-24 23:18:14.358764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.006 [2024-07-24 23:18:14.358805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.006 qpair failed and we were unable to recover it. 00:32:42.007 [2024-07-24 23:18:14.359088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.007 [2024-07-24 23:18:14.359435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.007 [2024-07-24 23:18:14.359474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.007 qpair failed and we were unable to recover it. 00:32:42.007 [2024-07-24 23:18:14.359849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.007 [2024-07-24 23:18:14.360146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.007 [2024-07-24 23:18:14.360184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.007 qpair failed and we were unable to recover it. 00:32:42.007 [2024-07-24 23:18:14.360496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.007 [2024-07-24 23:18:14.360778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.007 [2024-07-24 23:18:14.360823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.007 qpair failed and we were unable to recover it. 
00:32:42.007 [2024-07-24 23:18:14.361194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.007 [2024-07-24 23:18:14.361468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.007 [2024-07-24 23:18:14.361507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.007 qpair failed and we were unable to recover it. 00:32:42.007 [2024-07-24 23:18:14.361878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.007 [2024-07-24 23:18:14.362228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.007 [2024-07-24 23:18:14.362267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.007 qpair failed and we were unable to recover it. 00:32:42.007 [2024-07-24 23:18:14.362648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.007 [2024-07-24 23:18:14.362943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.007 [2024-07-24 23:18:14.362982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.007 qpair failed and we were unable to recover it. 00:32:42.007 [2024-07-24 23:18:14.363361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.007 [2024-07-24 23:18:14.363703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.007 [2024-07-24 23:18:14.363752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.007 qpair failed and we were unable to recover it. 
00:32:42.007 [2024-07-24 23:18:14.364028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.007 [2024-07-24 23:18:14.364295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.007 [2024-07-24 23:18:14.364334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.007 qpair failed and we were unable to recover it. 00:32:42.007 [2024-07-24 23:18:14.364552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.007 [2024-07-24 23:18:14.364794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.007 [2024-07-24 23:18:14.364834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.007 qpair failed and we were unable to recover it. 00:32:42.007 [2024-07-24 23:18:14.365127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.007 [2024-07-24 23:18:14.365514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.007 [2024-07-24 23:18:14.365553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.007 qpair failed and we were unable to recover it. 00:32:42.007 [2024-07-24 23:18:14.365837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.007 [2024-07-24 23:18:14.366152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.007 [2024-07-24 23:18:14.366192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.007 qpair failed and we were unable to recover it. 
00:32:42.007 [2024-07-24 23:18:14.366411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.007 [2024-07-24 23:18:14.366751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.007 [2024-07-24 23:18:14.366791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.007 qpair failed and we were unable to recover it. 00:32:42.007 [2024-07-24 23:18:14.367169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.007 [2024-07-24 23:18:14.367521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.007 [2024-07-24 23:18:14.367539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.007 qpair failed and we were unable to recover it. 00:32:42.007 [2024-07-24 23:18:14.367797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.007 [2024-07-24 23:18:14.368120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.007 [2024-07-24 23:18:14.368159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.007 qpair failed and we were unable to recover it. 00:32:42.007 [2024-07-24 23:18:14.368514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.007 [2024-07-24 23:18:14.368879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.007 [2024-07-24 23:18:14.368919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.007 qpair failed and we were unable to recover it. 
00:32:42.007 [2024-07-24 23:18:14.369239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.007 [2024-07-24 23:18:14.369512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.007 [2024-07-24 23:18:14.369550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.007 qpair failed and we were unable to recover it. 00:32:42.007 [2024-07-24 23:18:14.369909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.007 [2024-07-24 23:18:14.370182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.007 [2024-07-24 23:18:14.370221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.007 qpair failed and we were unable to recover it. 00:32:42.007 [2024-07-24 23:18:14.370453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.007 [2024-07-24 23:18:14.370755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.007 [2024-07-24 23:18:14.370796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.007 qpair failed and we were unable to recover it. 00:32:42.007 [2024-07-24 23:18:14.371099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.007 [2024-07-24 23:18:14.371396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.007 [2024-07-24 23:18:14.371436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.007 qpair failed and we were unable to recover it. 
00:32:42.007 [2024-07-24 23:18:14.371727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.007 [2024-07-24 23:18:14.371948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.007 [2024-07-24 23:18:14.371987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.007 qpair failed and we were unable to recover it. 00:32:42.007 [2024-07-24 23:18:14.372306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.007 [2024-07-24 23:18:14.372596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.007 [2024-07-24 23:18:14.372635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.007 qpair failed and we were unable to recover it. 00:32:42.007 [2024-07-24 23:18:14.373021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.007 [2024-07-24 23:18:14.373247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.007 [2024-07-24 23:18:14.373287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.007 qpair failed and we were unable to recover it. 00:32:42.007 [2024-07-24 23:18:14.373682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.007 [2024-07-24 23:18:14.373987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.007 [2024-07-24 23:18:14.374033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.007 qpair failed and we were unable to recover it. 
00:32:42.007 [2024-07-24 23:18:14.374388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.007 [2024-07-24 23:18:14.374746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.007 [2024-07-24 23:18:14.374786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.007 qpair failed and we were unable to recover it. 00:32:42.007 [2024-07-24 23:18:14.375096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.007 [2024-07-24 23:18:14.375382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.008 [2024-07-24 23:18:14.375421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.008 qpair failed and we were unable to recover it. 00:32:42.008 [2024-07-24 23:18:14.375788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.008 [2024-07-24 23:18:14.376128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.008 [2024-07-24 23:18:14.376166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.008 qpair failed and we were unable to recover it. 00:32:42.008 [2024-07-24 23:18:14.376472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.008 [2024-07-24 23:18:14.376828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.008 [2024-07-24 23:18:14.376867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.008 qpair failed and we were unable to recover it. 
00:32:42.008 [2024-07-24 23:18:14.377182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.008 [2024-07-24 23:18:14.377549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.008 [2024-07-24 23:18:14.377589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.008 qpair failed and we were unable to recover it. 00:32:42.008 [2024-07-24 23:18:14.377826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.008 [2024-07-24 23:18:14.378168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.008 [2024-07-24 23:18:14.378207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.008 qpair failed and we were unable to recover it. 00:32:42.008 [2024-07-24 23:18:14.378580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.008 [2024-07-24 23:18:14.378810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.008 [2024-07-24 23:18:14.378851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.008 qpair failed and we were unable to recover it. 00:32:42.008 [2024-07-24 23:18:14.379161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.008 [2024-07-24 23:18:14.379510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.008 [2024-07-24 23:18:14.379549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.008 qpair failed and we were unable to recover it. 
00:32:42.008 [2024-07-24 23:18:14.379914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.008 [2024-07-24 23:18:14.380149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.008 [2024-07-24 23:18:14.380187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.008 qpair failed and we were unable to recover it. 00:32:42.008 [2024-07-24 23:18:14.380413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.008 [2024-07-24 23:18:14.380776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.008 [2024-07-24 23:18:14.380821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.008 qpair failed and we were unable to recover it. 00:32:42.008 [2024-07-24 23:18:14.381193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.008 [2024-07-24 23:18:14.381474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.008 [2024-07-24 23:18:14.381490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.008 qpair failed and we were unable to recover it. 00:32:42.008 [2024-07-24 23:18:14.381670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.008 [2024-07-24 23:18:14.381826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.008 [2024-07-24 23:18:14.381842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.008 qpair failed and we were unable to recover it. 
00:32:42.008 [2024-07-24 23:18:14.382162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.008 [2024-07-24 23:18:14.382509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.008 [2024-07-24 23:18:14.382547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.008 qpair failed and we were unable to recover it. 00:32:42.008 [2024-07-24 23:18:14.382861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.008 [2024-07-24 23:18:14.383157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.008 [2024-07-24 23:18:14.383196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.008 qpair failed and we were unable to recover it. 00:32:42.008 [2024-07-24 23:18:14.383560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.008 [2024-07-24 23:18:14.383845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.008 [2024-07-24 23:18:14.383895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.008 qpair failed and we were unable to recover it. 00:32:42.008 [2024-07-24 23:18:14.384062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.008 [2024-07-24 23:18:14.384242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.008 [2024-07-24 23:18:14.384281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.008 qpair failed and we were unable to recover it. 
00:32:42.279 [2024-07-24 23:18:14.435626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.279 [2024-07-24 23:18:14.435922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.279 [2024-07-24 23:18:14.435963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.279 qpair failed and we were unable to recover it. 00:32:42.279 [2024-07-24 23:18:14.436283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.279 [2024-07-24 23:18:14.436597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.279 [2024-07-24 23:18:14.436636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.279 qpair failed and we were unable to recover it. 00:32:42.279 [2024-07-24 23:18:14.436881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.279 [2024-07-24 23:18:14.437155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.279 [2024-07-24 23:18:14.437194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.279 qpair failed and we were unable to recover it. 00:32:42.279 [2024-07-24 23:18:14.437568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.279 [2024-07-24 23:18:14.437936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.279 [2024-07-24 23:18:14.437975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.279 qpair failed and we were unable to recover it. 
00:32:42.279 [2024-07-24 23:18:14.438259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.279 [2024-07-24 23:18:14.438542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.279 [2024-07-24 23:18:14.438582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.279 qpair failed and we were unable to recover it. 00:32:42.279 [2024-07-24 23:18:14.438937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.279 [2024-07-24 23:18:14.439223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.279 [2024-07-24 23:18:14.439263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.279 qpair failed and we were unable to recover it. 00:32:42.279 [2024-07-24 23:18:14.439566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.279 [2024-07-24 23:18:14.439925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.279 [2024-07-24 23:18:14.439965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.279 qpair failed and we were unable to recover it. 00:32:42.279 [2024-07-24 23:18:14.440258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.279 [2024-07-24 23:18:14.440484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.279 [2024-07-24 23:18:14.440524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.279 qpair failed and we were unable to recover it. 
00:32:42.279 [2024-07-24 23:18:14.440828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.279 [2024-07-24 23:18:14.441163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.279 [2024-07-24 23:18:14.441184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.279 qpair failed and we were unable to recover it. 00:32:42.279 [2024-07-24 23:18:14.441368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.279 [2024-07-24 23:18:14.441671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.279 [2024-07-24 23:18:14.441729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.279 qpair failed and we were unable to recover it. 00:32:42.279 [2024-07-24 23:18:14.442026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.279 [2024-07-24 23:18:14.442419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.279 [2024-07-24 23:18:14.442438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.279 qpair failed and we were unable to recover it. 00:32:42.279 [2024-07-24 23:18:14.442641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.279 [2024-07-24 23:18:14.442818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.279 [2024-07-24 23:18:14.442835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.279 qpair failed and we were unable to recover it. 
00:32:42.279 [2024-07-24 23:18:14.443162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.279 [2024-07-24 23:18:14.443525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.280 [2024-07-24 23:18:14.443563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.280 qpair failed and we were unable to recover it. 00:32:42.280 [2024-07-24 23:18:14.443742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.280 [2024-07-24 23:18:14.444110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.280 [2024-07-24 23:18:14.444149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.280 qpair failed and we were unable to recover it. 00:32:42.280 [2024-07-24 23:18:14.444399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.280 [2024-07-24 23:18:14.444542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.280 [2024-07-24 23:18:14.444557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.280 qpair failed and we were unable to recover it. 00:32:42.280 [2024-07-24 23:18:14.444755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.280 [2024-07-24 23:18:14.445044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.280 [2024-07-24 23:18:14.445083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.280 qpair failed and we were unable to recover it. 
00:32:42.280 [2024-07-24 23:18:14.445365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.280 [2024-07-24 23:18:14.445705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.280 [2024-07-24 23:18:14.445752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.280 qpair failed and we were unable to recover it. 00:32:42.280 [2024-07-24 23:18:14.446057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.280 [2024-07-24 23:18:14.446249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.280 [2024-07-24 23:18:14.446265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.280 qpair failed and we were unable to recover it. 00:32:42.280 [2024-07-24 23:18:14.446532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.280 [2024-07-24 23:18:14.446803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.280 [2024-07-24 23:18:14.446843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.280 qpair failed and we were unable to recover it. 00:32:42.280 [2024-07-24 23:18:14.447209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.280 [2024-07-24 23:18:14.447550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.280 [2024-07-24 23:18:14.447589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.280 qpair failed and we were unable to recover it. 
00:32:42.280 [2024-07-24 23:18:14.447980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.280 [2024-07-24 23:18:14.448366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.280 [2024-07-24 23:18:14.448405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.280 qpair failed and we were unable to recover it. 00:32:42.280 [2024-07-24 23:18:14.448686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.280 [2024-07-24 23:18:14.449049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.280 [2024-07-24 23:18:14.449089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.280 qpair failed and we were unable to recover it. 00:32:42.280 [2024-07-24 23:18:14.449394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.280 [2024-07-24 23:18:14.449733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.280 [2024-07-24 23:18:14.449774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.280 qpair failed and we were unable to recover it. 00:32:42.280 [2024-07-24 23:18:14.450074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.280 [2024-07-24 23:18:14.450353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.280 [2024-07-24 23:18:14.450369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.280 qpair failed and we were unable to recover it. 
00:32:42.280 [2024-07-24 23:18:14.450673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.280 [2024-07-24 23:18:14.451067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.280 [2024-07-24 23:18:14.451107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.280 qpair failed and we were unable to recover it. 00:32:42.280 [2024-07-24 23:18:14.451451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.280 [2024-07-24 23:18:14.451763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.280 [2024-07-24 23:18:14.451804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.280 qpair failed and we were unable to recover it. 00:32:42.280 [2024-07-24 23:18:14.452127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.280 [2024-07-24 23:18:14.452469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.280 [2024-07-24 23:18:14.452508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.280 qpair failed and we were unable to recover it. 00:32:42.280 [2024-07-24 23:18:14.452850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.280 [2024-07-24 23:18:14.453144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.280 [2024-07-24 23:18:14.453182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.280 qpair failed and we were unable to recover it. 
00:32:42.280 [2024-07-24 23:18:14.453543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.280 [2024-07-24 23:18:14.453769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.280 [2024-07-24 23:18:14.453809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.280 qpair failed and we were unable to recover it. 00:32:42.280 [2024-07-24 23:18:14.454185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.280 [2024-07-24 23:18:14.454479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.280 [2024-07-24 23:18:14.454517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.280 qpair failed and we were unable to recover it. 00:32:42.280 [2024-07-24 23:18:14.454818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.280 [2024-07-24 23:18:14.455182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.280 [2024-07-24 23:18:14.455226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.280 qpair failed and we were unable to recover it. 00:32:42.280 [2024-07-24 23:18:14.455591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.280 [2024-07-24 23:18:14.455795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.280 [2024-07-24 23:18:14.455835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.280 qpair failed and we were unable to recover it. 
00:32:42.280 [2024-07-24 23:18:14.456069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.280 [2024-07-24 23:18:14.456355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.281 [2024-07-24 23:18:14.456393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.281 qpair failed and we were unable to recover it. 00:32:42.281 [2024-07-24 23:18:14.456664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.281 [2024-07-24 23:18:14.456987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.281 [2024-07-24 23:18:14.457028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.281 qpair failed and we were unable to recover it. 00:32:42.281 [2024-07-24 23:18:14.457281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.281 [2024-07-24 23:18:14.457645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.281 [2024-07-24 23:18:14.457684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.281 qpair failed and we were unable to recover it. 00:32:42.281 [2024-07-24 23:18:14.458048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.281 [2024-07-24 23:18:14.458277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.281 [2024-07-24 23:18:14.458292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.281 qpair failed and we were unable to recover it. 
00:32:42.281 [2024-07-24 23:18:14.458615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.281 [2024-07-24 23:18:14.458916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.281 [2024-07-24 23:18:14.458956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.281 qpair failed and we were unable to recover it. 00:32:42.281 [2024-07-24 23:18:14.459190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.281 [2024-07-24 23:18:14.459531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.281 [2024-07-24 23:18:14.459570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.281 qpair failed and we were unable to recover it. 00:32:42.281 [2024-07-24 23:18:14.459949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.281 [2024-07-24 23:18:14.460308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.281 [2024-07-24 23:18:14.460347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.281 qpair failed and we were unable to recover it. 00:32:42.281 [2024-07-24 23:18:14.460676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.281 [2024-07-24 23:18:14.460966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.281 [2024-07-24 23:18:14.461006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.281 qpair failed and we were unable to recover it. 
00:32:42.281 [2024-07-24 23:18:14.461356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.281 [2024-07-24 23:18:14.461627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.281 [2024-07-24 23:18:14.461671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.281 qpair failed and we were unable to recover it. 00:32:42.281 [2024-07-24 23:18:14.461984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.281 [2024-07-24 23:18:14.462255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.281 [2024-07-24 23:18:14.462293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.281 qpair failed and we were unable to recover it. 00:32:42.281 [2024-07-24 23:18:14.462669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.281 [2024-07-24 23:18:14.462962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.281 [2024-07-24 23:18:14.463002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.281 qpair failed and we were unable to recover it. 00:32:42.281 [2024-07-24 23:18:14.463303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.281 [2024-07-24 23:18:14.463657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.281 [2024-07-24 23:18:14.463696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.281 qpair failed and we were unable to recover it. 
00:32:42.281 [2024-07-24 23:18:14.464006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.281 [2024-07-24 23:18:14.464348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.281 [2024-07-24 23:18:14.464386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.281 qpair failed and we were unable to recover it. 00:32:42.281 [2024-07-24 23:18:14.464747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.281 [2024-07-24 23:18:14.465111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.281 [2024-07-24 23:18:14.465149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.281 qpair failed and we were unable to recover it. 00:32:42.281 [2024-07-24 23:18:14.465396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.281 [2024-07-24 23:18:14.465635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.281 [2024-07-24 23:18:14.465673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.281 qpair failed and we were unable to recover it. 00:32:42.281 [2024-07-24 23:18:14.465975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.281 [2024-07-24 23:18:14.466269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.281 [2024-07-24 23:18:14.466308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.281 qpair failed and we were unable to recover it. 
00:32:42.281 [2024-07-24 23:18:14.466589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.281 [2024-07-24 23:18:14.466905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.281 [2024-07-24 23:18:14.466945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.281 qpair failed and we were unable to recover it. 00:32:42.281 [2024-07-24 23:18:14.467248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.281 [2024-07-24 23:18:14.467541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.281 [2024-07-24 23:18:14.467580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.281 qpair failed and we were unable to recover it. 00:32:42.281 [2024-07-24 23:18:14.467834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.281 [2024-07-24 23:18:14.467988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.281 [2024-07-24 23:18:14.468033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.281 qpair failed and we were unable to recover it. 00:32:42.281 [2024-07-24 23:18:14.468312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.281 [2024-07-24 23:18:14.468491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.281 [2024-07-24 23:18:14.468506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.281 qpair failed and we were unable to recover it. 
00:32:42.281 [2024-07-24 23:18:14.468676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.281 [2024-07-24 23:18:14.468937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.282 [2024-07-24 23:18:14.468978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.282 qpair failed and we were unable to recover it. 00:32:42.282 [2024-07-24 23:18:14.469358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.282 [2024-07-24 23:18:14.469653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.282 [2024-07-24 23:18:14.469693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.282 qpair failed and we were unable to recover it. 00:32:42.282 [2024-07-24 23:18:14.470023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.282 [2024-07-24 23:18:14.470389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.282 [2024-07-24 23:18:14.470428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.282 qpair failed and we were unable to recover it. 00:32:42.282 [2024-07-24 23:18:14.470735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.282 [2024-07-24 23:18:14.471028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.282 [2024-07-24 23:18:14.471067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.282 qpair failed and we were unable to recover it. 
00:32:42.282 [2024-07-24 23:18:14.471414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.282 [2024-07-24 23:18:14.471779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.282 [2024-07-24 23:18:14.471820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.282 qpair failed and we were unable to recover it. 00:32:42.282 [2024-07-24 23:18:14.472194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.282 [2024-07-24 23:18:14.472543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.282 [2024-07-24 23:18:14.472559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.282 qpair failed and we were unable to recover it. 00:32:42.282 [2024-07-24 23:18:14.472755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.282 [2024-07-24 23:18:14.473077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.282 [2024-07-24 23:18:14.473115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.282 qpair failed and we were unable to recover it. 00:32:42.282 [2024-07-24 23:18:14.473497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.282 [2024-07-24 23:18:14.473860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.282 [2024-07-24 23:18:14.473900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.282 qpair failed and we were unable to recover it. 
00:32:42.282 [2024-07-24 23:18:14.474222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.282 [2024-07-24 23:18:14.474565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.282 [2024-07-24 23:18:14.474609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.282 qpair failed and we were unable to recover it. 00:32:42.282 [2024-07-24 23:18:14.474995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.282 [2024-07-24 23:18:14.475360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.282 [2024-07-24 23:18:14.475399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.282 qpair failed and we were unable to recover it. 00:32:42.282 [2024-07-24 23:18:14.475726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.282 [2024-07-24 23:18:14.475999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.282 [2024-07-24 23:18:14.476038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.282 qpair failed and we were unable to recover it. 00:32:42.282 [2024-07-24 23:18:14.476391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.282 [2024-07-24 23:18:14.476661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.282 [2024-07-24 23:18:14.476700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.282 qpair failed and we were unable to recover it. 
00:32:42.282 [2024-07-24 23:18:14.476982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.282 [2024-07-24 23:18:14.477346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.282 [2024-07-24 23:18:14.477386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.282 qpair failed and we were unable to recover it. 00:32:42.282 [2024-07-24 23:18:14.477683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.282 [2024-07-24 23:18:14.477970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.282 [2024-07-24 23:18:14.478010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.282 qpair failed and we were unable to recover it. 00:32:42.282 [2024-07-24 23:18:14.478223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.282 [2024-07-24 23:18:14.478573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.282 [2024-07-24 23:18:14.478611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.282 qpair failed and we were unable to recover it. 00:32:42.282 [2024-07-24 23:18:14.478845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.282 [2024-07-24 23:18:14.479116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.282 [2024-07-24 23:18:14.479154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.282 qpair failed and we were unable to recover it. 
00:32:42.282 [2024-07-24 23:18:14.479453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.282 [2024-07-24 23:18:14.479622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.282 [2024-07-24 23:18:14.479638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.282 qpair failed and we were unable to recover it. 00:32:42.282 [2024-07-24 23:18:14.479893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.282 [2024-07-24 23:18:14.480179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.282 [2024-07-24 23:18:14.480218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.282 qpair failed and we were unable to recover it. 00:32:42.282 [2024-07-24 23:18:14.480453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.282 [2024-07-24 23:18:14.480768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.282 [2024-07-24 23:18:14.480784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.282 qpair failed and we were unable to recover it. 00:32:42.282 [2024-07-24 23:18:14.481044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.282 [2024-07-24 23:18:14.481431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.282 [2024-07-24 23:18:14.481471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.282 qpair failed and we were unable to recover it. 
00:32:42.282 [2024-07-24 23:18:14.481850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.282 [2024-07-24 23:18:14.482223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.283 [2024-07-24 23:18:14.482262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.283 qpair failed and we were unable to recover it. 00:32:42.283 [2024-07-24 23:18:14.482558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.283 [2024-07-24 23:18:14.482926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.283 [2024-07-24 23:18:14.482967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.283 qpair failed and we were unable to recover it. 00:32:42.283 [2024-07-24 23:18:14.483269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.283 [2024-07-24 23:18:14.483616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.283 [2024-07-24 23:18:14.483655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.283 qpair failed and we were unable to recover it. 00:32:42.283 [2024-07-24 23:18:14.483977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.283 [2024-07-24 23:18:14.484257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.283 [2024-07-24 23:18:14.484296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.283 qpair failed and we were unable to recover it. 
00:32:42.283 [2024-07-24 23:18:14.484664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.283 [2024-07-24 23:18:14.485069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.283 [2024-07-24 23:18:14.485109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.283 qpair failed and we were unable to recover it. 00:32:42.283 [2024-07-24 23:18:14.485484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.283 [2024-07-24 23:18:14.485834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.283 [2024-07-24 23:18:14.485874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.283 qpair failed and we were unable to recover it. 00:32:42.283 [2024-07-24 23:18:14.486179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.283 [2024-07-24 23:18:14.486463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.283 [2024-07-24 23:18:14.486503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.283 qpair failed and we were unable to recover it. 00:32:42.283 [2024-07-24 23:18:14.486854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.283 [2024-07-24 23:18:14.487149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.283 [2024-07-24 23:18:14.487188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.283 qpair failed and we were unable to recover it. 
00:32:42.283 [2024-07-24 23:18:14.487496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.283 [2024-07-24 23:18:14.487790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.283 [2024-07-24 23:18:14.487829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.283 qpair failed and we were unable to recover it. 00:32:42.283 [2024-07-24 23:18:14.488138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.283 [2024-07-24 23:18:14.488445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.283 [2024-07-24 23:18:14.488484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.283 qpair failed and we were unable to recover it. 00:32:42.283 [2024-07-24 23:18:14.488833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.283 [2024-07-24 23:18:14.489139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.283 [2024-07-24 23:18:14.489177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.283 qpair failed and we were unable to recover it. 00:32:42.283 [2024-07-24 23:18:14.489480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.283 [2024-07-24 23:18:14.489791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.283 [2024-07-24 23:18:14.489830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.283 qpair failed and we were unable to recover it. 
00:32:42.283 [2024-07-24 23:18:14.490121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.283 [2024-07-24 23:18:14.490482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.283 [2024-07-24 23:18:14.490521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.283 qpair failed and we were unable to recover it. 00:32:42.283 [2024-07-24 23:18:14.490818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.283 [2024-07-24 23:18:14.491113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.283 [2024-07-24 23:18:14.491152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.283 qpair failed and we were unable to recover it. 00:32:42.283 [2024-07-24 23:18:14.491447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.283 [2024-07-24 23:18:14.491814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.283 [2024-07-24 23:18:14.491854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.283 qpair failed and we were unable to recover it. 00:32:42.283 [2024-07-24 23:18:14.492222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.283 [2024-07-24 23:18:14.492567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.283 [2024-07-24 23:18:14.492606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.283 qpair failed and we were unable to recover it. 
00:32:42.283 [2024-07-24 23:18:14.492988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.283 [2024-07-24 23:18:14.493347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.283 [2024-07-24 23:18:14.493386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.283 qpair failed and we were unable to recover it. 00:32:42.283 [2024-07-24 23:18:14.493693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.283 [2024-07-24 23:18:14.493936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.283 [2024-07-24 23:18:14.493952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.283 qpair failed and we were unable to recover it. 00:32:42.283 [2024-07-24 23:18:14.494261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.283 [2024-07-24 23:18:14.494558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.283 [2024-07-24 23:18:14.494596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.283 qpair failed and we were unable to recover it. 00:32:42.283 [2024-07-24 23:18:14.494988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.283 [2024-07-24 23:18:14.495347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.283 [2024-07-24 23:18:14.495386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.283 qpair failed and we were unable to recover it. 
00:32:42.283 [2024-07-24 23:18:14.495791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.283 [2024-07-24 23:18:14.496078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.283 [2024-07-24 23:18:14.496117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.283 qpair failed and we were unable to recover it. 00:32:42.283 [2024-07-24 23:18:14.496398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.283 [2024-07-24 23:18:14.496632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.283 [2024-07-24 23:18:14.496671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.284 qpair failed and we were unable to recover it. 00:32:42.284 [2024-07-24 23:18:14.496962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.284 [2024-07-24 23:18:14.497253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.284 [2024-07-24 23:18:14.497293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.284 qpair failed and we were unable to recover it. 00:32:42.284 [2024-07-24 23:18:14.497604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.284 [2024-07-24 23:18:14.497824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.284 [2024-07-24 23:18:14.497840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.284 qpair failed and we were unable to recover it. 
00:32:42.284 [2024-07-24 23:18:14.498122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.284 [2024-07-24 23:18:14.498336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.284 [2024-07-24 23:18:14.498375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.284 qpair failed and we were unable to recover it. 00:32:42.284 [2024-07-24 23:18:14.498685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.284 [2024-07-24 23:18:14.498925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.284 [2024-07-24 23:18:14.498941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.284 qpair failed and we were unable to recover it. 00:32:42.284 [2024-07-24 23:18:14.499264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.284 [2024-07-24 23:18:14.499586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.284 [2024-07-24 23:18:14.499625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.284 qpair failed and we were unable to recover it. 00:32:42.284 [2024-07-24 23:18:14.499858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.284 [2024-07-24 23:18:14.500223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.284 [2024-07-24 23:18:14.500262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.284 qpair failed and we were unable to recover it. 
00:32:42.284 [2024-07-24 23:18:14.500549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.284 [2024-07-24 23:18:14.500856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.284 [2024-07-24 23:18:14.500901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.284 qpair failed and we were unable to recover it. 00:32:42.284 [2024-07-24 23:18:14.501128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.284 [2024-07-24 23:18:14.501471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.284 [2024-07-24 23:18:14.501510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.284 qpair failed and we were unable to recover it. 00:32:42.284 [2024-07-24 23:18:14.501816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.284 [2024-07-24 23:18:14.502177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.284 [2024-07-24 23:18:14.502216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.284 qpair failed and we were unable to recover it. 00:32:42.284 [2024-07-24 23:18:14.502521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.284 [2024-07-24 23:18:14.502815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.284 [2024-07-24 23:18:14.502855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.284 qpair failed and we were unable to recover it. 
00:32:42.284 [2024-07-24 23:18:14.503233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.284 [2024-07-24 23:18:14.503574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.284 [2024-07-24 23:18:14.503613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.284 qpair failed and we were unable to recover it. 00:32:42.284 [2024-07-24 23:18:14.503926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.284 [2024-07-24 23:18:14.504215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.284 [2024-07-24 23:18:14.504256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.284 qpair failed and we were unable to recover it. 00:32:42.284 [2024-07-24 23:18:14.504559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.284 [2024-07-24 23:18:14.504802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.284 [2024-07-24 23:18:14.504841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.284 qpair failed and we were unable to recover it. 00:32:42.284 [2024-07-24 23:18:14.505140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.284 [2024-07-24 23:18:14.505413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.284 [2024-07-24 23:18:14.505452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.284 qpair failed and we were unable to recover it. 
00:32:42.284 [2024-07-24 23:18:14.505768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.284 [2024-07-24 23:18:14.506040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.284 [2024-07-24 23:18:14.506079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.284 qpair failed and we were unable to recover it. 00:32:42.284 [2024-07-24 23:18:14.506477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.284 [2024-07-24 23:18:14.506627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.284 [2024-07-24 23:18:14.506666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.284 qpair failed and we were unable to recover it. 00:32:42.284 [2024-07-24 23:18:14.507033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.284 [2024-07-24 23:18:14.507359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.284 [2024-07-24 23:18:14.507398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.284 qpair failed and we were unable to recover it. 00:32:42.284 [2024-07-24 23:18:14.507736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.284 [2024-07-24 23:18:14.508032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.285 [2024-07-24 23:18:14.508071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.285 qpair failed and we were unable to recover it. 
00:32:42.285 [2024-07-24 23:18:14.508445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.285 [2024-07-24 23:18:14.508760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.285 [2024-07-24 23:18:14.508802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.285 qpair failed and we were unable to recover it. 00:32:42.285 [2024-07-24 23:18:14.509087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.285 [2024-07-24 23:18:14.509364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.285 [2024-07-24 23:18:14.509403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.285 qpair failed and we were unable to recover it. 00:32:42.285 [2024-07-24 23:18:14.509753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.285 [2024-07-24 23:18:14.510030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.285 [2024-07-24 23:18:14.510068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.285 qpair failed and we were unable to recover it. 00:32:42.285 [2024-07-24 23:18:14.510288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.285 [2024-07-24 23:18:14.510534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.285 [2024-07-24 23:18:14.510579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.285 qpair failed and we were unable to recover it. 
00:32:42.285 [2024-07-24 23:18:14.510822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.285 [2024-07-24 23:18:14.511060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.285 [2024-07-24 23:18:14.511076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.285 qpair failed and we were unable to recover it. 00:32:42.285 [2024-07-24 23:18:14.511353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.285 [2024-07-24 23:18:14.511594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.285 [2024-07-24 23:18:14.511637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.285 qpair failed and we were unable to recover it. 00:32:42.285 [2024-07-24 23:18:14.511940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.285 [2024-07-24 23:18:14.512228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.285 [2024-07-24 23:18:14.512267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.285 qpair failed and we were unable to recover it. 00:32:42.285 [2024-07-24 23:18:14.512475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.285 [2024-07-24 23:18:14.512683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.285 [2024-07-24 23:18:14.512734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.285 qpair failed and we were unable to recover it. 
00:32:42.285 [2024-07-24 23:18:14.513107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.285 [2024-07-24 23:18:14.513315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.285 [2024-07-24 23:18:14.513353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.285 qpair failed and we were unable to recover it. 00:32:42.285 [2024-07-24 23:18:14.513704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.285 [2024-07-24 23:18:14.514008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.285 [2024-07-24 23:18:14.514047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.285 qpair failed and we were unable to recover it. 00:32:42.285 [2024-07-24 23:18:14.514349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.285 [2024-07-24 23:18:14.514728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.285 [2024-07-24 23:18:14.514768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.285 qpair failed and we were unable to recover it. 00:32:42.285 [2024-07-24 23:18:14.515135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.285 [2024-07-24 23:18:14.515473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.285 [2024-07-24 23:18:14.515512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.285 qpair failed and we were unable to recover it. 
00:32:42.285 [2024-07-24 23:18:14.515798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.285 [2024-07-24 23:18:14.516112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.285 [2024-07-24 23:18:14.516151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.285 qpair failed and we were unable to recover it. 00:32:42.285 [2024-07-24 23:18:14.516395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.285 [2024-07-24 23:18:14.516699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.285 [2024-07-24 23:18:14.516765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.285 qpair failed and we were unable to recover it. 00:32:42.285 [2024-07-24 23:18:14.517140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.285 [2024-07-24 23:18:14.517433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.285 [2024-07-24 23:18:14.517472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.285 qpair failed and we were unable to recover it. 00:32:42.285 [2024-07-24 23:18:14.517832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.285 [2024-07-24 23:18:14.518128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.285 [2024-07-24 23:18:14.518167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.285 qpair failed and we were unable to recover it. 
00:32:42.285 [2024-07-24 23:18:14.518421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.285 [2024-07-24 23:18:14.518792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.285 [2024-07-24 23:18:14.518831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.285 qpair failed and we were unable to recover it. 00:32:42.285 [2024-07-24 23:18:14.519212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.285 [2024-07-24 23:18:14.519441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.285 [2024-07-24 23:18:14.519480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.285 qpair failed and we were unable to recover it. 00:32:42.285 [2024-07-24 23:18:14.519824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.285 [2024-07-24 23:18:14.520097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.285 [2024-07-24 23:18:14.520136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.285 qpair failed and we were unable to recover it. 00:32:42.285 [2024-07-24 23:18:14.520450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.285 [2024-07-24 23:18:14.520811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.285 [2024-07-24 23:18:14.520851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.285 qpair failed and we were unable to recover it. 
00:32:42.289 [2024-07-24 23:18:14.577057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.289 [2024-07-24 23:18:14.577345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.289 [2024-07-24 23:18:14.577384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.289 qpair failed and we were unable to recover it. 00:32:42.289 [2024-07-24 23:18:14.577760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.289 [2024-07-24 23:18:14.578038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.289 [2024-07-24 23:18:14.578077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.289 qpair failed and we were unable to recover it. 00:32:42.289 [2024-07-24 23:18:14.578363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.289 [2024-07-24 23:18:14.578672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.289 [2024-07-24 23:18:14.578711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.289 qpair failed and we were unable to recover it. 00:32:42.289 [2024-07-24 23:18:14.579094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.289 [2024-07-24 23:18:14.579407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.289 [2024-07-24 23:18:14.579446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.289 qpair failed and we were unable to recover it. 
00:32:42.289 [2024-07-24 23:18:14.579749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.289 [2024-07-24 23:18:14.580089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.289 [2024-07-24 23:18:14.580128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.290 qpair failed and we were unable to recover it. 00:32:42.290 [2024-07-24 23:18:14.580429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.290 [2024-07-24 23:18:14.580769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.290 [2024-07-24 23:18:14.580814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.290 qpair failed and we were unable to recover it. 00:32:42.290 [2024-07-24 23:18:14.581051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.290 [2024-07-24 23:18:14.581352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.290 [2024-07-24 23:18:14.581391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.290 qpair failed and we were unable to recover it. 00:32:42.290 [2024-07-24 23:18:14.581682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.290 [2024-07-24 23:18:14.582002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.290 [2024-07-24 23:18:14.582041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.290 qpair failed and we were unable to recover it. 
00:32:42.290 [2024-07-24 23:18:14.582421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.290 [2024-07-24 23:18:14.582693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.290 [2024-07-24 23:18:14.582741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.290 qpair failed and we were unable to recover it. 00:32:42.290 [2024-07-24 23:18:14.582978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.290 [2024-07-24 23:18:14.583249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.290 [2024-07-24 23:18:14.583288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.290 qpair failed and we were unable to recover it. 00:32:42.290 [2024-07-24 23:18:14.583584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.290 [2024-07-24 23:18:14.583825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.290 [2024-07-24 23:18:14.583842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.290 qpair failed and we were unable to recover it. 00:32:42.290 [2024-07-24 23:18:14.584095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.290 [2024-07-24 23:18:14.584477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.290 [2024-07-24 23:18:14.584515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.290 qpair failed and we were unable to recover it. 
00:32:42.290 [2024-07-24 23:18:14.584824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.290 [2024-07-24 23:18:14.585070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.290 [2024-07-24 23:18:14.585086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.290 qpair failed and we were unable to recover it. 00:32:42.290 [2024-07-24 23:18:14.585427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.290 [2024-07-24 23:18:14.585698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.290 [2024-07-24 23:18:14.585744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.290 qpair failed and we were unable to recover it. 00:32:42.290 [2024-07-24 23:18:14.586053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.290 [2024-07-24 23:18:14.586343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.290 [2024-07-24 23:18:14.586382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.290 qpair failed and we were unable to recover it. 00:32:42.290 [2024-07-24 23:18:14.586684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.290 [2024-07-24 23:18:14.586877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.290 [2024-07-24 23:18:14.586917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.290 qpair failed and we were unable to recover it. 
00:32:42.290 [2024-07-24 23:18:14.587292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.290 [2024-07-24 23:18:14.587583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.290 [2024-07-24 23:18:14.587621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.290 qpair failed and we were unable to recover it. 00:32:42.290 [2024-07-24 23:18:14.587929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.290 [2024-07-24 23:18:14.588297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.290 [2024-07-24 23:18:14.588335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.290 qpair failed and we were unable to recover it. 00:32:42.290 [2024-07-24 23:18:14.588611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.290 [2024-07-24 23:18:14.588902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.290 [2024-07-24 23:18:14.588942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.290 qpair failed and we were unable to recover it. 00:32:42.290 [2024-07-24 23:18:14.589294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.290 [2024-07-24 23:18:14.589637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.290 [2024-07-24 23:18:14.589675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.290 qpair failed and we were unable to recover it. 
00:32:42.290 [2024-07-24 23:18:14.589986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.290 [2024-07-24 23:18:14.590350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.290 [2024-07-24 23:18:14.590389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.290 qpair failed and we were unable to recover it. 00:32:42.290 [2024-07-24 23:18:14.590563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.290 [2024-07-24 23:18:14.590929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.290 [2024-07-24 23:18:14.590969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.290 qpair failed and we were unable to recover it. 00:32:42.290 [2024-07-24 23:18:14.591271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.290 [2024-07-24 23:18:14.591633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.291 [2024-07-24 23:18:14.591672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.291 qpair failed and we were unable to recover it. 00:32:42.291 [2024-07-24 23:18:14.592058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.291 [2024-07-24 23:18:14.592342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.291 [2024-07-24 23:18:14.592381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.291 qpair failed and we were unable to recover it. 
00:32:42.291 [2024-07-24 23:18:14.592679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.291 [2024-07-24 23:18:14.593030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.291 [2024-07-24 23:18:14.593070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.291 qpair failed and we were unable to recover it. 00:32:42.291 [2024-07-24 23:18:14.593369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.291 [2024-07-24 23:18:14.593659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.291 [2024-07-24 23:18:14.593698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.291 qpair failed and we were unable to recover it. 00:32:42.291 [2024-07-24 23:18:14.594030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.291 [2024-07-24 23:18:14.594304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.291 [2024-07-24 23:18:14.594343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.291 qpair failed and we were unable to recover it. 00:32:42.291 [2024-07-24 23:18:14.594658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.291 [2024-07-24 23:18:14.594954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.291 [2024-07-24 23:18:14.594994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.291 qpair failed and we were unable to recover it. 
00:32:42.291 [2024-07-24 23:18:14.595306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.291 [2024-07-24 23:18:14.595615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.291 [2024-07-24 23:18:14.595654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.291 qpair failed and we were unable to recover it. 00:32:42.291 [2024-07-24 23:18:14.595955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.291 [2024-07-24 23:18:14.596253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.291 [2024-07-24 23:18:14.596291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.291 qpair failed and we were unable to recover it. 00:32:42.291 [2024-07-24 23:18:14.596661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.291 [2024-07-24 23:18:14.596902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.291 [2024-07-24 23:18:14.596942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.291 qpair failed and we were unable to recover it. 00:32:42.291 [2024-07-24 23:18:14.597299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.291 [2024-07-24 23:18:14.597582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.291 [2024-07-24 23:18:14.597621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.291 qpair failed and we were unable to recover it. 
00:32:42.291 [2024-07-24 23:18:14.597906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.291 [2024-07-24 23:18:14.598142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.291 [2024-07-24 23:18:14.598181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.291 qpair failed and we were unable to recover it. 00:32:42.291 [2024-07-24 23:18:14.598475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.291 [2024-07-24 23:18:14.598742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.291 [2024-07-24 23:18:14.598774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.291 qpair failed and we were unable to recover it. 00:32:42.291 [2024-07-24 23:18:14.598993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.291 [2024-07-24 23:18:14.599302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.291 [2024-07-24 23:18:14.599340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.291 qpair failed and we were unable to recover it. 00:32:42.291 [2024-07-24 23:18:14.599702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.291 [2024-07-24 23:18:14.600088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.291 [2024-07-24 23:18:14.600127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.291 qpair failed and we were unable to recover it. 
00:32:42.291 [2024-07-24 23:18:14.600485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.291 [2024-07-24 23:18:14.600762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.291 [2024-07-24 23:18:14.600802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.291 qpair failed and we were unable to recover it. 00:32:42.291 [2024-07-24 23:18:14.601176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.291 [2024-07-24 23:18:14.601535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.291 [2024-07-24 23:18:14.601574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.291 qpair failed and we were unable to recover it. 00:32:42.291 [2024-07-24 23:18:14.601884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.291 [2024-07-24 23:18:14.602128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.291 [2024-07-24 23:18:14.602167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.291 qpair failed and we were unable to recover it. 00:32:42.291 [2024-07-24 23:18:14.602463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.291 [2024-07-24 23:18:14.602829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.291 [2024-07-24 23:18:14.602868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.291 qpair failed and we were unable to recover it. 
00:32:42.291 [2024-07-24 23:18:14.603142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.291 [2024-07-24 23:18:14.603499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.291 [2024-07-24 23:18:14.603538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.291 qpair failed and we were unable to recover it. 00:32:42.291 [2024-07-24 23:18:14.603782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.291 [2024-07-24 23:18:14.604078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.291 [2024-07-24 23:18:14.604117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.291 qpair failed and we were unable to recover it. 00:32:42.291 [2024-07-24 23:18:14.604338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.291 [2024-07-24 23:18:14.604697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.291 [2024-07-24 23:18:14.604744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.291 qpair failed and we were unable to recover it. 00:32:42.291 [2024-07-24 23:18:14.604976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.291 [2024-07-24 23:18:14.605296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.291 [2024-07-24 23:18:14.605335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.291 qpair failed and we were unable to recover it. 
00:32:42.291 [2024-07-24 23:18:14.605713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.291 [2024-07-24 23:18:14.605923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.291 [2024-07-24 23:18:14.605939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.291 qpair failed and we were unable to recover it. 00:32:42.291 [2024-07-24 23:18:14.606194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.291 [2024-07-24 23:18:14.606521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.291 [2024-07-24 23:18:14.606560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.292 qpair failed and we were unable to recover it. 00:32:42.292 [2024-07-24 23:18:14.606868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.292 [2024-07-24 23:18:14.607156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.292 [2024-07-24 23:18:14.607194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.292 qpair failed and we were unable to recover it. 00:32:42.292 [2024-07-24 23:18:14.607369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.292 [2024-07-24 23:18:14.607710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.292 [2024-07-24 23:18:14.607758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.292 qpair failed and we were unable to recover it. 
00:32:42.292 [2024-07-24 23:18:14.608111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.292 [2024-07-24 23:18:14.608489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.292 [2024-07-24 23:18:14.608528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.292 qpair failed and we were unable to recover it. 00:32:42.292 [2024-07-24 23:18:14.608849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.292 [2024-07-24 23:18:14.609187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.292 [2024-07-24 23:18:14.609226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.292 qpair failed and we were unable to recover it. 00:32:42.292 [2024-07-24 23:18:14.609604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.292 [2024-07-24 23:18:14.609903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.292 [2024-07-24 23:18:14.609943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.292 qpair failed and we were unable to recover it. 00:32:42.292 [2024-07-24 23:18:14.610250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.292 [2024-07-24 23:18:14.610524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.292 [2024-07-24 23:18:14.610563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.292 qpair failed and we were unable to recover it. 
00:32:42.292 [2024-07-24 23:18:14.610860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.292 [2024-07-24 23:18:14.611160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.292 [2024-07-24 23:18:14.611200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.292 qpair failed and we were unable to recover it. 00:32:42.292 [2024-07-24 23:18:14.611499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.292 [2024-07-24 23:18:14.611847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.292 [2024-07-24 23:18:14.611888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.292 qpair failed and we were unable to recover it. 00:32:42.292 [2024-07-24 23:18:14.612211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.292 [2024-07-24 23:18:14.612435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.292 [2024-07-24 23:18:14.612482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.292 qpair failed and we were unable to recover it. 00:32:42.292 [2024-07-24 23:18:14.612732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.292 [2024-07-24 23:18:14.612995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.292 [2024-07-24 23:18:14.613033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.292 qpair failed and we were unable to recover it. 
00:32:42.292 [2024-07-24 23:18:14.613414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.292 [2024-07-24 23:18:14.613564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.292 [2024-07-24 23:18:14.613602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.292 qpair failed and we were unable to recover it. 00:32:42.292 [2024-07-24 23:18:14.613969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.292 [2024-07-24 23:18:14.614173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.292 [2024-07-24 23:18:14.614211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.292 qpair failed and we were unable to recover it. 00:32:42.292 [2024-07-24 23:18:14.614595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.292 [2024-07-24 23:18:14.614951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.292 [2024-07-24 23:18:14.614968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.292 qpair failed and we were unable to recover it. 00:32:42.292 [2024-07-24 23:18:14.615147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.292 [2024-07-24 23:18:14.615320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.292 [2024-07-24 23:18:14.615335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420 00:32:42.292 qpair failed and we were unable to recover it. 
00:32:42.293 [2024-07-24 23:18:14.630590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.293 [2024-07-24 23:18:14.630886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.293 [2024-07-24 23:18:14.630902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:42.293 qpair failed and we were unable to recover it.
00:32:42.293 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 44: 3413159 Killed "${NVMF_APP[@]}" "$@"
00:32:42.293 [2024-07-24 23:18:14.631169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.293 [2024-07-24 23:18:14.631413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.293 [2024-07-24 23:18:14.631429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:42.293 qpair failed and we were unable to recover it.
00:32:42.293 [2024-07-24 23:18:14.631752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.293 23:18:14 -- host/target_disconnect.sh@56 -- # disconnect_init 10.0.0.2
00:32:42.293 [2024-07-24 23:18:14.632071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.293 [2024-07-24 23:18:14.632087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:42.293 qpair failed and we were unable to recover it.
00:32:42.293 23:18:14 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:32:42.293 [2024-07-24 23:18:14.632400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.293 23:18:14 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:32:42.293 [2024-07-24 23:18:14.632645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.293 [2024-07-24 23:18:14.632661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:42.294 qpair failed and we were unable to recover it.
00:32:42.294 23:18:14 -- common/autotest_common.sh@712 -- # xtrace_disable
00:32:42.294 [2024-07-24 23:18:14.632960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.294 23:18:14 -- common/autotest_common.sh@10 -- # set +x
00:32:42.294 [2024-07-24 23:18:14.633124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.294 [2024-07-24 23:18:14.633140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:42.294 qpair failed and we were unable to recover it.
00:32:42.294 [2024-07-24 23:18:14.633475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.294 [2024-07-24 23:18:14.633726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.294 [2024-07-24 23:18:14.633742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:42.294 qpair failed and we were unable to recover it.
00:32:42.294 23:18:14 -- nvmf/common.sh@469 -- # nvmfpid=3413991
00:32:42.294 [2024-07-24 23:18:14.640514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.294 [2024-07-24 23:18:14.640763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.294 [2024-07-24 23:18:14.640781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:42.294 qpair failed and we were unable to recover it.
00:32:42.294 23:18:14 -- nvmf/common.sh@470 -- # waitforlisten 3413991
00:32:42.294 23:18:14 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:32:42.294 [2024-07-24 23:18:14.641022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.294 23:18:14 -- common/autotest_common.sh@819 -- # '[' -z 3413991 ']'
00:32:42.294 [2024-07-24 23:18:14.641277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.294 [2024-07-24 23:18:14.641294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:42.294 qpair failed and we were unable to recover it.
00:32:42.294 [2024-07-24 23:18:14.641476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.294 23:18:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:32:42.294 23:18:14 -- common/autotest_common.sh@824 -- # local max_retries=100
00:32:42.294 [2024-07-24 23:18:14.641796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.294 [2024-07-24 23:18:14.641813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:42.294 qpair failed and we were unable to recover it.
00:32:42.294 [2024-07-24 23:18:14.642045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.294 23:18:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:32:42.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:32:42.294 [2024-07-24 23:18:14.642237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.294 [2024-07-24 23:18:14.642253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:42.294 qpair failed and we were unable to recover it.
00:32:42.295 23:18:14 -- common/autotest_common.sh@828 -- # xtrace_disable
00:32:42.295 [2024-07-24 23:18:14.642598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.295 23:18:14 -- common/autotest_common.sh@10 -- # set +x
00:32:42.295 [2024-07-24 23:18:14.642841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.295 [2024-07-24 23:18:14.642858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:42.295 qpair failed and we were unable to recover it.
00:32:42.295 [2024-07-24 23:18:14.643169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.295 [2024-07-24 23:18:14.643482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.295 [2024-07-24 23:18:14.643498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:42.295 qpair failed and we were unable to recover it.
00:32:42.295 [2024-07-24 23:18:14.645785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.295 [2024-07-24 23:18:14.646034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.295 [2024-07-24 23:18:14.646050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e0000b90 with addr=10.0.0.2, port=4420
00:32:42.295 qpair failed and we were unable to recover it.
00:32:42.295 [2024-07-24 23:18:14.646342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.295 [2024-07-24 23:18:14.646605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.295 [2024-07-24 23:18:14.646619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.295 qpair failed and we were unable to recover it.
00:32:42.295 [2024-07-24 23:18:14.646866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.295 [2024-07-24 23:18:14.647058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.295 [2024-07-24 23:18:14.647070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.295 qpair failed and we were unable to recover it.
00:32:42.295 [2024-07-24 23:18:14.647294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.295 [2024-07-24 23:18:14.647510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.295 [2024-07-24 23:18:14.647522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.295 qpair failed and we were unable to recover it.
00:32:42.296 [2024-07-24 23:18:14.659168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.296 [2024-07-24 23:18:14.659386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.296 [2024-07-24 23:18:14.659398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.296 qpair failed and we were unable to recover it. 00:32:42.296 [2024-07-24 23:18:14.659635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.296 [2024-07-24 23:18:14.659802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.296 [2024-07-24 23:18:14.659814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.296 qpair failed and we were unable to recover it. 00:32:42.296 [2024-07-24 23:18:14.660053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.296 [2024-07-24 23:18:14.660387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.296 [2024-07-24 23:18:14.660399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.296 qpair failed and we were unable to recover it. 00:32:42.296 [2024-07-24 23:18:14.660563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.296 [2024-07-24 23:18:14.660793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.296 [2024-07-24 23:18:14.660805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.296 qpair failed and we were unable to recover it. 
00:32:42.296 [2024-07-24 23:18:14.660959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.296 [2024-07-24 23:18:14.661177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.296 [2024-07-24 23:18:14.661189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.296 qpair failed and we were unable to recover it. 00:32:42.296 [2024-07-24 23:18:14.661447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.296 [2024-07-24 23:18:14.661680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.296 [2024-07-24 23:18:14.661692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.296 qpair failed and we were unable to recover it. 00:32:42.296 [2024-07-24 23:18:14.661939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.296 [2024-07-24 23:18:14.662108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.296 [2024-07-24 23:18:14.662119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.296 qpair failed and we were unable to recover it. 00:32:42.296 [2024-07-24 23:18:14.662357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.297 [2024-07-24 23:18:14.662594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.297 [2024-07-24 23:18:14.662606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.297 qpair failed and we were unable to recover it. 
00:32:42.297 [2024-07-24 23:18:14.662839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.297 [2024-07-24 23:18:14.663130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.297 [2024-07-24 23:18:14.663141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.297 qpair failed and we were unable to recover it. 00:32:42.297 [2024-07-24 23:18:14.663376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.297 [2024-07-24 23:18:14.663683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.297 [2024-07-24 23:18:14.663694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.297 qpair failed and we were unable to recover it. 00:32:42.297 [2024-07-24 23:18:14.663948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.297 [2024-07-24 23:18:14.664178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.297 [2024-07-24 23:18:14.664190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.297 qpair failed and we were unable to recover it. 00:32:42.297 [2024-07-24 23:18:14.664426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.297 [2024-07-24 23:18:14.664597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.297 [2024-07-24 23:18:14.664609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.297 qpair failed and we were unable to recover it. 
00:32:42.297 [2024-07-24 23:18:14.664886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.297 [2024-07-24 23:18:14.665105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.297 [2024-07-24 23:18:14.665117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.297 qpair failed and we were unable to recover it. 00:32:42.297 [2024-07-24 23:18:14.665350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.297 [2024-07-24 23:18:14.665654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.297 [2024-07-24 23:18:14.665665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.297 qpair failed and we were unable to recover it. 00:32:42.297 [2024-07-24 23:18:14.665892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.297 [2024-07-24 23:18:14.666128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.297 [2024-07-24 23:18:14.666140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.297 qpair failed and we were unable to recover it. 00:32:42.297 [2024-07-24 23:18:14.666380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.297 [2024-07-24 23:18:14.666597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.297 [2024-07-24 23:18:14.666609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.297 qpair failed and we were unable to recover it. 
00:32:42.297 [2024-07-24 23:18:14.666894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.297 [2024-07-24 23:18:14.667147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.297 [2024-07-24 23:18:14.667159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.297 qpair failed and we were unable to recover it. 00:32:42.297 [2024-07-24 23:18:14.667477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.297 [2024-07-24 23:18:14.667721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.297 [2024-07-24 23:18:14.667733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.297 qpair failed and we were unable to recover it. 00:32:42.297 [2024-07-24 23:18:14.668024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.297 [2024-07-24 23:18:14.668263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.297 [2024-07-24 23:18:14.668275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.297 qpair failed and we were unable to recover it. 00:32:42.297 [2024-07-24 23:18:14.668564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.297 [2024-07-24 23:18:14.668802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.297 [2024-07-24 23:18:14.668814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.297 qpair failed and we were unable to recover it. 
00:32:42.297 [2024-07-24 23:18:14.669039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.297 [2024-07-24 23:18:14.669255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.297 [2024-07-24 23:18:14.669267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.297 qpair failed and we were unable to recover it. 00:32:42.297 [2024-07-24 23:18:14.669521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.297 [2024-07-24 23:18:14.669756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.297 [2024-07-24 23:18:14.669768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.297 qpair failed and we were unable to recover it. 00:32:42.297 [2024-07-24 23:18:14.670108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.297 [2024-07-24 23:18:14.670413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.297 [2024-07-24 23:18:14.670425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.297 qpair failed and we were unable to recover it. 00:32:42.297 [2024-07-24 23:18:14.670757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.297 [2024-07-24 23:18:14.671016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.297 [2024-07-24 23:18:14.671029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.297 qpair failed and we were unable to recover it. 
00:32:42.297 [2024-07-24 23:18:14.671277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.297 [2024-07-24 23:18:14.671588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.297 [2024-07-24 23:18:14.671599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.297 qpair failed and we were unable to recover it. 00:32:42.297 [2024-07-24 23:18:14.671846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.297 [2024-07-24 23:18:14.672100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.297 [2024-07-24 23:18:14.672112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.297 qpair failed and we were unable to recover it. 00:32:42.297 [2024-07-24 23:18:14.672416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.297 [2024-07-24 23:18:14.672733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.297 [2024-07-24 23:18:14.672745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.297 qpair failed and we were unable to recover it. 00:32:42.298 [2024-07-24 23:18:14.673001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.298 [2024-07-24 23:18:14.673301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.298 [2024-07-24 23:18:14.673313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.298 qpair failed and we were unable to recover it. 
00:32:42.298 [2024-07-24 23:18:14.673620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.298 [2024-07-24 23:18:14.673930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.298 [2024-07-24 23:18:14.673942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.298 qpair failed and we were unable to recover it. 00:32:42.298 [2024-07-24 23:18:14.674194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.298 [2024-07-24 23:18:14.674501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.298 [2024-07-24 23:18:14.674513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.298 qpair failed and we were unable to recover it. 00:32:42.298 [2024-07-24 23:18:14.674823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.298 [2024-07-24 23:18:14.675083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.298 [2024-07-24 23:18:14.675095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.298 qpair failed and we were unable to recover it. 00:32:42.298 [2024-07-24 23:18:14.675397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.298 [2024-07-24 23:18:14.675707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.298 [2024-07-24 23:18:14.675724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.298 qpair failed and we were unable to recover it. 
00:32:42.298 [2024-07-24 23:18:14.676036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.298 [2024-07-24 23:18:14.676348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.298 [2024-07-24 23:18:14.676360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.298 qpair failed and we were unable to recover it. 00:32:42.298 [2024-07-24 23:18:14.676620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.298 [2024-07-24 23:18:14.676868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.298 [2024-07-24 23:18:14.676882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.298 qpair failed and we were unable to recover it. 00:32:42.298 [2024-07-24 23:18:14.677173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.298 [2024-07-24 23:18:14.677481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.298 [2024-07-24 23:18:14.677492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.298 qpair failed and we were unable to recover it. 00:32:42.298 [2024-07-24 23:18:14.677724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.298 [2024-07-24 23:18:14.678075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.298 [2024-07-24 23:18:14.678087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.298 qpair failed and we were unable to recover it. 
00:32:42.298 [2024-07-24 23:18:14.678390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.298 [2024-07-24 23:18:14.678555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.298 [2024-07-24 23:18:14.678566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.298 qpair failed and we were unable to recover it. 00:32:42.298 [2024-07-24 23:18:14.678823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.298 [2024-07-24 23:18:14.679158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.298 [2024-07-24 23:18:14.679170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.298 qpair failed and we were unable to recover it. 00:32:42.298 [2024-07-24 23:18:14.679442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.298 [2024-07-24 23:18:14.679661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.298 [2024-07-24 23:18:14.679673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.298 qpair failed and we were unable to recover it. 00:32:42.298 [2024-07-24 23:18:14.679962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.298 [2024-07-24 23:18:14.680195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.298 [2024-07-24 23:18:14.680214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.298 qpair failed and we were unable to recover it. 
00:32:42.298 [2024-07-24 23:18:14.680517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.298 [2024-07-24 23:18:14.680811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.298 [2024-07-24 23:18:14.680823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.298 qpair failed and we were unable to recover it. 00:32:42.298 [2024-07-24 23:18:14.681143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.298 [2024-07-24 23:18:14.681317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.298 [2024-07-24 23:18:14.681328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.298 qpair failed and we were unable to recover it. 00:32:42.298 [2024-07-24 23:18:14.681570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.298 [2024-07-24 23:18:14.681876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.298 [2024-07-24 23:18:14.681888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.298 qpair failed and we were unable to recover it. 00:32:42.298 [2024-07-24 23:18:14.682104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.298 [2024-07-24 23:18:14.682388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.298 [2024-07-24 23:18:14.682402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.298 qpair failed and we were unable to recover it. 
00:32:42.298 [2024-07-24 23:18:14.682698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.298 [2024-07-24 23:18:14.682924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.298 [2024-07-24 23:18:14.682937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.298 qpair failed and we were unable to recover it. 00:32:42.298 [2024-07-24 23:18:14.683127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.298 [2024-07-24 23:18:14.683357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.298 [2024-07-24 23:18:14.683369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.298 qpair failed and we were unable to recover it. 00:32:42.298 [2024-07-24 23:18:14.683609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.298 [2024-07-24 23:18:14.683843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.298 [2024-07-24 23:18:14.683856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.298 qpair failed and we were unable to recover it. 00:32:42.298 [2024-07-24 23:18:14.684171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.299 [2024-07-24 23:18:14.684477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.299 [2024-07-24 23:18:14.684489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.299 qpair failed and we were unable to recover it. 
00:32:42.299 [2024-07-24 23:18:14.684778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.299 [2024-07-24 23:18:14.685084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.299 [2024-07-24 23:18:14.685097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.299 qpair failed and we were unable to recover it. 00:32:42.299 [2024-07-24 23:18:14.685430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.299 [2024-07-24 23:18:14.685737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.299 [2024-07-24 23:18:14.685751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.299 qpair failed and we were unable to recover it. 00:32:42.299 [2024-07-24 23:18:14.685998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.299 [2024-07-24 23:18:14.686307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.299 [2024-07-24 23:18:14.686318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.299 qpair failed and we were unable to recover it. 00:32:42.299 [2024-07-24 23:18:14.686588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.299 [2024-07-24 23:18:14.686824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.299 [2024-07-24 23:18:14.686836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.299 qpair failed and we were unable to recover it. 
00:32:42.299 [2024-07-24 23:18:14.687095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.299 [2024-07-24 23:18:14.687318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.299 [2024-07-24 23:18:14.687331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.299 qpair failed and we were unable to recover it. 00:32:42.299 [2024-07-24 23:18:14.687630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.299 [2024-07-24 23:18:14.687960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.299 [2024-07-24 23:18:14.687974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.299 qpair failed and we were unable to recover it. 00:32:42.299 [2024-07-24 23:18:14.688232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.299 [2024-07-24 23:18:14.688470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.299 [2024-07-24 23:18:14.688481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.299 qpair failed and we were unable to recover it. 00:32:42.299 [2024-07-24 23:18:14.688738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.299 [2024-07-24 23:18:14.689077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.299 [2024-07-24 23:18:14.689089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.299 qpair failed and we were unable to recover it. 
00:32:42.299 [2024-07-24 23:18:14.689427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.299 [2024-07-24 23:18:14.689447] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:32:42.299 [2024-07-24 23:18:14.689493] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:42.299 [2024-07-24 23:18:14.689689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.299 [2024-07-24 23:18:14.689701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.299 qpair failed and we were unable to recover it. 00:32:42.299 [2024-07-24 23:18:14.689954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.299 [2024-07-24 23:18:14.690194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.299 [2024-07-24 23:18:14.690206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.299 qpair failed and we were unable to recover it. 00:32:42.299 [2024-07-24 23:18:14.690446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.299 [2024-07-24 23:18:14.690670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.299 [2024-07-24 23:18:14.690682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.299 qpair failed and we were unable to recover it. 
00:32:42.299 [2024-07-24 23:18:14.691004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.299 [2024-07-24 23:18:14.691356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.299 [2024-07-24 23:18:14.691368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.299 qpair failed and we were unable to recover it. 00:32:42.299 [2024-07-24 23:18:14.691658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.299 [2024-07-24 23:18:14.691891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.299 [2024-07-24 23:18:14.691903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.299 qpair failed and we were unable to recover it. 00:32:42.299 [2024-07-24 23:18:14.692084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.299 [2024-07-24 23:18:14.692322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.299 [2024-07-24 23:18:14.692334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.299 qpair failed and we were unable to recover it. 00:32:42.299 [2024-07-24 23:18:14.692557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.299 [2024-07-24 23:18:14.692775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.299 [2024-07-24 23:18:14.692787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.299 qpair failed and we were unable to recover it. 
00:32:42.299 [2024-07-24 23:18:14.693105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.299 [2024-07-24 23:18:14.693324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.299 [2024-07-24 23:18:14.693336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.299 qpair failed and we were unable to recover it. 00:32:42.299 [2024-07-24 23:18:14.693671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.299 [2024-07-24 23:18:14.693981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.299 [2024-07-24 23:18:14.693993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.299 qpair failed and we were unable to recover it. 00:32:42.299 [2024-07-24 23:18:14.694174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.299 [2024-07-24 23:18:14.694502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.299 [2024-07-24 23:18:14.694514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.299 qpair failed and we were unable to recover it. 00:32:42.299 [2024-07-24 23:18:14.694763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.299 [2024-07-24 23:18:14.694931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.299 [2024-07-24 23:18:14.694942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.299 qpair failed and we were unable to recover it. 
00:32:42.299 [2024-07-24 23:18:14.695205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.299 [2024-07-24 23:18:14.695489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.299 [2024-07-24 23:18:14.695500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.299 qpair failed and we were unable to recover it. 00:32:42.299 [2024-07-24 23:18:14.695815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.299 [2024-07-24 23:18:14.696077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.299 [2024-07-24 23:18:14.696089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.299 qpair failed and we were unable to recover it. 00:32:42.300 [2024-07-24 23:18:14.696332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.300 [2024-07-24 23:18:14.696624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.300 [2024-07-24 23:18:14.696636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.300 qpair failed and we were unable to recover it. 00:32:42.300 [2024-07-24 23:18:14.696917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.300 [2024-07-24 23:18:14.697226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.300 [2024-07-24 23:18:14.697238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.300 qpair failed and we were unable to recover it. 
00:32:42.300 [2024-07-24 23:18:14.697571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.300 [2024-07-24 23:18:14.697807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.300 [2024-07-24 23:18:14.697820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.300 qpair failed and we were unable to recover it. 00:32:42.300 [2024-07-24 23:18:14.698131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.300 [2024-07-24 23:18:14.698350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.300 [2024-07-24 23:18:14.698362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.300 qpair failed and we were unable to recover it. 00:32:42.300 [2024-07-24 23:18:14.698634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.568 [2024-07-24 23:18:14.698871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.568 [2024-07-24 23:18:14.698883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.568 qpair failed and we were unable to recover it. 00:32:42.568 [2024-07-24 23:18:14.699181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.568 [2024-07-24 23:18:14.699497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.568 [2024-07-24 23:18:14.699509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.568 qpair failed and we were unable to recover it. 
00:32:42.568 [2024-07-24 23:18:14.699867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.568 [2024-07-24 23:18:14.700128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.568 [2024-07-24 23:18:14.700140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.568 qpair failed and we were unable to recover it. 00:32:42.568 [2024-07-24 23:18:14.700433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.568 [2024-07-24 23:18:14.700747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.568 [2024-07-24 23:18:14.700759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.568 qpair failed and we were unable to recover it. 00:32:42.568 [2024-07-24 23:18:14.701013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.568 [2024-07-24 23:18:14.701261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.568 [2024-07-24 23:18:14.701272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.568 qpair failed and we were unable to recover it. 00:32:42.568 [2024-07-24 23:18:14.701574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.568 [2024-07-24 23:18:14.701831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.568 [2024-07-24 23:18:14.701843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.568 qpair failed and we were unable to recover it. 
00:32:42.568 [2024-07-24 23:18:14.702157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.568 [2024-07-24 23:18:14.702328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.568 [2024-07-24 23:18:14.702340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.568 qpair failed and we were unable to recover it. 00:32:42.568 [2024-07-24 23:18:14.702658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.568 [2024-07-24 23:18:14.702886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.568 [2024-07-24 23:18:14.702898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.568 qpair failed and we were unable to recover it. 00:32:42.568 [2024-07-24 23:18:14.703134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.568 [2024-07-24 23:18:14.703432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.568 [2024-07-24 23:18:14.703444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.568 qpair failed and we were unable to recover it. 00:32:42.568 [2024-07-24 23:18:14.703695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.568 [2024-07-24 23:18:14.704009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.568 [2024-07-24 23:18:14.704021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.568 qpair failed and we were unable to recover it. 
00:32:42.568 [2024-07-24 23:18:14.704284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.569 [2024-07-24 23:18:14.704526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.569 [2024-07-24 23:18:14.704538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.569 qpair failed and we were unable to recover it. 00:32:42.569 [2024-07-24 23:18:14.704798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.569 [2024-07-24 23:18:14.704981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.569 [2024-07-24 23:18:14.704993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.569 qpair failed and we were unable to recover it. 00:32:42.569 [2024-07-24 23:18:14.705284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.569 [2024-07-24 23:18:14.705618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.569 [2024-07-24 23:18:14.705629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.569 qpair failed and we were unable to recover it. 00:32:42.569 [2024-07-24 23:18:14.705943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.569 [2024-07-24 23:18:14.706257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.569 [2024-07-24 23:18:14.706269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.569 qpair failed and we were unable to recover it. 
00:32:42.569 [2024-07-24 23:18:14.706564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.569 [2024-07-24 23:18:14.706789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.569 [2024-07-24 23:18:14.706801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.569 qpair failed and we were unable to recover it. 00:32:42.569 [2024-07-24 23:18:14.707098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.569 [2024-07-24 23:18:14.707338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.569 [2024-07-24 23:18:14.707350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.569 qpair failed and we were unable to recover it. 00:32:42.569 [2024-07-24 23:18:14.707642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.569 [2024-07-24 23:18:14.707875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.569 [2024-07-24 23:18:14.707887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.569 qpair failed and we were unable to recover it. 00:32:42.569 [2024-07-24 23:18:14.708214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.569 [2024-07-24 23:18:14.708503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.569 [2024-07-24 23:18:14.708515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.569 qpair failed and we were unable to recover it. 
00:32:42.569 [2024-07-24 23:18:14.708769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.569 [2024-07-24 23:18:14.709032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.569 [2024-07-24 23:18:14.709043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.569 qpair failed and we were unable to recover it. 00:32:42.569 [2024-07-24 23:18:14.709374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.569 [2024-07-24 23:18:14.709680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.569 [2024-07-24 23:18:14.709692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.569 qpair failed and we were unable to recover it. 00:32:42.569 [2024-07-24 23:18:14.709956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.569 [2024-07-24 23:18:14.710192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.569 [2024-07-24 23:18:14.710204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.569 qpair failed and we were unable to recover it. 00:32:42.569 [2024-07-24 23:18:14.710538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.569 [2024-07-24 23:18:14.710798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.569 [2024-07-24 23:18:14.710811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.569 qpair failed and we were unable to recover it. 
00:32:42.569 [2024-07-24 23:18:14.711064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.569 [2024-07-24 23:18:14.711372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.569 [2024-07-24 23:18:14.711384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.569 qpair failed and we were unable to recover it. 00:32:42.569 [2024-07-24 23:18:14.711674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.569 [2024-07-24 23:18:14.711990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.569 [2024-07-24 23:18:14.712002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.569 qpair failed and we were unable to recover it. 00:32:42.569 [2024-07-24 23:18:14.712197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.569 [2024-07-24 23:18:14.712525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.569 [2024-07-24 23:18:14.712537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.569 qpair failed and we were unable to recover it. 00:32:42.569 [2024-07-24 23:18:14.712850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.569 [2024-07-24 23:18:14.713140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.569 [2024-07-24 23:18:14.713151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.569 qpair failed and we were unable to recover it. 
00:32:42.569 [2024-07-24 23:18:14.713497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.569 [2024-07-24 23:18:14.713788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.569 [2024-07-24 23:18:14.713800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.569 qpair failed and we were unable to recover it. 00:32:42.569 [2024-07-24 23:18:14.714108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.569 [2024-07-24 23:18:14.714329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.569 [2024-07-24 23:18:14.714341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.569 qpair failed and we were unable to recover it. 00:32:42.569 [2024-07-24 23:18:14.714663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.569 [2024-07-24 23:18:14.714976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.569 [2024-07-24 23:18:14.714988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.569 qpair failed and we were unable to recover it. 00:32:42.569 [2024-07-24 23:18:14.715264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.569 [2024-07-24 23:18:14.715606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.569 [2024-07-24 23:18:14.715619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.569 qpair failed and we were unable to recover it. 
00:32:42.569 [2024-07-24 23:18:14.715913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.569 [2024-07-24 23:18:14.716104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.569 [2024-07-24 23:18:14.716116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.569 qpair failed and we were unable to recover it. 00:32:42.569 [2024-07-24 23:18:14.716413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.569 [2024-07-24 23:18:14.716677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.569 [2024-07-24 23:18:14.716689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.569 qpair failed and we were unable to recover it. 00:32:42.569 [2024-07-24 23:18:14.717117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.569 [2024-07-24 23:18:14.717403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.569 [2024-07-24 23:18:14.717415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.569 qpair failed and we were unable to recover it. 00:32:42.569 [2024-07-24 23:18:14.717751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.569 [2024-07-24 23:18:14.718019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.569 [2024-07-24 23:18:14.718031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.569 qpair failed and we were unable to recover it. 
00:32:42.569 [2024-07-24 23:18:14.718344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.569 [2024-07-24 23:18:14.718677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.569 [2024-07-24 23:18:14.718689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.569 qpair failed and we were unable to recover it. 00:32:42.569 [2024-07-24 23:18:14.718981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.569 [2024-07-24 23:18:14.719213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.569 [2024-07-24 23:18:14.719225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.569 qpair failed and we were unable to recover it. 00:32:42.569 [2024-07-24 23:18:14.719405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.569 [2024-07-24 23:18:14.719717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.569 [2024-07-24 23:18:14.719730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.569 qpair failed and we were unable to recover it. 00:32:42.569 [2024-07-24 23:18:14.719982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.569 [2024-07-24 23:18:14.720280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.569 [2024-07-24 23:18:14.720292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.569 qpair failed and we were unable to recover it. 
00:32:42.569 [2024-07-24 23:18:14.720538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.569 [2024-07-24 23:18:14.720793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.570 [2024-07-24 23:18:14.720805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.570 qpair failed and we were unable to recover it. 00:32:42.570 [2024-07-24 23:18:14.721044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.570 [2024-07-24 23:18:14.721233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.570 [2024-07-24 23:18:14.721245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.570 qpair failed and we were unable to recover it. 00:32:42.570 [2024-07-24 23:18:14.721549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.570 [2024-07-24 23:18:14.721789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.570 [2024-07-24 23:18:14.721802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.570 qpair failed and we were unable to recover it. 00:32:42.570 [2024-07-24 23:18:14.722114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.570 [2024-07-24 23:18:14.722352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.570 [2024-07-24 23:18:14.722364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.570 qpair failed and we were unable to recover it. 
00:32:42.570 [2024-07-24 23:18:14.722619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.570 [2024-07-24 23:18:14.722875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.570 [2024-07-24 23:18:14.722887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.570 qpair failed and we were unable to recover it. 00:32:42.570 [2024-07-24 23:18:14.723079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.570 [2024-07-24 23:18:14.723357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.570 [2024-07-24 23:18:14.723369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.570 qpair failed and we were unable to recover it. 00:32:42.570 [2024-07-24 23:18:14.723607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.570 [2024-07-24 23:18:14.723860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.570 [2024-07-24 23:18:14.723873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.570 qpair failed and we were unable to recover it. 00:32:42.570 [2024-07-24 23:18:14.724052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.570 [2024-07-24 23:18:14.724383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.570 [2024-07-24 23:18:14.724394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.570 qpair failed and we were unable to recover it. 
00:32:42.570 [2024-07-24 23:18:14.724696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.570 [2024-07-24 23:18:14.724984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.570 [2024-07-24 23:18:14.724996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.570 qpair failed and we were unable to recover it. 00:32:42.570 [2024-07-24 23:18:14.725287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.570 [2024-07-24 23:18:14.725572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.570 [2024-07-24 23:18:14.725584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.570 qpair failed and we were unable to recover it. 00:32:42.570 [2024-07-24 23:18:14.725823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.570 [2024-07-24 23:18:14.726054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.570 [2024-07-24 23:18:14.726066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.570 qpair failed and we were unable to recover it. 00:32:42.570 [2024-07-24 23:18:14.726320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.570 [2024-07-24 23:18:14.726564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.570 [2024-07-24 23:18:14.726576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.570 qpair failed and we were unable to recover it. 
00:32:42.570 [2024-07-24 23:18:14.726911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.570 [2024-07-24 23:18:14.727155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.570 [2024-07-24 23:18:14.727168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.570 qpair failed and we were unable to recover it. 00:32:42.570 [2024-07-24 23:18:14.727407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.570 [2024-07-24 23:18:14.727656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.570 [2024-07-24 23:18:14.727667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.570 qpair failed and we were unable to recover it. 00:32:42.570 [2024-07-24 23:18:14.727910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.570 [2024-07-24 23:18:14.728154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.570 [2024-07-24 23:18:14.728166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.570 qpair failed and we were unable to recover it. 00:32:42.570 [2024-07-24 23:18:14.728410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.570 EAL: No free 2048 kB hugepages reported on node 1 00:32:42.570 [2024-07-24 23:18:14.728707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.570 [2024-07-24 23:18:14.728728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.570 qpair failed and we were unable to recover it. 
00:32:42.570 [2024-07-24 23:18:14.728919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.570 [2024-07-24 23:18:14.729104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.570 [2024-07-24 23:18:14.729116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.570 qpair failed and we were unable to recover it.
[... the same sequence (two posix.c:1032:posix_sock_create connect() failures with errno = 111, followed by nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock reporting a sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it.") repeats with only the timestamps changing, from 23:18:14.729461 through 23:18:14.777420 ...]
00:32:42.573 [2024-07-24 23:18:14.777682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.573 [2024-07-24 23:18:14.778012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.573 [2024-07-24 23:18:14.778024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.573 qpair failed and we were unable to recover it. 00:32:42.573 [2024-07-24 23:18:14.778249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.573 [2024-07-24 23:18:14.778578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.573 [2024-07-24 23:18:14.778590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.573 qpair failed and we were unable to recover it. 00:32:42.573 [2024-07-24 23:18:14.778918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.573 [2024-07-24 23:18:14.779096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.573 [2024-07-24 23:18:14.779108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.573 qpair failed and we were unable to recover it. 00:32:42.573 [2024-07-24 23:18:14.779330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.573 [2024-07-24 23:18:14.779571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.573 [2024-07-24 23:18:14.779583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.573 qpair failed and we were unable to recover it. 
00:32:42.573 [2024-07-24 23:18:14.779821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.573 [2024-07-24 23:18:14.780129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.573 [2024-07-24 23:18:14.780141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.574 qpair failed and we were unable to recover it. 00:32:42.574 [2024-07-24 23:18:14.780328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.574 [2024-07-24 23:18:14.780555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.574 [2024-07-24 23:18:14.780567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.574 qpair failed and we were unable to recover it. 00:32:42.574 [2024-07-24 23:18:14.780857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.574 [2024-07-24 23:18:14.781074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.574 [2024-07-24 23:18:14.781087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.574 qpair failed and we were unable to recover it. 00:32:42.574 [2024-07-24 23:18:14.781260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.574 [2024-07-24 23:18:14.781593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.574 [2024-07-24 23:18:14.781605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.574 qpair failed and we were unable to recover it. 
00:32:42.574 [2024-07-24 23:18:14.781844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.574 [2024-07-24 23:18:14.782043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.574 [2024-07-24 23:18:14.782056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.574 qpair failed and we were unable to recover it.
00:32:42.574 [2024-07-24 23:18:14.782239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.574 [2024-07-24 23:18:14.782458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.574 [2024-07-24 23:18:14.782469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.574 qpair failed and we were unable to recover it.
00:32:42.574 [2024-07-24 23:18:14.782773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.574 [2024-07-24 23:18:14.783075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.574 [2024-07-24 23:18:14.783087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.574 qpair failed and we were unable to recover it.
00:32:42.574 [2024-07-24 23:18:14.783350] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:32:42.574 [2024-07-24 23:18:14.783415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.574 [2024-07-24 23:18:14.783662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.574 [2024-07-24 23:18:14.783674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.574 qpair failed and we were unable to recover it.
00:32:42.574 [2024-07-24 23:18:14.783994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.574 [2024-07-24 23:18:14.784229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.574 [2024-07-24 23:18:14.784241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.574 qpair failed and we were unable to recover it.
00:32:42.574 [2024-07-24 23:18:14.784566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.574 [2024-07-24 23:18:14.784804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.574 [2024-07-24 23:18:14.784817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.574 qpair failed and we were unable to recover it.
00:32:42.574 [2024-07-24 23:18:14.785083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.574 [2024-07-24 23:18:14.785396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.574 [2024-07-24 23:18:14.785408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.574 qpair failed and we were unable to recover it.
00:32:42.574 [2024-07-24 23:18:14.785646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.574 [2024-07-24 23:18:14.785919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.574 [2024-07-24 23:18:14.785933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.574 qpair failed and we were unable to recover it.
00:32:42.574 [2024-07-24 23:18:14.786183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.574 [2024-07-24 23:18:14.786426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.574 [2024-07-24 23:18:14.786438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.574 qpair failed and we were unable to recover it.
00:32:42.574 [2024-07-24 23:18:14.786748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.574 [2024-07-24 23:18:14.787084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.574 [2024-07-24 23:18:14.787097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.574 qpair failed and we were unable to recover it.
00:32:42.574 [2024-07-24 23:18:14.787290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.574 [2024-07-24 23:18:14.787531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.574 [2024-07-24 23:18:14.787546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.574 qpair failed and we were unable to recover it.
00:32:42.574 [2024-07-24 23:18:14.787855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.574 [2024-07-24 23:18:14.788120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.574 [2024-07-24 23:18:14.788132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.574 qpair failed and we were unable to recover it.
00:32:42.574 [2024-07-24 23:18:14.788431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.574 [2024-07-24 23:18:14.788743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.574 [2024-07-24 23:18:14.788756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.574 qpair failed and we were unable to recover it.
00:32:42.574 [2024-07-24 23:18:14.789053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.574 [2024-07-24 23:18:14.789368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.574 [2024-07-24 23:18:14.789381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.574 qpair failed and we were unable to recover it.
00:32:42.574 [2024-07-24 23:18:14.789626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.574 [2024-07-24 23:18:14.789793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.574 [2024-07-24 23:18:14.789806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.574 qpair failed and we were unable to recover it.
00:32:42.574 [2024-07-24 23:18:14.790055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.574 [2024-07-24 23:18:14.790301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.574 [2024-07-24 23:18:14.790314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.574 qpair failed and we were unable to recover it.
00:32:42.574 [2024-07-24 23:18:14.790624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.574 [2024-07-24 23:18:14.790900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.574 [2024-07-24 23:18:14.790912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.574 qpair failed and we were unable to recover it.
00:32:42.574 [2024-07-24 23:18:14.791174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.574 [2024-07-24 23:18:14.791414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.574 [2024-07-24 23:18:14.791427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.574 qpair failed and we were unable to recover it.
00:32:42.574 [2024-07-24 23:18:14.791768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.574 [2024-07-24 23:18:14.792005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.574 [2024-07-24 23:18:14.792017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.574 qpair failed and we were unable to recover it.
00:32:42.574 [2024-07-24 23:18:14.792257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.574 [2024-07-24 23:18:14.792517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.574 [2024-07-24 23:18:14.792529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.574 qpair failed and we were unable to recover it.
00:32:42.574 [2024-07-24 23:18:14.792686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.574 [2024-07-24 23:18:14.792906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.574 [2024-07-24 23:18:14.792922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.574 qpair failed and we were unable to recover it.
00:32:42.574 [2024-07-24 23:18:14.793160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.574 [2024-07-24 23:18:14.793419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.574 [2024-07-24 23:18:14.793432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.574 qpair failed and we were unable to recover it.
00:32:42.574 [2024-07-24 23:18:14.793758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.574 [2024-07-24 23:18:14.794043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.574 [2024-07-24 23:18:14.794056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.574 qpair failed and we were unable to recover it.
00:32:42.574 [2024-07-24 23:18:14.794354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.574 [2024-07-24 23:18:14.794669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.574 [2024-07-24 23:18:14.794681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.574 qpair failed and we were unable to recover it.
00:32:42.574 [2024-07-24 23:18:14.795056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.574 [2024-07-24 23:18:14.795354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.574 [2024-07-24 23:18:14.795366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.575 qpair failed and we were unable to recover it.
00:32:42.575 [2024-07-24 23:18:14.795688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.575 [2024-07-24 23:18:14.795911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.575 [2024-07-24 23:18:14.795923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.575 qpair failed and we were unable to recover it.
00:32:42.575 [2024-07-24 23:18:14.796167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.575 [2024-07-24 23:18:14.796457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.575 [2024-07-24 23:18:14.796470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.575 qpair failed and we were unable to recover it.
00:32:42.575 [2024-07-24 23:18:14.796782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.575 [2024-07-24 23:18:14.797036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.575 [2024-07-24 23:18:14.797048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.575 qpair failed and we were unable to recover it.
00:32:42.575 [2024-07-24 23:18:14.797376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.575 [2024-07-24 23:18:14.797718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.575 [2024-07-24 23:18:14.797730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.575 qpair failed and we were unable to recover it.
00:32:42.575 [2024-07-24 23:18:14.797906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.575 [2024-07-24 23:18:14.798125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.575 [2024-07-24 23:18:14.798137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.575 qpair failed and we were unable to recover it.
00:32:42.575 [2024-07-24 23:18:14.798369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.575 [2024-07-24 23:18:14.798680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.575 [2024-07-24 23:18:14.798692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.575 qpair failed and we were unable to recover it.
00:32:42.575 [2024-07-24 23:18:14.798948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.575 [2024-07-24 23:18:14.799106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.575 [2024-07-24 23:18:14.799118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.575 qpair failed and we were unable to recover it.
00:32:42.575 [2024-07-24 23:18:14.799357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.575 [2024-07-24 23:18:14.799643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.575 [2024-07-24 23:18:14.799656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.575 qpair failed and we were unable to recover it.
00:32:42.575 [2024-07-24 23:18:14.799907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.575 [2024-07-24 23:18:14.800214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.575 [2024-07-24 23:18:14.800228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.575 qpair failed and we were unable to recover it.
00:32:42.575 [2024-07-24 23:18:14.800523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.575 [2024-07-24 23:18:14.800818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.575 [2024-07-24 23:18:14.800830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.575 qpair failed and we were unable to recover it.
00:32:42.575 [2024-07-24 23:18:14.801072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.575 [2024-07-24 23:18:14.801313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.575 [2024-07-24 23:18:14.801325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.575 qpair failed and we were unable to recover it.
00:32:42.575 [2024-07-24 23:18:14.801574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.575 [2024-07-24 23:18:14.801811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.575 [2024-07-24 23:18:14.801825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.575 qpair failed and we were unable to recover it.
00:32:42.575 [2024-07-24 23:18:14.802114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.575 [2024-07-24 23:18:14.802279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.575 [2024-07-24 23:18:14.802291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.575 qpair failed and we were unable to recover it.
00:32:42.575 [2024-07-24 23:18:14.802664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.575 [2024-07-24 23:18:14.802979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.575 [2024-07-24 23:18:14.802993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.575 qpair failed and we were unable to recover it.
00:32:42.575 [2024-07-24 23:18:14.803269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.575 [2024-07-24 23:18:14.803437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.575 [2024-07-24 23:18:14.803450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.575 qpair failed and we were unable to recover it.
00:32:42.575 [2024-07-24 23:18:14.803765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.575 [2024-07-24 23:18:14.803961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.575 [2024-07-24 23:18:14.803974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.575 qpair failed and we were unable to recover it.
00:32:42.575 [2024-07-24 23:18:14.804254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.575 [2024-07-24 23:18:14.804503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.575 [2024-07-24 23:18:14.804517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.575 qpair failed and we were unable to recover it.
00:32:42.575 [2024-07-24 23:18:14.804827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.575 [2024-07-24 23:18:14.805116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.575 [2024-07-24 23:18:14.805128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.575 qpair failed and we were unable to recover it.
00:32:42.575 [2024-07-24 23:18:14.805311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.575 [2024-07-24 23:18:14.805541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.575 [2024-07-24 23:18:14.805554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.575 qpair failed and we were unable to recover it.
00:32:42.575 [2024-07-24 23:18:14.805738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.575 [2024-07-24 23:18:14.805966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.575 [2024-07-24 23:18:14.805979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.575 qpair failed and we were unable to recover it.
00:32:42.575 [2024-07-24 23:18:14.806239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.575 [2024-07-24 23:18:14.806483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.575 [2024-07-24 23:18:14.806496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.575 qpair failed and we were unable to recover it.
00:32:42.575 [2024-07-24 23:18:14.806788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.575 [2024-07-24 23:18:14.807008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.575 [2024-07-24 23:18:14.807020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.575 qpair failed and we were unable to recover it.
00:32:42.575 [2024-07-24 23:18:14.807253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.575 [2024-07-24 23:18:14.807518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.575 [2024-07-24 23:18:14.807531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.575 qpair failed and we were unable to recover it.
00:32:42.575 [2024-07-24 23:18:14.807864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.575 [2024-07-24 23:18:14.808128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.575 [2024-07-24 23:18:14.808141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.575 qpair failed and we were unable to recover it.
00:32:42.575 [2024-07-24 23:18:14.808483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.575 [2024-07-24 23:18:14.808810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.575 [2024-07-24 23:18:14.808823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.575 qpair failed and we were unable to recover it.
00:32:42.575 [2024-07-24 23:18:14.809066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.575 [2024-07-24 23:18:14.809298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.575 [2024-07-24 23:18:14.809311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.575 qpair failed and we were unable to recover it.
00:32:42.575 [2024-07-24 23:18:14.809567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.575 [2024-07-24 23:18:14.809878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.575 [2024-07-24 23:18:14.809892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.575 qpair failed and we were unable to recover it.
00:32:42.575 [2024-07-24 23:18:14.810161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.575 [2024-07-24 23:18:14.810353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.575 [2024-07-24 23:18:14.810366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.575 qpair failed and we were unable to recover it.
00:32:42.576 [2024-07-24 23:18:14.810626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.576 [2024-07-24 23:18:14.810886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.576 [2024-07-24 23:18:14.810901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.576 qpair failed and we were unable to recover it.
00:32:42.576 [2024-07-24 23:18:14.811212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.576 [2024-07-24 23:18:14.811446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.576 [2024-07-24 23:18:14.811459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.576 qpair failed and we were unable to recover it.
00:32:42.576 [2024-07-24 23:18:14.811779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.576 [2024-07-24 23:18:14.811953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.576 [2024-07-24 23:18:14.811965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.576 qpair failed and we were unable to recover it.
00:32:42.576 [2024-07-24 23:18:14.812205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.576 [2024-07-24 23:18:14.812440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.576 [2024-07-24 23:18:14.812452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.576 qpair failed and we were unable to recover it.
00:32:42.576 [2024-07-24 23:18:14.812710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.576 [2024-07-24 23:18:14.812903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.576 [2024-07-24 23:18:14.812915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.576 qpair failed and we were unable to recover it.
00:32:42.576 [2024-07-24 23:18:14.813228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.576 [2024-07-24 23:18:14.813544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.576 [2024-07-24 23:18:14.813556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.576 qpair failed and we were unable to recover it.
00:32:42.576 [2024-07-24 23:18:14.813745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.576 [2024-07-24 23:18:14.814018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.576 [2024-07-24 23:18:14.814030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.576 qpair failed and we were unable to recover it.
00:32:42.576 [2024-07-24 23:18:14.814186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.576 [2024-07-24 23:18:14.814401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.576 [2024-07-24 23:18:14.814413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.576 qpair failed and we were unable to recover it.
00:32:42.576 [2024-07-24 23:18:14.814753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.576 [2024-07-24 23:18:14.814979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.576 [2024-07-24 23:18:14.814991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.576 qpair failed and we were unable to recover it.
00:32:42.576 [2024-07-24 23:18:14.815224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.576 [2024-07-24 23:18:14.815384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.576 [2024-07-24 23:18:14.815396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.576 qpair failed and we were unable to recover it.
00:32:42.576 [2024-07-24 23:18:14.815628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.576 [2024-07-24 23:18:14.815841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.576 [2024-07-24 23:18:14.815853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.576 qpair failed and we were unable to recover it.
00:32:42.576 [2024-07-24 23:18:14.816040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.576 [2024-07-24 23:18:14.816259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.576 [2024-07-24 23:18:14.816271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.576 qpair failed and we were unable to recover it. 00:32:42.576 [2024-07-24 23:18:14.816551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.576 [2024-07-24 23:18:14.816804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.576 [2024-07-24 23:18:14.816817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.576 qpair failed and we were unable to recover it. 00:32:42.576 [2024-07-24 23:18:14.817058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.576 [2024-07-24 23:18:14.817346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.576 [2024-07-24 23:18:14.817358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.576 qpair failed and we were unable to recover it. 00:32:42.576 [2024-07-24 23:18:14.817602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.576 [2024-07-24 23:18:14.817838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.576 [2024-07-24 23:18:14.817852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.576 qpair failed and we were unable to recover it. 
00:32:42.576 [2024-07-24 23:18:14.818075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.576 [2024-07-24 23:18:14.818311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.576 [2024-07-24 23:18:14.818323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.576 qpair failed and we were unable to recover it. 00:32:42.576 [2024-07-24 23:18:14.818658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.576 [2024-07-24 23:18:14.818967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.576 [2024-07-24 23:18:14.818980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.576 qpair failed and we were unable to recover it. 00:32:42.576 [2024-07-24 23:18:14.819276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.576 [2024-07-24 23:18:14.819570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.576 [2024-07-24 23:18:14.819582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.576 qpair failed and we were unable to recover it. 00:32:42.576 [2024-07-24 23:18:14.819806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.576 [2024-07-24 23:18:14.820042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.576 [2024-07-24 23:18:14.820055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.576 qpair failed and we were unable to recover it. 
00:32:42.576 [2024-07-24 23:18:14.820235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.576 [2024-07-24 23:18:14.820605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.576 [2024-07-24 23:18:14.820617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.576 qpair failed and we were unable to recover it. 00:32:42.576 [2024-07-24 23:18:14.820807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.576 [2024-07-24 23:18:14.821048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.576 [2024-07-24 23:18:14.821061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.576 qpair failed and we were unable to recover it. 00:32:42.576 [2024-07-24 23:18:14.821319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.576 [2024-07-24 23:18:14.821571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.576 [2024-07-24 23:18:14.821566] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:32:42.576 [2024-07-24 23:18:14.821583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.576 qpair failed and we were unable to recover it. 00:32:42.576 [2024-07-24 23:18:14.821671] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:42.576 [2024-07-24 23:18:14.821682] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:42.576 [2024-07-24 23:18:14.821691] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:32:42.576 [2024-07-24 23:18:14.821813] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:32:42.576 [2024-07-24 23:18:14.821919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.576 [2024-07-24 23:18:14.821922] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:32:42.576 [2024-07-24 23:18:14.821959] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:32:42.576 [2024-07-24 23:18:14.822096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.576 [2024-07-24 23:18:14.822108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.576 qpair failed and we were unable to recover it. 00:32:42.576 [2024-07-24 23:18:14.821961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:32:42.576 [2024-07-24 23:18:14.822369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.576 [2024-07-24 23:18:14.822633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.576 [2024-07-24 23:18:14.822645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.576 qpair failed and we were unable to recover it. 00:32:42.576 [2024-07-24 23:18:14.822935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.576 [2024-07-24 23:18:14.823111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.576 [2024-07-24 23:18:14.823123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.576 qpair failed and we were unable to recover it. 
00:32:42.576 [2024-07-24 23:18:14.823389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.576 [2024-07-24 23:18:14.823638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.576 [2024-07-24 23:18:14.823650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.576 qpair failed and we were unable to recover it. 00:32:42.576 [2024-07-24 23:18:14.823920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.576 [2024-07-24 23:18:14.824154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.577 [2024-07-24 23:18:14.824166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.577 qpair failed and we were unable to recover it. 00:32:42.577 [2024-07-24 23:18:14.824405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.577 [2024-07-24 23:18:14.824637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.577 [2024-07-24 23:18:14.824650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.577 qpair failed and we were unable to recover it. 00:32:42.577 [2024-07-24 23:18:14.824917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.577 [2024-07-24 23:18:14.825128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.577 [2024-07-24 23:18:14.825139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.577 qpair failed and we were unable to recover it. 
00:32:42.577 [2024-07-24 23:18:14.825358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.577 [2024-07-24 23:18:14.825675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.577 [2024-07-24 23:18:14.825687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.577 qpair failed and we were unable to recover it. 00:32:42.577 [2024-07-24 23:18:14.825980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.577 [2024-07-24 23:18:14.826178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.577 [2024-07-24 23:18:14.826189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.577 qpair failed and we were unable to recover it. 00:32:42.577 [2024-07-24 23:18:14.826433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.577 [2024-07-24 23:18:14.826680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.577 [2024-07-24 23:18:14.826692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.577 qpair failed and we were unable to recover it. 00:32:42.577 [2024-07-24 23:18:14.826955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.577 [2024-07-24 23:18:14.827193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.577 [2024-07-24 23:18:14.827206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.577 qpair failed and we were unable to recover it. 
00:32:42.577 [2024-07-24 23:18:14.827561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.577 [2024-07-24 23:18:14.827813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.577 [2024-07-24 23:18:14.827826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.577 qpair failed and we were unable to recover it. 00:32:42.577 [2024-07-24 23:18:14.828057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.577 [2024-07-24 23:18:14.828295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.577 [2024-07-24 23:18:14.828307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.577 qpair failed and we were unable to recover it. 00:32:42.577 [2024-07-24 23:18:14.828653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.577 [2024-07-24 23:18:14.828875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.577 [2024-07-24 23:18:14.828888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.577 qpair failed and we were unable to recover it. 00:32:42.577 [2024-07-24 23:18:14.829146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.577 [2024-07-24 23:18:14.829387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.577 [2024-07-24 23:18:14.829400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.577 qpair failed and we were unable to recover it. 
00:32:42.577 [2024-07-24 23:18:14.829722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.577 [2024-07-24 23:18:14.830038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.577 [2024-07-24 23:18:14.830050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.577 qpair failed and we were unable to recover it. 00:32:42.577 [2024-07-24 23:18:14.830241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.577 [2024-07-24 23:18:14.830573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.577 [2024-07-24 23:18:14.830586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.577 qpair failed and we were unable to recover it. 00:32:42.577 [2024-07-24 23:18:14.830806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.577 [2024-07-24 23:18:14.831118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.577 [2024-07-24 23:18:14.831130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.577 qpair failed and we were unable to recover it. 00:32:42.577 [2024-07-24 23:18:14.831365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.577 [2024-07-24 23:18:14.831540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.577 [2024-07-24 23:18:14.831552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.577 qpair failed and we were unable to recover it. 
00:32:42.577 [2024-07-24 23:18:14.831894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.577 [2024-07-24 23:18:14.832072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.577 [2024-07-24 23:18:14.832084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.577 qpair failed and we were unable to recover it. 00:32:42.577 [2024-07-24 23:18:14.832387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.577 [2024-07-24 23:18:14.832644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.577 [2024-07-24 23:18:14.832657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.577 qpair failed and we were unable to recover it. 00:32:42.577 [2024-07-24 23:18:14.832927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.577 [2024-07-24 23:18:14.833170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.577 [2024-07-24 23:18:14.833183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.577 qpair failed and we were unable to recover it. 00:32:42.577 [2024-07-24 23:18:14.833382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.577 [2024-07-24 23:18:14.833699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.577 [2024-07-24 23:18:14.833712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.577 qpair failed and we were unable to recover it. 
00:32:42.577 [2024-07-24 23:18:14.833969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.577 [2024-07-24 23:18:14.834234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.577 [2024-07-24 23:18:14.834247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.577 qpair failed and we were unable to recover it. 00:32:42.577 [2024-07-24 23:18:14.834496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.577 [2024-07-24 23:18:14.834829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.577 [2024-07-24 23:18:14.834843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.577 qpair failed and we were unable to recover it. 00:32:42.577 [2024-07-24 23:18:14.835041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.577 [2024-07-24 23:18:14.835233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.577 [2024-07-24 23:18:14.835246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.577 qpair failed and we were unable to recover it. 00:32:42.577 [2024-07-24 23:18:14.835492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.577 [2024-07-24 23:18:14.835800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.577 [2024-07-24 23:18:14.835814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.577 qpair failed and we were unable to recover it. 
00:32:42.577 [2024-07-24 23:18:14.836057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.577 [2024-07-24 23:18:14.836274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.577 [2024-07-24 23:18:14.836287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.577 qpair failed and we were unable to recover it. 00:32:42.577 [2024-07-24 23:18:14.836631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.577 [2024-07-24 23:18:14.836932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.578 [2024-07-24 23:18:14.836948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.578 qpair failed and we were unable to recover it. 00:32:42.578 [2024-07-24 23:18:14.837187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.578 [2024-07-24 23:18:14.837384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.578 [2024-07-24 23:18:14.837396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.578 qpair failed and we were unable to recover it. 00:32:42.578 [2024-07-24 23:18:14.837617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.578 [2024-07-24 23:18:14.837864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.578 [2024-07-24 23:18:14.837877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.578 qpair failed and we were unable to recover it. 
00:32:42.578 [2024-07-24 23:18:14.838115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.578 [2024-07-24 23:18:14.838382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.578 [2024-07-24 23:18:14.838395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.578 qpair failed and we were unable to recover it. 00:32:42.578 [2024-07-24 23:18:14.838685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.578 [2024-07-24 23:18:14.839005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.578 [2024-07-24 23:18:14.839018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.578 qpair failed and we were unable to recover it. 00:32:42.578 [2024-07-24 23:18:14.839196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.578 [2024-07-24 23:18:14.839542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.578 [2024-07-24 23:18:14.839557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.578 qpair failed and we were unable to recover it. 00:32:42.578 [2024-07-24 23:18:14.839803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.578 [2024-07-24 23:18:14.840063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.578 [2024-07-24 23:18:14.840076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.578 qpair failed and we were unable to recover it. 
00:32:42.578 [2024-07-24 23:18:14.840232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.578 [2024-07-24 23:18:14.840510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.578 [2024-07-24 23:18:14.840523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.578 qpair failed and we were unable to recover it. 00:32:42.578 [2024-07-24 23:18:14.840834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.578 [2024-07-24 23:18:14.841076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.578 [2024-07-24 23:18:14.841089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.578 qpair failed and we were unable to recover it. 00:32:42.578 [2024-07-24 23:18:14.841357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.578 [2024-07-24 23:18:14.841639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.578 [2024-07-24 23:18:14.841652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.578 qpair failed and we were unable to recover it. 00:32:42.578 [2024-07-24 23:18:14.841907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.578 [2024-07-24 23:18:14.842168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.578 [2024-07-24 23:18:14.842180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.578 qpair failed and we were unable to recover it. 
00:32:42.578 [2024-07-24 23:18:14.842498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.578 [2024-07-24 23:18:14.842737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.578 [2024-07-24 23:18:14.842751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.578 qpair failed and we were unable to recover it. 00:32:42.578 [2024-07-24 23:18:14.842939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.578 [2024-07-24 23:18:14.843232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.578 [2024-07-24 23:18:14.843244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.578 qpair failed and we were unable to recover it. 00:32:42.578 [2024-07-24 23:18:14.843593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.578 [2024-07-24 23:18:14.843831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.578 [2024-07-24 23:18:14.843845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.578 qpair failed and we were unable to recover it. 00:32:42.578 [2024-07-24 23:18:14.844092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.578 [2024-07-24 23:18:14.844326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.578 [2024-07-24 23:18:14.844339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.578 qpair failed and we were unable to recover it. 
00:32:42.578 [2024-07-24 23:18:14.844510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:32:42.578 [2024-07-24 23:18:14.844681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:32:42.578 [2024-07-24 23:18:14.844695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 
00:32:42.578 qpair failed and we were unable to recover it. 
00:32:42.578 [last message repeated: same posix_sock_create / nvme_tcp_qpair_connect_sock error cycle (errno = 111, tqpair=0x7ff8e8000b90, addr=10.0.0.2, port=4420) recurring through 2024-07-24 23:18:14.891111] 
00:32:42.581 [2024-07-24 23:18:14.891332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.581 [2024-07-24 23:18:14.891679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.581 [2024-07-24 23:18:14.891691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.581 qpair failed and we were unable to recover it. 00:32:42.581 [2024-07-24 23:18:14.891883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.581 [2024-07-24 23:18:14.892072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.581 [2024-07-24 23:18:14.892084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.581 qpair failed and we were unable to recover it. 00:32:42.581 [2024-07-24 23:18:14.892264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.581 [2024-07-24 23:18:14.892599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.581 [2024-07-24 23:18:14.892610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.581 qpair failed and we were unable to recover it. 00:32:42.581 [2024-07-24 23:18:14.892845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.581 [2024-07-24 23:18:14.893136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.581 [2024-07-24 23:18:14.893148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.581 qpair failed and we were unable to recover it. 
00:32:42.581 [2024-07-24 23:18:14.893369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.581 [2024-07-24 23:18:14.893606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.581 [2024-07-24 23:18:14.893618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.581 qpair failed and we were unable to recover it. 00:32:42.581 [2024-07-24 23:18:14.893898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.581 [2024-07-24 23:18:14.894129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.581 [2024-07-24 23:18:14.894140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.581 qpair failed and we were unable to recover it. 00:32:42.581 [2024-07-24 23:18:14.894472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.581 [2024-07-24 23:18:14.894730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.581 [2024-07-24 23:18:14.894742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.581 qpair failed and we were unable to recover it. 00:32:42.581 [2024-07-24 23:18:14.895005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.581 [2024-07-24 23:18:14.895290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.581 [2024-07-24 23:18:14.895302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.581 qpair failed and we were unable to recover it. 
00:32:42.581 [2024-07-24 23:18:14.895531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.581 [2024-07-24 23:18:14.895764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.581 [2024-07-24 23:18:14.895776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.581 qpair failed and we were unable to recover it. 00:32:42.581 [2024-07-24 23:18:14.895931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.581 [2024-07-24 23:18:14.896196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.581 [2024-07-24 23:18:14.896207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.581 qpair failed and we were unable to recover it. 00:32:42.581 [2024-07-24 23:18:14.896448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.581 [2024-07-24 23:18:14.896677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.582 [2024-07-24 23:18:14.896689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.582 qpair failed and we were unable to recover it. 00:32:42.582 [2024-07-24 23:18:14.896952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.582 [2024-07-24 23:18:14.897214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.582 [2024-07-24 23:18:14.897227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.582 qpair failed and we were unable to recover it. 
00:32:42.582 [2024-07-24 23:18:14.897396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.582 [2024-07-24 23:18:14.897568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.582 [2024-07-24 23:18:14.897579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.582 qpair failed and we were unable to recover it. 00:32:42.582 [2024-07-24 23:18:14.897914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.582 [2024-07-24 23:18:14.898213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.582 [2024-07-24 23:18:14.898224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.582 qpair failed and we were unable to recover it. 00:32:42.582 [2024-07-24 23:18:14.898467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.582 [2024-07-24 23:18:14.898762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.582 [2024-07-24 23:18:14.898774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.582 qpair failed and we were unable to recover it. 00:32:42.582 [2024-07-24 23:18:14.899020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.582 [2024-07-24 23:18:14.899234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.582 [2024-07-24 23:18:14.899245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.582 qpair failed and we were unable to recover it. 
00:32:42.582 [2024-07-24 23:18:14.899418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.582 [2024-07-24 23:18:14.899603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.582 [2024-07-24 23:18:14.899615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.582 qpair failed and we were unable to recover it. 00:32:42.582 [2024-07-24 23:18:14.899840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.582 [2024-07-24 23:18:14.900151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.582 [2024-07-24 23:18:14.900162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.582 qpair failed and we were unable to recover it. 00:32:42.582 [2024-07-24 23:18:14.900511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.582 [2024-07-24 23:18:14.900762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.582 [2024-07-24 23:18:14.900774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.582 qpair failed and we were unable to recover it. 00:32:42.582 [2024-07-24 23:18:14.901015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.582 [2024-07-24 23:18:14.901248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.582 [2024-07-24 23:18:14.901260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.582 qpair failed and we were unable to recover it. 
00:32:42.582 [2024-07-24 23:18:14.901518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.582 [2024-07-24 23:18:14.901827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.582 [2024-07-24 23:18:14.901839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.582 qpair failed and we were unable to recover it. 00:32:42.582 [2024-07-24 23:18:14.902106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.582 [2024-07-24 23:18:14.902289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.582 [2024-07-24 23:18:14.902301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.582 qpair failed and we were unable to recover it. 00:32:42.582 [2024-07-24 23:18:14.902628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.582 [2024-07-24 23:18:14.902921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.582 [2024-07-24 23:18:14.902933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.582 qpair failed and we were unable to recover it. 00:32:42.582 [2024-07-24 23:18:14.903120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.582 [2024-07-24 23:18:14.903271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.582 [2024-07-24 23:18:14.903282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.582 qpair failed and we were unable to recover it. 
00:32:42.582 [2024-07-24 23:18:14.903481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.582 [2024-07-24 23:18:14.903791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.582 [2024-07-24 23:18:14.903802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.582 qpair failed and we were unable to recover it. 00:32:42.582 [2024-07-24 23:18:14.904080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.582 [2024-07-24 23:18:14.904314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.582 [2024-07-24 23:18:14.904326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.582 qpair failed and we were unable to recover it. 00:32:42.582 [2024-07-24 23:18:14.904634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.582 [2024-07-24 23:18:14.904936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.582 [2024-07-24 23:18:14.904948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.582 qpair failed and we were unable to recover it. 00:32:42.582 [2024-07-24 23:18:14.905203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.582 [2024-07-24 23:18:14.905397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.582 [2024-07-24 23:18:14.905408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.582 qpair failed and we were unable to recover it. 
00:32:42.582 [2024-07-24 23:18:14.905649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.582 [2024-07-24 23:18:14.905945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.582 [2024-07-24 23:18:14.905958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.582 qpair failed and we were unable to recover it. 00:32:42.582 [2024-07-24 23:18:14.906198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.582 [2024-07-24 23:18:14.906466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.582 [2024-07-24 23:18:14.906478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.582 qpair failed and we were unable to recover it. 00:32:42.582 [2024-07-24 23:18:14.906785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.582 [2024-07-24 23:18:14.907096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.582 [2024-07-24 23:18:14.907108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.582 qpair failed and we were unable to recover it. 00:32:42.582 [2024-07-24 23:18:14.907398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.582 [2024-07-24 23:18:14.907656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.582 [2024-07-24 23:18:14.907668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.582 qpair failed and we were unable to recover it. 
00:32:42.582 [2024-07-24 23:18:14.907938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.582 [2024-07-24 23:18:14.908176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.582 [2024-07-24 23:18:14.908189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.582 qpair failed and we were unable to recover it. 00:32:42.582 [2024-07-24 23:18:14.908522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.582 [2024-07-24 23:18:14.908705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.582 [2024-07-24 23:18:14.908719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.582 qpair failed and we were unable to recover it. 00:32:42.582 [2024-07-24 23:18:14.909024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.582 [2024-07-24 23:18:14.909339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.582 [2024-07-24 23:18:14.909350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.582 qpair failed and we were unable to recover it. 00:32:42.582 [2024-07-24 23:18:14.909665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.582 [2024-07-24 23:18:14.909887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.583 [2024-07-24 23:18:14.909899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.583 qpair failed and we were unable to recover it. 
00:32:42.583 [2024-07-24 23:18:14.910130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.583 [2024-07-24 23:18:14.910436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.583 [2024-07-24 23:18:14.910449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.583 qpair failed and we were unable to recover it. 00:32:42.583 [2024-07-24 23:18:14.910687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.583 [2024-07-24 23:18:14.910896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.583 [2024-07-24 23:18:14.910908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.583 qpair failed and we were unable to recover it. 00:32:42.583 [2024-07-24 23:18:14.911227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.583 [2024-07-24 23:18:14.911543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.583 [2024-07-24 23:18:14.911555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.583 qpair failed and we were unable to recover it. 00:32:42.583 [2024-07-24 23:18:14.911822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.583 [2024-07-24 23:18:14.912010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.583 [2024-07-24 23:18:14.912022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.583 qpair failed and we were unable to recover it. 
00:32:42.583 [2024-07-24 23:18:14.912262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.583 [2024-07-24 23:18:14.912533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.583 [2024-07-24 23:18:14.912545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.583 qpair failed and we were unable to recover it. 00:32:42.583 [2024-07-24 23:18:14.912789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.583 [2024-07-24 23:18:14.912977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.583 [2024-07-24 23:18:14.912989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.583 qpair failed and we were unable to recover it. 00:32:42.583 [2024-07-24 23:18:14.913218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.583 [2024-07-24 23:18:14.913466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.583 [2024-07-24 23:18:14.913477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.583 qpair failed and we were unable to recover it. 00:32:42.583 [2024-07-24 23:18:14.913745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.583 [2024-07-24 23:18:14.914053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.583 [2024-07-24 23:18:14.914066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.583 qpair failed and we were unable to recover it. 
00:32:42.583 [2024-07-24 23:18:14.914359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.583 [2024-07-24 23:18:14.914593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.583 [2024-07-24 23:18:14.914605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.583 qpair failed and we were unable to recover it. 00:32:42.583 [2024-07-24 23:18:14.914843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.583 [2024-07-24 23:18:14.915071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.583 [2024-07-24 23:18:14.915084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.583 qpair failed and we were unable to recover it. 00:32:42.583 [2024-07-24 23:18:14.915342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.583 [2024-07-24 23:18:14.915669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.583 [2024-07-24 23:18:14.915681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.583 qpair failed and we were unable to recover it. 00:32:42.583 [2024-07-24 23:18:14.915906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.583 [2024-07-24 23:18:14.916087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.583 [2024-07-24 23:18:14.916098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.583 qpair failed and we were unable to recover it. 
00:32:42.583 [2024-07-24 23:18:14.916283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.583 [2024-07-24 23:18:14.916513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.583 [2024-07-24 23:18:14.916524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.583 qpair failed and we were unable to recover it. 00:32:42.583 [2024-07-24 23:18:14.916811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.583 [2024-07-24 23:18:14.917045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.583 [2024-07-24 23:18:14.917056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.583 qpair failed and we were unable to recover it. 00:32:42.583 [2024-07-24 23:18:14.917287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.583 [2024-07-24 23:18:14.917624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.583 [2024-07-24 23:18:14.917636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.583 qpair failed and we were unable to recover it. 00:32:42.583 [2024-07-24 23:18:14.917912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.583 [2024-07-24 23:18:14.918201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.583 [2024-07-24 23:18:14.918212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.583 qpair failed and we were unable to recover it. 
00:32:42.583 [2024-07-24 23:18:14.918399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.583 [2024-07-24 23:18:14.918758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.583 [2024-07-24 23:18:14.918770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.583 qpair failed and we were unable to recover it. 00:32:42.583 [2024-07-24 23:18:14.919058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.583 [2024-07-24 23:18:14.919240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.583 [2024-07-24 23:18:14.919252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.583 qpair failed and we were unable to recover it. 00:32:42.583 [2024-07-24 23:18:14.919593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.583 [2024-07-24 23:18:14.919838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.583 [2024-07-24 23:18:14.919851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.583 qpair failed and we were unable to recover it. 00:32:42.583 [2024-07-24 23:18:14.920035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.583 [2024-07-24 23:18:14.920319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.583 [2024-07-24 23:18:14.920331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.583 qpair failed and we were unable to recover it. 
00:32:42.583 [2024-07-24 23:18:14.920590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.583 [2024-07-24 23:18:14.920923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.583 [2024-07-24 23:18:14.920935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.583 qpair failed and we were unable to recover it. 00:32:42.583 [2024-07-24 23:18:14.921171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.583 [2024-07-24 23:18:14.921351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.583 [2024-07-24 23:18:14.921362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.583 qpair failed and we were unable to recover it. 00:32:42.583 [2024-07-24 23:18:14.921622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.583 [2024-07-24 23:18:14.921917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.583 [2024-07-24 23:18:14.921929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.583 qpair failed and we were unable to recover it. 00:32:42.583 [2024-07-24 23:18:14.922193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.583 [2024-07-24 23:18:14.922495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.583 [2024-07-24 23:18:14.922508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.583 qpair failed and we were unable to recover it. 
00:32:42.586 [2024-07-24 23:18:14.965975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.586 [2024-07-24 23:18:14.966162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.586 [2024-07-24 23:18:14.966177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.586 qpair failed and we were unable to recover it. 00:32:42.586 [2024-07-24 23:18:14.966350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.586 [2024-07-24 23:18:14.966525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.586 [2024-07-24 23:18:14.966537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.586 qpair failed and we were unable to recover it. 00:32:42.586 [2024-07-24 23:18:14.966766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.586 [2024-07-24 23:18:14.966973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.586 [2024-07-24 23:18:14.966986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.586 qpair failed and we were unable to recover it. 00:32:42.586 [2024-07-24 23:18:14.967223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.586 [2024-07-24 23:18:14.967374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.586 [2024-07-24 23:18:14.967386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.586 qpair failed and we were unable to recover it. 
00:32:42.586 [2024-07-24 23:18:14.967609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.586 [2024-07-24 23:18:14.967944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.587 [2024-07-24 23:18:14.967957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.587 qpair failed and we were unable to recover it. 00:32:42.587 [2024-07-24 23:18:14.968137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.587 [2024-07-24 23:18:14.968299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.587 [2024-07-24 23:18:14.968311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.587 qpair failed and we were unable to recover it. 00:32:42.587 [2024-07-24 23:18:14.968601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.587 [2024-07-24 23:18:14.968824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.587 [2024-07-24 23:18:14.968836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.587 qpair failed and we were unable to recover it. 00:32:42.587 [2024-07-24 23:18:14.969018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.587 [2024-07-24 23:18:14.969184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.587 [2024-07-24 23:18:14.969196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.587 qpair failed and we were unable to recover it. 
00:32:42.587 [2024-07-24 23:18:14.969368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.587 [2024-07-24 23:18:14.969518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.587 [2024-07-24 23:18:14.969530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.587 qpair failed and we were unable to recover it. 00:32:42.587 [2024-07-24 23:18:14.969753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.587 [2024-07-24 23:18:14.970007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.587 [2024-07-24 23:18:14.970020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.587 qpair failed and we were unable to recover it. 00:32:42.587 [2024-07-24 23:18:14.970251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.587 [2024-07-24 23:18:14.970469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.587 [2024-07-24 23:18:14.970482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.587 qpair failed and we were unable to recover it. 00:32:42.587 [2024-07-24 23:18:14.970831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.587 [2024-07-24 23:18:14.971009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.587 [2024-07-24 23:18:14.971021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.587 qpair failed and we were unable to recover it. 
00:32:42.587 [2024-07-24 23:18:14.971247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.587 [2024-07-24 23:18:14.971427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.587 [2024-07-24 23:18:14.971439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.587 qpair failed and we were unable to recover it. 00:32:42.587 [2024-07-24 23:18:14.971666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.587 [2024-07-24 23:18:14.971903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.587 [2024-07-24 23:18:14.971915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.587 qpair failed and we were unable to recover it. 00:32:42.587 [2024-07-24 23:18:14.972073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.587 [2024-07-24 23:18:14.972226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.587 [2024-07-24 23:18:14.972237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.587 qpair failed and we were unable to recover it. 00:32:42.587 [2024-07-24 23:18:14.972466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.587 [2024-07-24 23:18:14.972626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.587 [2024-07-24 23:18:14.972638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.587 qpair failed and we were unable to recover it. 
00:32:42.587 [2024-07-24 23:18:14.972889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.587 [2024-07-24 23:18:14.973058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.587 [2024-07-24 23:18:14.973070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.587 qpair failed and we were unable to recover it. 00:32:42.587 [2024-07-24 23:18:14.973301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.587 [2024-07-24 23:18:14.973481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.587 [2024-07-24 23:18:14.973493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.587 qpair failed and we were unable to recover it. 00:32:42.587 [2024-07-24 23:18:14.973669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.587 [2024-07-24 23:18:14.973921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.587 [2024-07-24 23:18:14.973933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.587 qpair failed and we were unable to recover it. 00:32:42.587 [2024-07-24 23:18:14.974150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.587 [2024-07-24 23:18:14.974436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.587 [2024-07-24 23:18:14.974449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.587 qpair failed and we were unable to recover it. 
00:32:42.587 [2024-07-24 23:18:14.974619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.587 [2024-07-24 23:18:14.974907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.587 [2024-07-24 23:18:14.974919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.587 qpair failed and we were unable to recover it. 00:32:42.587 [2024-07-24 23:18:14.975235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.587 [2024-07-24 23:18:14.975416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.587 [2024-07-24 23:18:14.975428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.587 qpair failed and we were unable to recover it. 00:32:42.587 [2024-07-24 23:18:14.975692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.587 [2024-07-24 23:18:14.975930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.587 [2024-07-24 23:18:14.975942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.587 qpair failed and we were unable to recover it. 00:32:42.587 [2024-07-24 23:18:14.976254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.587 [2024-07-24 23:18:14.976353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.587 [2024-07-24 23:18:14.976365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.587 qpair failed and we were unable to recover it. 
00:32:42.587 [2024-07-24 23:18:14.976585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.587 [2024-07-24 23:18:14.976796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.587 [2024-07-24 23:18:14.976808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.587 qpair failed and we were unable to recover it. 00:32:42.587 [2024-07-24 23:18:14.976972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.587 [2024-07-24 23:18:14.977259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.587 [2024-07-24 23:18:14.977271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.587 qpair failed and we were unable to recover it. 00:32:42.587 [2024-07-24 23:18:14.977501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.587 [2024-07-24 23:18:14.977649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.587 [2024-07-24 23:18:14.977661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.587 qpair failed and we were unable to recover it. 00:32:42.587 [2024-07-24 23:18:14.977836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.587 [2024-07-24 23:18:14.978026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.587 [2024-07-24 23:18:14.978038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.587 qpair failed and we were unable to recover it. 
00:32:42.587 [2024-07-24 23:18:14.978334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.587 [2024-07-24 23:18:14.978554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.587 [2024-07-24 23:18:14.978566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.587 qpair failed and we were unable to recover it. 00:32:42.587 [2024-07-24 23:18:14.978796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.588 [2024-07-24 23:18:14.978954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.588 [2024-07-24 23:18:14.978965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.588 qpair failed and we were unable to recover it. 00:32:42.588 [2024-07-24 23:18:14.979139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.588 [2024-07-24 23:18:14.979363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.588 [2024-07-24 23:18:14.979375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.588 qpair failed and we were unable to recover it. 00:32:42.588 [2024-07-24 23:18:14.979613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.588 [2024-07-24 23:18:14.979848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.588 [2024-07-24 23:18:14.979860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.588 qpair failed and we were unable to recover it. 
00:32:42.588 [2024-07-24 23:18:14.980153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.588 [2024-07-24 23:18:14.980337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.588 [2024-07-24 23:18:14.980349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.588 qpair failed and we were unable to recover it. 00:32:42.588 [2024-07-24 23:18:14.980581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.588 [2024-07-24 23:18:14.980819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.588 [2024-07-24 23:18:14.980831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.588 qpair failed and we were unable to recover it. 00:32:42.588 [2024-07-24 23:18:14.981075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.588 [2024-07-24 23:18:14.981300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.588 [2024-07-24 23:18:14.981311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.588 qpair failed and we were unable to recover it. 00:32:42.588 [2024-07-24 23:18:14.981548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.588 [2024-07-24 23:18:14.981791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.588 [2024-07-24 23:18:14.981803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.588 qpair failed and we were unable to recover it. 
00:32:42.588 [2024-07-24 23:18:14.982038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.588 [2024-07-24 23:18:14.982192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.588 [2024-07-24 23:18:14.982203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.588 qpair failed and we were unable to recover it. 00:32:42.588 [2024-07-24 23:18:14.982368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.588 [2024-07-24 23:18:14.982528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.588 [2024-07-24 23:18:14.982540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.588 qpair failed and we were unable to recover it. 00:32:42.588 [2024-07-24 23:18:14.982707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.588 [2024-07-24 23:18:14.982862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.588 [2024-07-24 23:18:14.982874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.588 qpair failed and we were unable to recover it. 00:32:42.588 [2024-07-24 23:18:14.983101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.588 [2024-07-24 23:18:14.983254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.588 [2024-07-24 23:18:14.983265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.588 qpair failed and we were unable to recover it. 
00:32:42.588 [2024-07-24 23:18:14.983437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.588 [2024-07-24 23:18:14.983668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.588 [2024-07-24 23:18:14.983680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.588 qpair failed and we were unable to recover it. 00:32:42.588 [2024-07-24 23:18:14.983852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.588 [2024-07-24 23:18:14.984074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.588 [2024-07-24 23:18:14.984086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.588 qpair failed and we were unable to recover it. 00:32:42.588 [2024-07-24 23:18:14.984256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.588 [2024-07-24 23:18:14.984350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.588 [2024-07-24 23:18:14.984362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.588 qpair failed and we were unable to recover it. 00:32:42.588 [2024-07-24 23:18:14.984687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.588 [2024-07-24 23:18:14.984865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.588 [2024-07-24 23:18:14.984877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.588 qpair failed and we were unable to recover it. 
00:32:42.588 [2024-07-24 23:18:14.985167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.588 [2024-07-24 23:18:14.985416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.588 [2024-07-24 23:18:14.985428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.588 qpair failed and we were unable to recover it. 00:32:42.588 [2024-07-24 23:18:14.985548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.588 [2024-07-24 23:18:14.985798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.588 [2024-07-24 23:18:14.985810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.588 qpair failed and we were unable to recover it. 00:32:42.588 [2024-07-24 23:18:14.985972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.588 [2024-07-24 23:18:14.986188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.588 [2024-07-24 23:18:14.986199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.588 qpair failed and we were unable to recover it. 00:32:42.588 [2024-07-24 23:18:14.986356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.588 [2024-07-24 23:18:14.986526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.588 [2024-07-24 23:18:14.986538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.588 qpair failed and we were unable to recover it. 
00:32:42.588 [2024-07-24 23:18:14.986780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.588 [2024-07-24 23:18:14.987001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.588 [2024-07-24 23:18:14.987012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.588 qpair failed and we were unable to recover it. 00:32:42.588 [2024-07-24 23:18:14.987181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.588 [2024-07-24 23:18:14.987415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.588 [2024-07-24 23:18:14.987426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.588 qpair failed and we were unable to recover it. 00:32:42.588 [2024-07-24 23:18:14.987657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.588 [2024-07-24 23:18:14.987990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.857 [2024-07-24 23:18:14.988003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.857 qpair failed and we were unable to recover it. 00:32:42.857 [2024-07-24 23:18:14.988175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.857 [2024-07-24 23:18:14.988396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.857 [2024-07-24 23:18:14.988408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.857 qpair failed and we were unable to recover it. 
00:32:42.857 [2024-07-24 23:18:14.988582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.857 [2024-07-24 23:18:14.988728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.857 [2024-07-24 23:18:14.988740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.857 qpair failed and we were unable to recover it. 00:32:42.857 [2024-07-24 23:18:14.988920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.857 [2024-07-24 23:18:14.989065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.857 [2024-07-24 23:18:14.989076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.857 qpair failed and we were unable to recover it. 00:32:42.857 [2024-07-24 23:18:14.989257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.857 [2024-07-24 23:18:14.989438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.857 [2024-07-24 23:18:14.989450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.857 qpair failed and we were unable to recover it. 00:32:42.857 [2024-07-24 23:18:14.989674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.857 [2024-07-24 23:18:14.989863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.857 [2024-07-24 23:18:14.989885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.857 qpair failed and we were unable to recover it. 
00:32:42.857 [2024-07-24 23:18:14.990102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.857 [2024-07-24 23:18:14.990335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.857 [2024-07-24 23:18:14.990346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.857 qpair failed and we were unable to recover it. 00:32:42.857 [2024-07-24 23:18:14.990502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.857 [2024-07-24 23:18:14.990721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.857 [2024-07-24 23:18:14.990733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.857 qpair failed and we were unable to recover it. 00:32:42.857 [2024-07-24 23:18:14.990969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.857 [2024-07-24 23:18:14.991140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.857 [2024-07-24 23:18:14.991152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.857 qpair failed and we were unable to recover it. 00:32:42.857 [2024-07-24 23:18:14.991371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.857 [2024-07-24 23:18:14.991649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.857 [2024-07-24 23:18:14.991661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.857 qpair failed and we were unable to recover it. 
00:32:42.857 [2024-07-24 23:18:14.991897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.857 [2024-07-24 23:18:14.992060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.857 [2024-07-24 23:18:14.992072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.857 qpair failed and we were unable to recover it. 00:32:42.857 [2024-07-24 23:18:14.992241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.857 [2024-07-24 23:18:14.992458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.857 [2024-07-24 23:18:14.992470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.857 qpair failed and we were unable to recover it. 00:32:42.857 [2024-07-24 23:18:14.992729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.857 [2024-07-24 23:18:14.992965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.857 [2024-07-24 23:18:14.992977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.857 qpair failed and we were unable to recover it. 00:32:42.857 [2024-07-24 23:18:14.993151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.857 [2024-07-24 23:18:14.993380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.857 [2024-07-24 23:18:14.993391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.857 qpair failed and we were unable to recover it. 
00:32:42.857 [2024-07-24 23:18:14.993550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.857 [2024-07-24 23:18:14.993702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.857 [2024-07-24 23:18:14.993724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.857 qpair failed and we were unable to recover it. 00:32:42.858 [2024-07-24 23:18:14.993882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.858 [2024-07-24 23:18:14.994138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.858 [2024-07-24 23:18:14.994149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.858 qpair failed and we were unable to recover it. 00:32:42.858 [2024-07-24 23:18:14.994302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.858 [2024-07-24 23:18:14.994606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.858 [2024-07-24 23:18:14.994618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.858 qpair failed and we were unable to recover it. 00:32:42.858 [2024-07-24 23:18:14.994842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.858 [2024-07-24 23:18:14.995064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.858 [2024-07-24 23:18:14.995076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.858 qpair failed and we were unable to recover it. 
00:32:42.858 [2024-07-24 23:18:14.995250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.858 [2024-07-24 23:18:14.995496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.858 [2024-07-24 23:18:14.995507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.858 qpair failed and we were unable to recover it. 00:32:42.858 [2024-07-24 23:18:14.995731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.858 [2024-07-24 23:18:14.995980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.858 [2024-07-24 23:18:14.995992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.858 qpair failed and we were unable to recover it. 00:32:42.858 [2024-07-24 23:18:14.996279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.858 [2024-07-24 23:18:14.996515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.858 [2024-07-24 23:18:14.996527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.858 qpair failed and we were unable to recover it. 00:32:42.858 [2024-07-24 23:18:14.996808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.858 [2024-07-24 23:18:14.997050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.858 [2024-07-24 23:18:14.997061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.858 qpair failed and we were unable to recover it. 
00:32:42.858 [2024-07-24 23:18:14.997251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.858 [2024-07-24 23:18:14.997498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.858 [2024-07-24 23:18:14.997510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.858 qpair failed and we were unable to recover it. 00:32:42.858 [2024-07-24 23:18:14.997767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.858 [2024-07-24 23:18:14.998005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.858 [2024-07-24 23:18:14.998018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.858 qpair failed and we were unable to recover it. 00:32:42.858 [2024-07-24 23:18:14.998212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.858 [2024-07-24 23:18:14.998481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.858 [2024-07-24 23:18:14.998492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.858 qpair failed and we were unable to recover it. 00:32:42.858 [2024-07-24 23:18:14.998724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.858 [2024-07-24 23:18:14.998896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.858 [2024-07-24 23:18:14.998909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.858 qpair failed and we were unable to recover it. 
00:32:42.858 [2024-07-24 23:18:14.999082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.858 [2024-07-24 23:18:14.999265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.858 [2024-07-24 23:18:14.999278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.858 qpair failed and we were unable to recover it. 00:32:42.858 [2024-07-24 23:18:14.999659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.858 [2024-07-24 23:18:14.999900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.858 [2024-07-24 23:18:14.999912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.858 qpair failed and we were unable to recover it. 00:32:42.858 [2024-07-24 23:18:15.000234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.858 [2024-07-24 23:18:15.000578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.858 [2024-07-24 23:18:15.000590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.858 qpair failed and we were unable to recover it. 00:32:42.858 [2024-07-24 23:18:15.000940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.858 [2024-07-24 23:18:15.001226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.858 [2024-07-24 23:18:15.001237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.858 qpair failed and we were unable to recover it. 
00:32:42.858 [2024-07-24 23:18:15.001426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.858 [2024-07-24 23:18:15.001696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.858 [2024-07-24 23:18:15.001709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.858 qpair failed and we were unable to recover it. 00:32:42.858 [2024-07-24 23:18:15.001962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.858 [2024-07-24 23:18:15.002200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.858 [2024-07-24 23:18:15.002212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.858 qpair failed and we were unable to recover it. 00:32:42.858 [2024-07-24 23:18:15.002422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.858 [2024-07-24 23:18:15.002739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.858 [2024-07-24 23:18:15.002751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.858 qpair failed and we were unable to recover it. 00:32:42.858 [2024-07-24 23:18:15.002991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.858 [2024-07-24 23:18:15.003256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.858 [2024-07-24 23:18:15.003268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.858 qpair failed and we were unable to recover it. 
00:32:42.858 [2024-07-24 23:18:15.003586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.858 [2024-07-24 23:18:15.003883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.858 [2024-07-24 23:18:15.003895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.858 qpair failed and we were unable to recover it. 00:32:42.858 [2024-07-24 23:18:15.004151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.858 [2024-07-24 23:18:15.004393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.858 [2024-07-24 23:18:15.004405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.858 qpair failed and we were unable to recover it. 00:32:42.858 [2024-07-24 23:18:15.004625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.858 [2024-07-24 23:18:15.004928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.858 [2024-07-24 23:18:15.004940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.858 qpair failed and we were unable to recover it. 00:32:42.858 [2024-07-24 23:18:15.005125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.858 [2024-07-24 23:18:15.005308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.858 [2024-07-24 23:18:15.005319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.858 qpair failed and we were unable to recover it. 
00:32:42.858 [2024-07-24 23:18:15.005557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.858 [2024-07-24 23:18:15.005864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.858 [2024-07-24 23:18:15.005876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.858 qpair failed and we were unable to recover it. 00:32:42.858 [2024-07-24 23:18:15.006144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.858 [2024-07-24 23:18:15.006461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.858 [2024-07-24 23:18:15.006473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.858 qpair failed and we were unable to recover it. 00:32:42.858 [2024-07-24 23:18:15.006709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.858 [2024-07-24 23:18:15.006932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.858 [2024-07-24 23:18:15.006944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.858 qpair failed and we were unable to recover it. 00:32:42.858 [2024-07-24 23:18:15.007182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.858 [2024-07-24 23:18:15.007510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.858 [2024-07-24 23:18:15.007522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.858 qpair failed and we were unable to recover it. 
00:32:42.858 [2024-07-24 23:18:15.007761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.858 [2024-07-24 23:18:15.008073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.858 [2024-07-24 23:18:15.008085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.858 qpair failed and we were unable to recover it. 00:32:42.858 [2024-07-24 23:18:15.008328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.859 [2024-07-24 23:18:15.008557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.859 [2024-07-24 23:18:15.008569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.859 qpair failed and we were unable to recover it. 00:32:42.859 [2024-07-24 23:18:15.008810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.859 [2024-07-24 23:18:15.009073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.859 [2024-07-24 23:18:15.009084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.859 qpair failed and we were unable to recover it. 00:32:42.859 [2024-07-24 23:18:15.009323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.859 [2024-07-24 23:18:15.009607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.859 [2024-07-24 23:18:15.009619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.859 qpair failed and we were unable to recover it. 
00:32:42.859 [2024-07-24 23:18:15.009951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.859 [2024-07-24 23:18:15.010258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.859 [2024-07-24 23:18:15.010269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.859 qpair failed and we were unable to recover it. 00:32:42.859 [2024-07-24 23:18:15.010589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.859 [2024-07-24 23:18:15.010885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.859 [2024-07-24 23:18:15.010897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.859 qpair failed and we were unable to recover it. 00:32:42.859 [2024-07-24 23:18:15.011235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.859 [2024-07-24 23:18:15.011440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.859 [2024-07-24 23:18:15.011451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.859 qpair failed and we were unable to recover it. 00:32:42.859 [2024-07-24 23:18:15.011739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.859 [2024-07-24 23:18:15.011904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.859 [2024-07-24 23:18:15.011916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.859 qpair failed and we were unable to recover it. 
00:32:42.859 [2024-07-24 23:18:15.012104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.859 [2024-07-24 23:18:15.012284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.859 [2024-07-24 23:18:15.012296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.859 qpair failed and we were unable to recover it. 00:32:42.859 [2024-07-24 23:18:15.012625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.859 [2024-07-24 23:18:15.012798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.859 [2024-07-24 23:18:15.012810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.859 qpair failed and we were unable to recover it. 00:32:42.859 [2024-07-24 23:18:15.013156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.859 [2024-07-24 23:18:15.013463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.859 [2024-07-24 23:18:15.013475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.859 qpair failed and we were unable to recover it. 00:32:42.859 [2024-07-24 23:18:15.013740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.859 [2024-07-24 23:18:15.014054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.859 [2024-07-24 23:18:15.014066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.859 qpair failed and we were unable to recover it. 
00:32:42.859 [2024-07-24 23:18:15.014369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.859 [2024-07-24 23:18:15.014675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.859 [2024-07-24 23:18:15.014686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.859 qpair failed and we were unable to recover it. 00:32:42.859 [2024-07-24 23:18:15.014964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.859 [2024-07-24 23:18:15.015232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.859 [2024-07-24 23:18:15.015244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.859 qpair failed and we were unable to recover it. 00:32:42.859 [2024-07-24 23:18:15.015542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.859 [2024-07-24 23:18:15.015780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.859 [2024-07-24 23:18:15.015792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.859 qpair failed and we were unable to recover it. 00:32:42.859 [2024-07-24 23:18:15.016022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.859 [2024-07-24 23:18:15.016261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.859 [2024-07-24 23:18:15.016272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.859 qpair failed and we were unable to recover it. 
00:32:42.859 [2024-07-24 23:18:15.016518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.859 [2024-07-24 23:18:15.016697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.859 [2024-07-24 23:18:15.016709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.859 qpair failed and we were unable to recover it. 00:32:42.859 [2024-07-24 23:18:15.016980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.859 [2024-07-24 23:18:15.017246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.859 [2024-07-24 23:18:15.017258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.859 qpair failed and we were unable to recover it. 00:32:42.859 [2024-07-24 23:18:15.017616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.859 [2024-07-24 23:18:15.017919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.859 [2024-07-24 23:18:15.017931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.859 qpair failed and we were unable to recover it. 00:32:42.859 [2024-07-24 23:18:15.018240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.859 [2024-07-24 23:18:15.018566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.859 [2024-07-24 23:18:15.018578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.859 qpair failed and we were unable to recover it. 
00:32:42.859 [2024-07-24 23:18:15.018888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.859 [2024-07-24 23:18:15.019175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.859 [2024-07-24 23:18:15.019186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.859 qpair failed and we were unable to recover it. 00:32:42.859 [2024-07-24 23:18:15.019477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.859 [2024-07-24 23:18:15.019734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.859 [2024-07-24 23:18:15.019746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.859 qpair failed and we were unable to recover it. 00:32:42.859 [2024-07-24 23:18:15.019978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.859 [2024-07-24 23:18:15.020232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.859 [2024-07-24 23:18:15.020243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.859 qpair failed and we were unable to recover it. 00:32:42.859 [2024-07-24 23:18:15.020511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.859 [2024-07-24 23:18:15.020823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.859 [2024-07-24 23:18:15.020835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.859 qpair failed and we were unable to recover it. 
00:32:42.859 [2024-07-24 23:18:15.021072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.859 [2024-07-24 23:18:15.021387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.859 [2024-07-24 23:18:15.021399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.859 qpair failed and we were unable to recover it. 00:32:42.859 [2024-07-24 23:18:15.021705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.859 [2024-07-24 23:18:15.021959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.859 [2024-07-24 23:18:15.021971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.859 qpair failed and we were unable to recover it. 00:32:42.859 [2024-07-24 23:18:15.022214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.859 [2024-07-24 23:18:15.022537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.859 [2024-07-24 23:18:15.022549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.859 qpair failed and we were unable to recover it. 00:32:42.859 [2024-07-24 23:18:15.022874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.859 [2024-07-24 23:18:15.023115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.859 [2024-07-24 23:18:15.023126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.859 qpair failed and we were unable to recover it. 
00:32:42.859 [2024-07-24 23:18:15.023417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.859 [2024-07-24 23:18:15.023632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.859 [2024-07-24 23:18:15.023643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.859 qpair failed and we were unable to recover it. 00:32:42.859 [2024-07-24 23:18:15.023932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.859 [2024-07-24 23:18:15.024242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.859 [2024-07-24 23:18:15.024254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.860 qpair failed and we were unable to recover it. 00:32:42.860 [2024-07-24 23:18:15.024508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.860 [2024-07-24 23:18:15.024835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.860 [2024-07-24 23:18:15.024847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.860 qpair failed and we were unable to recover it. 00:32:42.860 [2024-07-24 23:18:15.025089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.860 [2024-07-24 23:18:15.025397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.860 [2024-07-24 23:18:15.025409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.860 qpair failed and we were unable to recover it. 
00:32:42.860 [2024-07-24 23:18:15.025656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.860 [2024-07-24 23:18:15.025969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.860 [2024-07-24 23:18:15.025981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.860 qpair failed and we were unable to recover it. 00:32:42.860 [2024-07-24 23:18:15.026308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.860 [2024-07-24 23:18:15.026531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.860 [2024-07-24 23:18:15.026544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.860 qpair failed and we were unable to recover it. 00:32:42.860 [2024-07-24 23:18:15.026772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.860 [2024-07-24 23:18:15.027011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.860 [2024-07-24 23:18:15.027024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.860 qpair failed and we were unable to recover it. 00:32:42.860 [2024-07-24 23:18:15.027287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.860 [2024-07-24 23:18:15.027602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.860 [2024-07-24 23:18:15.027615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.860 qpair failed and we were unable to recover it. 
00:32:42.860 [2024-07-24 23:18:15.027868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.860 [2024-07-24 23:18:15.028062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.860 [2024-07-24 23:18:15.028074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.860 qpair failed and we were unable to recover it. 00:32:42.860 [2024-07-24 23:18:15.028313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.860 [2024-07-24 23:18:15.028573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.860 [2024-07-24 23:18:15.028585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.860 qpair failed and we were unable to recover it. 00:32:42.860 [2024-07-24 23:18:15.028883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.860 [2024-07-24 23:18:15.029067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.860 [2024-07-24 23:18:15.029079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.860 qpair failed and we were unable to recover it. 00:32:42.860 [2024-07-24 23:18:15.029253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.860 [2024-07-24 23:18:15.029560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.860 [2024-07-24 23:18:15.029574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.860 qpair failed and we were unable to recover it. 
00:32:42.860 [2024-07-24 23:18:15.029885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.860 [2024-07-24 23:18:15.030149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.860 [2024-07-24 23:18:15.030162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.860 qpair failed and we were unable to recover it. 00:32:42.860 [2024-07-24 23:18:15.030466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.860 [2024-07-24 23:18:15.030699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.860 [2024-07-24 23:18:15.030711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.860 qpair failed and we were unable to recover it. 00:32:42.860 [2024-07-24 23:18:15.030913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.860 [2024-07-24 23:18:15.031198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.860 [2024-07-24 23:18:15.031209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.860 qpair failed and we were unable to recover it. 00:32:42.860 [2024-07-24 23:18:15.031498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.860 [2024-07-24 23:18:15.031675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.860 [2024-07-24 23:18:15.031686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.860 qpair failed and we were unable to recover it. 
00:32:42.860 [2024-07-24 23:18:15.031928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.860 [2024-07-24 23:18:15.032115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.860 [2024-07-24 23:18:15.032127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.860 qpair failed and we were unable to recover it.
00:32:42.860 [2024-07-24 23:18:15.032411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.860 [2024-07-24 23:18:15.032749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.860 [2024-07-24 23:18:15.032761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.860 qpair failed and we were unable to recover it.
00:32:42.860 [2024-07-24 23:18:15.032984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.860 [2024-07-24 23:18:15.033297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.860 [2024-07-24 23:18:15.033309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.860 qpair failed and we were unable to recover it.
00:32:42.860 [2024-07-24 23:18:15.033610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.860 [2024-07-24 23:18:15.033830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.860 [2024-07-24 23:18:15.033842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.860 qpair failed and we were unable to recover it.
00:32:42.860 [2024-07-24 23:18:15.034139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.860 [2024-07-24 23:18:15.034334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.860 [2024-07-24 23:18:15.034346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.860 qpair failed and we were unable to recover it.
00:32:42.860 [2024-07-24 23:18:15.034592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.860 [2024-07-24 23:18:15.034852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.860 [2024-07-24 23:18:15.034866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.860 qpair failed and we were unable to recover it.
00:32:42.860 [2024-07-24 23:18:15.035085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.860 [2024-07-24 23:18:15.035385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.860 [2024-07-24 23:18:15.035396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.860 qpair failed and we were unable to recover it.
00:32:42.860 [2024-07-24 23:18:15.035694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.860 [2024-07-24 23:18:15.035862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.860 [2024-07-24 23:18:15.035874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.860 qpair failed and we were unable to recover it.
00:32:42.860 [2024-07-24 23:18:15.036116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.860 [2024-07-24 23:18:15.036348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.860 [2024-07-24 23:18:15.036359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.860 qpair failed and we were unable to recover it.
00:32:42.860 [2024-07-24 23:18:15.036593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.860 [2024-07-24 23:18:15.036903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.860 [2024-07-24 23:18:15.036917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.860 qpair failed and we were unable to recover it.
00:32:42.860 [2024-07-24 23:18:15.037241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.860 [2024-07-24 23:18:15.037550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.860 [2024-07-24 23:18:15.037562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.860 qpair failed and we were unable to recover it.
00:32:42.860 [2024-07-24 23:18:15.037875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.860 [2024-07-24 23:18:15.038093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.860 [2024-07-24 23:18:15.038105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.860 qpair failed and we were unable to recover it.
00:32:42.860 [2024-07-24 23:18:15.038347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.860 [2024-07-24 23:18:15.038633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.860 [2024-07-24 23:18:15.038645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.860 qpair failed and we were unable to recover it.
00:32:42.860 [2024-07-24 23:18:15.038958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.860 [2024-07-24 23:18:15.039207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.860 [2024-07-24 23:18:15.039218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.860 qpair failed and we were unable to recover it.
00:32:42.860 [2024-07-24 23:18:15.039447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.861 [2024-07-24 23:18:15.039709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.861 [2024-07-24 23:18:15.039723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.861 qpair failed and we were unable to recover it.
00:32:42.861 [2024-07-24 23:18:15.039891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.861 [2024-07-24 23:18:15.040180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.861 [2024-07-24 23:18:15.040196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.861 qpair failed and we were unable to recover it.
00:32:42.861 [2024-07-24 23:18:15.040437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.861 [2024-07-24 23:18:15.040664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.861 [2024-07-24 23:18:15.040676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.861 qpair failed and we were unable to recover it.
00:32:42.861 [2024-07-24 23:18:15.040947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.861 [2024-07-24 23:18:15.041111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.861 [2024-07-24 23:18:15.041123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.861 qpair failed and we were unable to recover it.
00:32:42.861 [2024-07-24 23:18:15.041435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.861 [2024-07-24 23:18:15.041675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.861 [2024-07-24 23:18:15.041687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.861 qpair failed and we were unable to recover it.
00:32:42.861 [2024-07-24 23:18:15.042027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.861 [2024-07-24 23:18:15.042335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.861 [2024-07-24 23:18:15.042347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.861 qpair failed and we were unable to recover it.
00:32:42.861 [2024-07-24 23:18:15.042596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.861 [2024-07-24 23:18:15.042766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.861 [2024-07-24 23:18:15.042778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.861 qpair failed and we were unable to recover it.
00:32:42.861 [2024-07-24 23:18:15.042953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.861 [2024-07-24 23:18:15.043240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.861 [2024-07-24 23:18:15.043252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.861 qpair failed and we were unable to recover it.
00:32:42.861 [2024-07-24 23:18:15.043574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.861 [2024-07-24 23:18:15.043798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.861 [2024-07-24 23:18:15.043810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.861 qpair failed and we were unable to recover it.
00:32:42.861 [2024-07-24 23:18:15.044074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.861 [2024-07-24 23:18:15.044319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.861 [2024-07-24 23:18:15.044331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.861 qpair failed and we were unable to recover it.
00:32:42.861 [2024-07-24 23:18:15.044500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.861 [2024-07-24 23:18:15.044809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.861 [2024-07-24 23:18:15.044822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.861 qpair failed and we were unable to recover it.
00:32:42.861 [2024-07-24 23:18:15.045175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.861 [2024-07-24 23:18:15.045412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.861 [2024-07-24 23:18:15.045426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.861 qpair failed and we were unable to recover it.
00:32:42.861 [2024-07-24 23:18:15.045717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.861 [2024-07-24 23:18:15.045982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.861 [2024-07-24 23:18:15.045994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.861 qpair failed and we were unable to recover it.
00:32:42.861 [2024-07-24 23:18:15.046233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.861 [2024-07-24 23:18:15.046471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.861 [2024-07-24 23:18:15.046483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.861 qpair failed and we were unable to recover it.
00:32:42.861 [2024-07-24 23:18:15.046770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.861 [2024-07-24 23:18:15.046949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.861 [2024-07-24 23:18:15.046961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.861 qpair failed and we were unable to recover it.
00:32:42.861 [2024-07-24 23:18:15.047193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.861 [2024-07-24 23:18:15.047426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.861 [2024-07-24 23:18:15.047438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.861 qpair failed and we were unable to recover it.
00:32:42.861 [2024-07-24 23:18:15.047680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.861 [2024-07-24 23:18:15.047966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.861 [2024-07-24 23:18:15.047978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.861 qpair failed and we were unable to recover it.
00:32:42.861 [2024-07-24 23:18:15.048229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.861 [2024-07-24 23:18:15.048464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.861 [2024-07-24 23:18:15.048476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.861 qpair failed and we were unable to recover it.
00:32:42.861 [2024-07-24 23:18:15.048634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.861 [2024-07-24 23:18:15.048872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.861 [2024-07-24 23:18:15.048884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.861 qpair failed and we were unable to recover it.
00:32:42.861 [2024-07-24 23:18:15.049149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.861 [2024-07-24 23:18:15.049449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.861 [2024-07-24 23:18:15.049460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.861 qpair failed and we were unable to recover it.
00:32:42.861 [2024-07-24 23:18:15.049720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.861 [2024-07-24 23:18:15.049959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.861 [2024-07-24 23:18:15.049971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.861 qpair failed and we were unable to recover it.
00:32:42.861 [2024-07-24 23:18:15.050196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.861 [2024-07-24 23:18:15.050443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.861 [2024-07-24 23:18:15.050455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.861 qpair failed and we were unable to recover it.
00:32:42.861 [2024-07-24 23:18:15.050635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.861 [2024-07-24 23:18:15.050889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.861 [2024-07-24 23:18:15.050901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.861 qpair failed and we were unable to recover it.
00:32:42.861 [2024-07-24 23:18:15.051066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.861 [2024-07-24 23:18:15.051392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.861 [2024-07-24 23:18:15.051404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.861 qpair failed and we were unable to recover it.
00:32:42.861 [2024-07-24 23:18:15.051722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.861 [2024-07-24 23:18:15.051976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.861 [2024-07-24 23:18:15.051988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.861 qpair failed and we were unable to recover it.
00:32:42.861 [2024-07-24 23:18:15.052289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.861 [2024-07-24 23:18:15.052573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.861 [2024-07-24 23:18:15.052585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.861 qpair failed and we were unable to recover it.
00:32:42.861 [2024-07-24 23:18:15.052815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.861 [2024-07-24 23:18:15.053047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.861 [2024-07-24 23:18:15.053059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.861 qpair failed and we were unable to recover it.
00:32:42.861 [2024-07-24 23:18:15.053293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.861 [2024-07-24 23:18:15.053604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.861 [2024-07-24 23:18:15.053615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.862 qpair failed and we were unable to recover it.
00:32:42.862 [2024-07-24 23:18:15.053941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.862 [2024-07-24 23:18:15.054252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.862 [2024-07-24 23:18:15.054263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.862 qpair failed and we were unable to recover it.
00:32:42.862 [2024-07-24 23:18:15.054643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.862 [2024-07-24 23:18:15.054960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.862 [2024-07-24 23:18:15.054972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.862 qpair failed and we were unable to recover it.
00:32:42.862 [2024-07-24 23:18:15.055271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.862 [2024-07-24 23:18:15.055546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.862 [2024-07-24 23:18:15.055558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.862 qpair failed and we were unable to recover it.
00:32:42.862 [2024-07-24 23:18:15.055878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.862 [2024-07-24 23:18:15.056179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.862 [2024-07-24 23:18:15.056191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.862 qpair failed and we were unable to recover it.
00:32:42.862 [2024-07-24 23:18:15.056504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.862 [2024-07-24 23:18:15.056736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.862 [2024-07-24 23:18:15.056749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.862 qpair failed and we were unable to recover it.
00:32:42.862 [2024-07-24 23:18:15.057043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.862 [2024-07-24 23:18:15.057328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.862 [2024-07-24 23:18:15.057340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.862 qpair failed and we were unable to recover it.
00:32:42.862 [2024-07-24 23:18:15.057630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.862 [2024-07-24 23:18:15.057897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.862 [2024-07-24 23:18:15.057909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.862 qpair failed and we were unable to recover it.
00:32:42.862 [2024-07-24 23:18:15.058207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.862 [2024-07-24 23:18:15.058430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.862 [2024-07-24 23:18:15.058442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.862 qpair failed and we were unable to recover it.
00:32:42.862 [2024-07-24 23:18:15.058675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.862 [2024-07-24 23:18:15.058964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.862 [2024-07-24 23:18:15.058977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.862 qpair failed and we were unable to recover it.
00:32:42.862 [2024-07-24 23:18:15.059234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.862 [2024-07-24 23:18:15.059416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.862 [2024-07-24 23:18:15.059428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.862 qpair failed and we were unable to recover it.
00:32:42.862 [2024-07-24 23:18:15.059679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.862 [2024-07-24 23:18:15.059976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.862 [2024-07-24 23:18:15.059988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.862 qpair failed and we were unable to recover it.
00:32:42.862 [2024-07-24 23:18:15.060315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.862 [2024-07-24 23:18:15.060646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.862 [2024-07-24 23:18:15.060658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.862 qpair failed and we were unable to recover it.
00:32:42.862 [2024-07-24 23:18:15.060894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.862 [2024-07-24 23:18:15.061087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.862 [2024-07-24 23:18:15.061099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.862 qpair failed and we were unable to recover it.
00:32:42.862 [2024-07-24 23:18:15.061367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.862 [2024-07-24 23:18:15.061654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.862 [2024-07-24 23:18:15.061667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.862 qpair failed and we were unable to recover it.
00:32:42.862 [2024-07-24 23:18:15.061981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.862 [2024-07-24 23:18:15.062286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.862 [2024-07-24 23:18:15.062297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.862 qpair failed and we were unable to recover it.
00:32:42.862 [2024-07-24 23:18:15.062622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.862 [2024-07-24 23:18:15.062955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.862 [2024-07-24 23:18:15.062967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.862 qpair failed and we were unable to recover it.
00:32:42.862 [2024-07-24 23:18:15.063202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.862 [2024-07-24 23:18:15.063496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.862 [2024-07-24 23:18:15.063507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.862 qpair failed and we were unable to recover it.
00:32:42.862 [2024-07-24 23:18:15.063818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.862 [2024-07-24 23:18:15.064055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.862 [2024-07-24 23:18:15.064067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.862 qpair failed and we were unable to recover it.
00:32:42.862 [2024-07-24 23:18:15.064379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.862 [2024-07-24 23:18:15.064713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.862 [2024-07-24 23:18:15.064729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.862 qpair failed and we were unable to recover it.
00:32:42.862 [2024-07-24 23:18:15.065042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.862 [2024-07-24 23:18:15.065274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.862 [2024-07-24 23:18:15.065285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.862 qpair failed and we were unable to recover it.
00:32:42.862 [2024-07-24 23:18:15.065584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.862 [2024-07-24 23:18:15.065870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.862 [2024-07-24 23:18:15.065883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.862 qpair failed and we were unable to recover it.
00:32:42.862 [2024-07-24 23:18:15.066052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.862 [2024-07-24 23:18:15.066272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.862 [2024-07-24 23:18:15.066284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.862 qpair failed and we were unable to recover it.
00:32:42.862 [2024-07-24 23:18:15.066558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.862 [2024-07-24 23:18:15.066883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.862 [2024-07-24 23:18:15.066895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.862 qpair failed and we were unable to recover it.
00:32:42.862 [2024-07-24 23:18:15.067198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.862 [2024-07-24 23:18:15.067509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.862 [2024-07-24 23:18:15.067521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.862 qpair failed and we were unable to recover it.
00:32:42.862 [2024-07-24 23:18:15.067834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.863 [2024-07-24 23:18:15.068014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.863 [2024-07-24 23:18:15.068027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.863 qpair failed and we were unable to recover it.
00:32:42.863 [2024-07-24 23:18:15.068268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.863 [2024-07-24 23:18:15.068604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.863 [2024-07-24 23:18:15.068616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.863 qpair failed and we were unable to recover it.
00:32:42.863 [2024-07-24 23:18:15.068936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.863 [2024-07-24 23:18:15.069113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.863 [2024-07-24 23:18:15.069125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.863 qpair failed and we were unable to recover it.
00:32:42.863 [2024-07-24 23:18:15.069384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.863 [2024-07-24 23:18:15.069674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.863 [2024-07-24 23:18:15.069686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.863 qpair failed and we were unable to recover it.
00:32:42.863 [2024-07-24 23:18:15.069938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.863 [2024-07-24 23:18:15.070231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.863 [2024-07-24 23:18:15.070243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.863 qpair failed and we were unable to recover it.
00:32:42.863 [2024-07-24 23:18:15.070538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.863 [2024-07-24 23:18:15.070824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.863 [2024-07-24 23:18:15.070836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.863 qpair failed and we were unable to recover it.
00:32:42.863 [2024-07-24 23:18:15.071144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.863 [2024-07-24 23:18:15.071384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.863 [2024-07-24 23:18:15.071395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.863 qpair failed and we were unable to recover it. 00:32:42.863 [2024-07-24 23:18:15.071621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.863 [2024-07-24 23:18:15.071914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.863 [2024-07-24 23:18:15.071926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.863 qpair failed and we were unable to recover it. 00:32:42.863 [2024-07-24 23:18:15.072242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.863 [2024-07-24 23:18:15.072520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.863 [2024-07-24 23:18:15.072531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.863 qpair failed and we were unable to recover it. 00:32:42.863 [2024-07-24 23:18:15.072869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.863 [2024-07-24 23:18:15.073037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.863 [2024-07-24 23:18:15.073049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.863 qpair failed and we were unable to recover it. 
00:32:42.863 [2024-07-24 23:18:15.073311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.863 [2024-07-24 23:18:15.073477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.863 [2024-07-24 23:18:15.073488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.863 qpair failed and we were unable to recover it. 00:32:42.863 [2024-07-24 23:18:15.073783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.863 [2024-07-24 23:18:15.074060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.863 [2024-07-24 23:18:15.074072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.863 qpair failed and we were unable to recover it. 00:32:42.863 [2024-07-24 23:18:15.074380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.863 [2024-07-24 23:18:15.074619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.863 [2024-07-24 23:18:15.074631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.863 qpair failed and we were unable to recover it. 00:32:42.863 [2024-07-24 23:18:15.074860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.863 [2024-07-24 23:18:15.075197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.863 [2024-07-24 23:18:15.075208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.863 qpair failed and we were unable to recover it. 
00:32:42.863 [2024-07-24 23:18:15.075399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.863 [2024-07-24 23:18:15.075576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.863 [2024-07-24 23:18:15.075588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.863 qpair failed and we were unable to recover it. 00:32:42.863 [2024-07-24 23:18:15.075904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.863 [2024-07-24 23:18:15.076167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.863 [2024-07-24 23:18:15.076179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.863 qpair failed and we were unable to recover it. 00:32:42.863 [2024-07-24 23:18:15.076350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.863 [2024-07-24 23:18:15.076637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.863 [2024-07-24 23:18:15.076648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.863 qpair failed and we were unable to recover it. 00:32:42.863 [2024-07-24 23:18:15.076888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.863 [2024-07-24 23:18:15.077128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.863 [2024-07-24 23:18:15.077139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.863 qpair failed and we were unable to recover it. 
00:32:42.863 [2024-07-24 23:18:15.077378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.863 [2024-07-24 23:18:15.077565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.863 [2024-07-24 23:18:15.077577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.863 qpair failed and we were unable to recover it. 00:32:42.863 [2024-07-24 23:18:15.077866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.863 [2024-07-24 23:18:15.078125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.863 [2024-07-24 23:18:15.078137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.863 qpair failed and we were unable to recover it. 00:32:42.863 [2024-07-24 23:18:15.078450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.863 [2024-07-24 23:18:15.078757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.863 [2024-07-24 23:18:15.078769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.863 qpair failed and we were unable to recover it. 00:32:42.863 [2024-07-24 23:18:15.078947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.863 [2024-07-24 23:18:15.079121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.863 [2024-07-24 23:18:15.079132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.863 qpair failed and we were unable to recover it. 
00:32:42.863 [2024-07-24 23:18:15.079379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.863 [2024-07-24 23:18:15.079670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.863 [2024-07-24 23:18:15.079683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.863 qpair failed and we were unable to recover it. 00:32:42.863 [2024-07-24 23:18:15.079995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.863 [2024-07-24 23:18:15.080170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.863 [2024-07-24 23:18:15.080181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.863 qpair failed and we were unable to recover it. 00:32:42.863 [2024-07-24 23:18:15.080421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.863 [2024-07-24 23:18:15.080713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.863 [2024-07-24 23:18:15.080727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.863 qpair failed and we were unable to recover it. 00:32:42.863 [2024-07-24 23:18:15.080996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.863 [2024-07-24 23:18:15.081218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.863 [2024-07-24 23:18:15.081230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.863 qpair failed and we were unable to recover it. 
00:32:42.863 [2024-07-24 23:18:15.081406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.863 [2024-07-24 23:18:15.081635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.863 [2024-07-24 23:18:15.081647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.863 qpair failed and we were unable to recover it. 00:32:42.863 [2024-07-24 23:18:15.081887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.863 [2024-07-24 23:18:15.082122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.863 [2024-07-24 23:18:15.082134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.863 qpair failed and we were unable to recover it. 00:32:42.863 [2024-07-24 23:18:15.082451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.863 [2024-07-24 23:18:15.082639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.863 [2024-07-24 23:18:15.082651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.864 qpair failed and we were unable to recover it. 00:32:42.864 [2024-07-24 23:18:15.082927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.864 [2024-07-24 23:18:15.083170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.864 [2024-07-24 23:18:15.083181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.864 qpair failed and we were unable to recover it. 
00:32:42.864 [2024-07-24 23:18:15.083452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.864 [2024-07-24 23:18:15.083766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.864 [2024-07-24 23:18:15.083778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.864 qpair failed and we were unable to recover it. 00:32:42.864 [2024-07-24 23:18:15.084027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.864 [2024-07-24 23:18:15.084316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.864 [2024-07-24 23:18:15.084327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.864 qpair failed and we were unable to recover it. 00:32:42.864 [2024-07-24 23:18:15.084651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.864 [2024-07-24 23:18:15.084889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.864 [2024-07-24 23:18:15.084902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.864 qpair failed and we were unable to recover it. 00:32:42.864 [2024-07-24 23:18:15.085215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.864 [2024-07-24 23:18:15.085479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.864 [2024-07-24 23:18:15.085491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.864 qpair failed and we were unable to recover it. 
00:32:42.864 [2024-07-24 23:18:15.085719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.864 [2024-07-24 23:18:15.085887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.864 [2024-07-24 23:18:15.085899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.864 qpair failed and we were unable to recover it. 00:32:42.864 [2024-07-24 23:18:15.086196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.864 [2024-07-24 23:18:15.086435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.864 [2024-07-24 23:18:15.086446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.864 qpair failed and we were unable to recover it. 00:32:42.864 [2024-07-24 23:18:15.086691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.864 [2024-07-24 23:18:15.086955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.864 [2024-07-24 23:18:15.086968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.864 qpair failed and we were unable to recover it. 00:32:42.864 [2024-07-24 23:18:15.087281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.864 [2024-07-24 23:18:15.087544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.864 [2024-07-24 23:18:15.087555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.864 qpair failed and we were unable to recover it. 
00:32:42.864 [2024-07-24 23:18:15.087800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.864 [2024-07-24 23:18:15.088087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.864 [2024-07-24 23:18:15.088099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.864 qpair failed and we were unable to recover it. 00:32:42.864 [2024-07-24 23:18:15.088283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.864 [2024-07-24 23:18:15.088540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.864 [2024-07-24 23:18:15.088552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.864 qpair failed and we were unable to recover it. 00:32:42.864 [2024-07-24 23:18:15.088861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.864 [2024-07-24 23:18:15.089112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.864 [2024-07-24 23:18:15.089124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.864 qpair failed and we were unable to recover it. 00:32:42.864 [2024-07-24 23:18:15.089414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.864 [2024-07-24 23:18:15.089724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.864 [2024-07-24 23:18:15.089736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.864 qpair failed and we were unable to recover it. 
00:32:42.864 [2024-07-24 23:18:15.090047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.864 [2024-07-24 23:18:15.090221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.864 [2024-07-24 23:18:15.090233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.864 qpair failed and we were unable to recover it. 00:32:42.864 [2024-07-24 23:18:15.090565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.864 [2024-07-24 23:18:15.090786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.864 [2024-07-24 23:18:15.090799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.864 qpair failed and we were unable to recover it. 00:32:42.864 [2024-07-24 23:18:15.091033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.864 [2024-07-24 23:18:15.091202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.864 [2024-07-24 23:18:15.091214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.864 qpair failed and we were unable to recover it. 00:32:42.864 [2024-07-24 23:18:15.091457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.864 [2024-07-24 23:18:15.091766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.864 [2024-07-24 23:18:15.091778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.864 qpair failed and we were unable to recover it. 
00:32:42.864 [2024-07-24 23:18:15.091958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.864 [2024-07-24 23:18:15.092211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.864 [2024-07-24 23:18:15.092223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.864 qpair failed and we were unable to recover it. 00:32:42.864 [2024-07-24 23:18:15.092565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.864 [2024-07-24 23:18:15.092860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.864 [2024-07-24 23:18:15.092872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.864 qpair failed and we were unable to recover it. 00:32:42.864 [2024-07-24 23:18:15.093192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.864 [2024-07-24 23:18:15.093360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.864 [2024-07-24 23:18:15.093372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.864 qpair failed and we were unable to recover it. 00:32:42.864 [2024-07-24 23:18:15.093735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.864 [2024-07-24 23:18:15.094014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.864 [2024-07-24 23:18:15.094026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.864 qpair failed and we were unable to recover it. 
00:32:42.864 [2024-07-24 23:18:15.094243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.864 [2024-07-24 23:18:15.094538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.864 [2024-07-24 23:18:15.094550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.864 qpair failed and we were unable to recover it. 00:32:42.864 [2024-07-24 23:18:15.094867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.864 [2024-07-24 23:18:15.095176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.864 [2024-07-24 23:18:15.095188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.864 qpair failed and we were unable to recover it. 00:32:42.864 [2024-07-24 23:18:15.095422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.864 [2024-07-24 23:18:15.095729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.864 [2024-07-24 23:18:15.095741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.864 qpair failed and we were unable to recover it. 00:32:42.864 [2024-07-24 23:18:15.095986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.864 [2024-07-24 23:18:15.096223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.864 [2024-07-24 23:18:15.096234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.864 qpair failed and we were unable to recover it. 
00:32:42.864 [2024-07-24 23:18:15.096555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.864 [2024-07-24 23:18:15.096817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.864 [2024-07-24 23:18:15.096829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.864 qpair failed and we were unable to recover it. 00:32:42.864 [2024-07-24 23:18:15.097052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.864 [2024-07-24 23:18:15.097339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.864 [2024-07-24 23:18:15.097351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.864 qpair failed and we were unable to recover it. 00:32:42.864 [2024-07-24 23:18:15.097657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.864 [2024-07-24 23:18:15.097903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.865 [2024-07-24 23:18:15.097915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.865 qpair failed and we were unable to recover it. 00:32:42.865 [2024-07-24 23:18:15.098082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.865 [2024-07-24 23:18:15.098377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.865 [2024-07-24 23:18:15.098388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.865 qpair failed and we were unable to recover it. 
00:32:42.865 [2024-07-24 23:18:15.098571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.865 [2024-07-24 23:18:15.098900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.865 [2024-07-24 23:18:15.098912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.865 qpair failed and we were unable to recover it. 00:32:42.865 [2024-07-24 23:18:15.099099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.865 [2024-07-24 23:18:15.099266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.865 [2024-07-24 23:18:15.099277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.865 qpair failed and we were unable to recover it. 00:32:42.865 [2024-07-24 23:18:15.099511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.865 [2024-07-24 23:18:15.099863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.865 [2024-07-24 23:18:15.099876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.865 qpair failed and we were unable to recover it. 00:32:42.865 [2024-07-24 23:18:15.100055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.865 [2024-07-24 23:18:15.100340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.865 [2024-07-24 23:18:15.100351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.865 qpair failed and we were unable to recover it. 
00:32:42.865 [2024-07-24 23:18:15.100611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.865 [2024-07-24 23:18:15.100905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.865 [2024-07-24 23:18:15.100917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.865 qpair failed and we were unable to recover it. 00:32:42.865 [2024-07-24 23:18:15.101140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.865 [2024-07-24 23:18:15.101357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.865 [2024-07-24 23:18:15.101369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.865 qpair failed and we were unable to recover it. 00:32:42.865 [2024-07-24 23:18:15.101660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.865 [2024-07-24 23:18:15.101936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.865 [2024-07-24 23:18:15.101948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.865 qpair failed and we were unable to recover it. 00:32:42.865 [2024-07-24 23:18:15.102267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.865 [2024-07-24 23:18:15.102568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.865 [2024-07-24 23:18:15.102580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.865 qpair failed and we were unable to recover it. 
00:32:42.865 [2024-07-24 23:18:15.102894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.865 [2024-07-24 23:18:15.103134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.865 [2024-07-24 23:18:15.103146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.865 qpair failed and we were unable to recover it. 00:32:42.865 [2024-07-24 23:18:15.103442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.865 [2024-07-24 23:18:15.103750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.865 [2024-07-24 23:18:15.103763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.865 qpair failed and we were unable to recover it. 00:32:42.865 [2024-07-24 23:18:15.103950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.865 [2024-07-24 23:18:15.104184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.865 [2024-07-24 23:18:15.104196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.865 qpair failed and we were unable to recover it. 00:32:42.865 [2024-07-24 23:18:15.104538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.865 [2024-07-24 23:18:15.104811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.865 [2024-07-24 23:18:15.104823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.865 qpair failed and we were unable to recover it. 
00:32:42.865 [2024-07-24 23:18:15.105066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.865 [2024-07-24 23:18:15.105302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.865 [2024-07-24 23:18:15.105314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.865 qpair failed and we were unable to recover it. 00:32:42.865 [2024-07-24 23:18:15.105578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.865 [2024-07-24 23:18:15.105887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.865 [2024-07-24 23:18:15.105900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.865 qpair failed and we were unable to recover it. 00:32:42.865 [2024-07-24 23:18:15.106135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.865 [2024-07-24 23:18:15.106352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.865 [2024-07-24 23:18:15.106364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.865 qpair failed and we were unable to recover it. 00:32:42.865 [2024-07-24 23:18:15.106679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.865 [2024-07-24 23:18:15.106915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.865 [2024-07-24 23:18:15.106927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.865 qpair failed and we were unable to recover it. 
00:32:42.868 [2024-07-24 23:18:15.153338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.868 [2024-07-24 23:18:15.153594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.868 [2024-07-24 23:18:15.153606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.868 qpair failed and we were unable to recover it. 00:32:42.868 [2024-07-24 23:18:15.153773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.868 [2024-07-24 23:18:15.154080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.868 [2024-07-24 23:18:15.154092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.868 qpair failed and we were unable to recover it. 00:32:42.868 [2024-07-24 23:18:15.154398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.868 [2024-07-24 23:18:15.154652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.868 [2024-07-24 23:18:15.154664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.868 qpair failed and we were unable to recover it. 00:32:42.868 [2024-07-24 23:18:15.154985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.868 [2024-07-24 23:18:15.155253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.868 [2024-07-24 23:18:15.155265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.868 qpair failed and we were unable to recover it. 
00:32:42.868 [2024-07-24 23:18:15.155569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.868 [2024-07-24 23:18:15.155761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.868 [2024-07-24 23:18:15.155773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.868 qpair failed and we were unable to recover it. 00:32:42.868 [2024-07-24 23:18:15.156066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.868 [2024-07-24 23:18:15.156396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.868 [2024-07-24 23:18:15.156408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.868 qpair failed and we were unable to recover it. 00:32:42.868 [2024-07-24 23:18:15.156697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.868 [2024-07-24 23:18:15.156945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.868 [2024-07-24 23:18:15.156957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.868 qpair failed and we were unable to recover it. 00:32:42.868 [2024-07-24 23:18:15.157260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.868 [2024-07-24 23:18:15.157588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.868 [2024-07-24 23:18:15.157599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.868 qpair failed and we were unable to recover it. 
00:32:42.868 [2024-07-24 23:18:15.157785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.868 [2024-07-24 23:18:15.158095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.869 [2024-07-24 23:18:15.158106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.869 qpair failed and we were unable to recover it. 00:32:42.869 [2024-07-24 23:18:15.158393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.869 [2024-07-24 23:18:15.158662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.869 [2024-07-24 23:18:15.158674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.869 qpair failed and we were unable to recover it. 00:32:42.869 [2024-07-24 23:18:15.159003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.869 [2024-07-24 23:18:15.159291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.869 [2024-07-24 23:18:15.159303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.869 qpair failed and we were unable to recover it. 00:32:42.869 [2024-07-24 23:18:15.159630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.869 [2024-07-24 23:18:15.159936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.869 [2024-07-24 23:18:15.159948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.869 qpair failed and we were unable to recover it. 
00:32:42.869 [2024-07-24 23:18:15.160168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.869 [2024-07-24 23:18:15.160338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.869 [2024-07-24 23:18:15.160350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.869 qpair failed and we were unable to recover it. 00:32:42.869 [2024-07-24 23:18:15.160654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.869 [2024-07-24 23:18:15.160894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.869 [2024-07-24 23:18:15.160907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.869 qpair failed and we were unable to recover it. 00:32:42.869 [2024-07-24 23:18:15.161206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.869 [2024-07-24 23:18:15.161446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.869 [2024-07-24 23:18:15.161458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.869 qpair failed and we were unable to recover it. 00:32:42.869 [2024-07-24 23:18:15.161742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.869 [2024-07-24 23:18:15.161986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.869 [2024-07-24 23:18:15.161998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.869 qpair failed and we were unable to recover it. 
00:32:42.869 [2024-07-24 23:18:15.162294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.869 [2024-07-24 23:18:15.162630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.869 [2024-07-24 23:18:15.162641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.869 qpair failed and we were unable to recover it. 00:32:42.869 [2024-07-24 23:18:15.162864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.869 [2024-07-24 23:18:15.163101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.869 [2024-07-24 23:18:15.163112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.869 qpair failed and we were unable to recover it. 00:32:42.869 [2024-07-24 23:18:15.163357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.869 [2024-07-24 23:18:15.163685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.869 [2024-07-24 23:18:15.163696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.869 qpair failed and we were unable to recover it. 00:32:42.869 [2024-07-24 23:18:15.163974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.869 [2024-07-24 23:18:15.164296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.869 [2024-07-24 23:18:15.164307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.869 qpair failed and we were unable to recover it. 
00:32:42.869 [2024-07-24 23:18:15.164634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.869 [2024-07-24 23:18:15.164866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.869 [2024-07-24 23:18:15.164878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.869 qpair failed and we were unable to recover it. 00:32:42.869 [2024-07-24 23:18:15.165119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.869 [2024-07-24 23:18:15.165357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.869 [2024-07-24 23:18:15.165369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.869 qpair failed and we were unable to recover it. 00:32:42.869 [2024-07-24 23:18:15.165704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.869 [2024-07-24 23:18:15.165888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.869 [2024-07-24 23:18:15.165900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.869 qpair failed and we were unable to recover it. 00:32:42.869 [2024-07-24 23:18:15.166079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.869 [2024-07-24 23:18:15.166320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.869 [2024-07-24 23:18:15.166331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.869 qpair failed and we were unable to recover it. 
00:32:42.869 [2024-07-24 23:18:15.166591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.869 [2024-07-24 23:18:15.166879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.869 [2024-07-24 23:18:15.166891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.869 qpair failed and we were unable to recover it. 00:32:42.869 [2024-07-24 23:18:15.167143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.869 [2024-07-24 23:18:15.167442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.869 [2024-07-24 23:18:15.167454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.869 qpair failed and we were unable to recover it. 00:32:42.869 [2024-07-24 23:18:15.167779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.869 [2024-07-24 23:18:15.168064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.869 [2024-07-24 23:18:15.168077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.869 qpair failed and we were unable to recover it. 00:32:42.869 [2024-07-24 23:18:15.168385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.869 [2024-07-24 23:18:15.168710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.869 [2024-07-24 23:18:15.168725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.869 qpair failed and we were unable to recover it. 
00:32:42.869 [2024-07-24 23:18:15.169054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.869 [2024-07-24 23:18:15.169305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.869 [2024-07-24 23:18:15.169317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.869 qpair failed and we were unable to recover it. 00:32:42.869 [2024-07-24 23:18:15.169644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.869 [2024-07-24 23:18:15.169879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.869 [2024-07-24 23:18:15.169891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.869 qpair failed and we were unable to recover it. 00:32:42.869 [2024-07-24 23:18:15.170204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.869 [2024-07-24 23:18:15.170489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.869 [2024-07-24 23:18:15.170501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.869 qpair failed and we were unable to recover it. 00:32:42.869 [2024-07-24 23:18:15.170759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.869 [2024-07-24 23:18:15.171012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.869 [2024-07-24 23:18:15.171023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.869 qpair failed and we were unable to recover it. 
00:32:42.869 [2024-07-24 23:18:15.171291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.869 [2024-07-24 23:18:15.171566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.869 [2024-07-24 23:18:15.171578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.869 qpair failed and we were unable to recover it. 00:32:42.869 [2024-07-24 23:18:15.171869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.869 [2024-07-24 23:18:15.172107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.869 [2024-07-24 23:18:15.172118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.869 qpair failed and we were unable to recover it. 00:32:42.869 [2024-07-24 23:18:15.172362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.869 [2024-07-24 23:18:15.172593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.869 [2024-07-24 23:18:15.172605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.869 qpair failed and we were unable to recover it. 00:32:42.869 [2024-07-24 23:18:15.172837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.869 [2024-07-24 23:18:15.173059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.869 [2024-07-24 23:18:15.173071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.869 qpair failed and we were unable to recover it. 
00:32:42.869 [2024-07-24 23:18:15.173258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.869 [2024-07-24 23:18:15.173601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.869 [2024-07-24 23:18:15.173613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.869 qpair failed and we were unable to recover it. 00:32:42.870 [2024-07-24 23:18:15.173876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.870 [2024-07-24 23:18:15.174116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.870 [2024-07-24 23:18:15.174127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.870 qpair failed and we were unable to recover it. 00:32:42.870 [2024-07-24 23:18:15.174375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.870 [2024-07-24 23:18:15.174553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.870 [2024-07-24 23:18:15.174565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.870 qpair failed and we were unable to recover it. 00:32:42.870 [2024-07-24 23:18:15.174829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.870 [2024-07-24 23:18:15.175143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.870 [2024-07-24 23:18:15.175156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.870 qpair failed and we were unable to recover it. 
00:32:42.870 [2024-07-24 23:18:15.175397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.870 [2024-07-24 23:18:15.175682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.870 [2024-07-24 23:18:15.175694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.870 qpair failed and we were unable to recover it. 00:32:42.870 [2024-07-24 23:18:15.175955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.870 [2024-07-24 23:18:15.176131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.870 [2024-07-24 23:18:15.176143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.870 qpair failed and we were unable to recover it. 00:32:42.870 [2024-07-24 23:18:15.176482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.870 [2024-07-24 23:18:15.176791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.870 [2024-07-24 23:18:15.176803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.870 qpair failed and we were unable to recover it. 00:32:42.870 [2024-07-24 23:18:15.177106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.870 [2024-07-24 23:18:15.177384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.870 [2024-07-24 23:18:15.177395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.870 qpair failed and we were unable to recover it. 
00:32:42.870 [2024-07-24 23:18:15.177644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.870 [2024-07-24 23:18:15.177916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.870 [2024-07-24 23:18:15.177929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.870 qpair failed and we were unable to recover it. 00:32:42.870 [2024-07-24 23:18:15.178244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.870 [2024-07-24 23:18:15.178489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.870 [2024-07-24 23:18:15.178501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.870 qpair failed and we were unable to recover it. 00:32:42.870 [2024-07-24 23:18:15.178767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.870 [2024-07-24 23:18:15.179066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.870 [2024-07-24 23:18:15.179078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.870 qpair failed and we were unable to recover it. 00:32:42.870 [2024-07-24 23:18:15.179250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.870 [2024-07-24 23:18:15.179447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.870 [2024-07-24 23:18:15.179459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.870 qpair failed and we were unable to recover it. 
00:32:42.870 [2024-07-24 23:18:15.179777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.870 [2024-07-24 23:18:15.179995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.870 [2024-07-24 23:18:15.180006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.870 qpair failed and we were unable to recover it. 00:32:42.870 [2024-07-24 23:18:15.180344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.870 [2024-07-24 23:18:15.180518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.870 [2024-07-24 23:18:15.180529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.870 qpair failed and we were unable to recover it. 00:32:42.870 [2024-07-24 23:18:15.180836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.870 [2024-07-24 23:18:15.181102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.870 [2024-07-24 23:18:15.181114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.870 qpair failed and we were unable to recover it. 00:32:42.870 [2024-07-24 23:18:15.181452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.870 [2024-07-24 23:18:15.181825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.870 [2024-07-24 23:18:15.181837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.870 qpair failed and we were unable to recover it. 
00:32:42.870 [2024-07-24 23:18:15.182060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.870 [2024-07-24 23:18:15.182249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.870 [2024-07-24 23:18:15.182260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.870 qpair failed and we were unable to recover it. 00:32:42.870 [2024-07-24 23:18:15.182434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.870 [2024-07-24 23:18:15.182604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.870 [2024-07-24 23:18:15.182617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.870 qpair failed and we were unable to recover it. 00:32:42.870 [2024-07-24 23:18:15.182839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.870 [2024-07-24 23:18:15.183144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.870 [2024-07-24 23:18:15.183155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.870 qpair failed and we were unable to recover it. 00:32:42.870 [2024-07-24 23:18:15.183345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.870 [2024-07-24 23:18:15.183633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.870 [2024-07-24 23:18:15.183645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.870 qpair failed and we were unable to recover it. 
00:32:42.870 [2024-07-24 23:18:15.183965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.870 [2024-07-24 23:18:15.184197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.870 [2024-07-24 23:18:15.184208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.870 qpair failed and we were unable to recover it. 00:32:42.870 [2024-07-24 23:18:15.184549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.870 [2024-07-24 23:18:15.184855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.870 [2024-07-24 23:18:15.184867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.870 qpair failed and we were unable to recover it. 00:32:42.870 [2024-07-24 23:18:15.185109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.870 [2024-07-24 23:18:15.185288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.870 [2024-07-24 23:18:15.185300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.870 qpair failed and we were unable to recover it. 00:32:42.870 [2024-07-24 23:18:15.185566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.870 [2024-07-24 23:18:15.185854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.870 [2024-07-24 23:18:15.185867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.870 qpair failed and we were unable to recover it. 
00:32:42.870 [2024-07-24 23:18:15.186211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.870 [2024-07-24 23:18:15.186550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.870 [2024-07-24 23:18:15.186562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.870 qpair failed and we were unable to recover it. 00:32:42.870 [2024-07-24 23:18:15.186849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.870 [2024-07-24 23:18:15.187136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.870 [2024-07-24 23:18:15.187147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.870 qpair failed and we were unable to recover it. 00:32:42.870 [2024-07-24 23:18:15.187391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.870 [2024-07-24 23:18:15.187672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.870 [2024-07-24 23:18:15.187684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.870 qpair failed and we were unable to recover it. 00:32:42.870 [2024-07-24 23:18:15.187962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.870 [2024-07-24 23:18:15.188251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.870 [2024-07-24 23:18:15.188263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.870 qpair failed and we were unable to recover it. 
00:32:42.870 [2024-07-24 23:18:15.188579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.870 [2024-07-24 23:18:15.188835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.870 [2024-07-24 23:18:15.188847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.870 qpair failed and we were unable to recover it. 00:32:42.870 [2024-07-24 23:18:15.189080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.871 [2024-07-24 23:18:15.189312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.871 [2024-07-24 23:18:15.189323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.871 qpair failed and we were unable to recover it. 00:32:42.871 [2024-07-24 23:18:15.189652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.871 [2024-07-24 23:18:15.189942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.871 [2024-07-24 23:18:15.189954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.871 qpair failed and we were unable to recover it. 00:32:42.871 [2024-07-24 23:18:15.190141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.871 [2024-07-24 23:18:15.190371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.871 [2024-07-24 23:18:15.190383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.871 qpair failed and we were unable to recover it. 
00:32:42.871 [2024-07-24 23:18:15.190620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.871 [2024-07-24 23:18:15.190919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.871 [2024-07-24 23:18:15.190931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.871 qpair failed and we were unable to recover it. 00:32:42.871 [2024-07-24 23:18:15.191192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.871 [2024-07-24 23:18:15.191432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.871 [2024-07-24 23:18:15.191444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.871 qpair failed and we were unable to recover it. 00:32:42.871 [2024-07-24 23:18:15.191760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.871 [2024-07-24 23:18:15.191998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.871 [2024-07-24 23:18:15.192009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.871 qpair failed and we were unable to recover it. 00:32:42.871 [2024-07-24 23:18:15.192322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.871 [2024-07-24 23:18:15.192550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.871 [2024-07-24 23:18:15.192561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.871 qpair failed and we were unable to recover it. 
00:32:42.871 [2024-07-24 23:18:15.192830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.871 [2024-07-24 23:18:15.193056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.871 [2024-07-24 23:18:15.193068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.871 qpair failed and we were unable to recover it. 00:32:42.871 [2024-07-24 23:18:15.193383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.871 [2024-07-24 23:18:15.193694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.871 [2024-07-24 23:18:15.193706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.871 qpair failed and we were unable to recover it. 00:32:42.871 [2024-07-24 23:18:15.194048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.871 [2024-07-24 23:18:15.194347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.871 [2024-07-24 23:18:15.194359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.871 qpair failed and we were unable to recover it. 00:32:42.871 [2024-07-24 23:18:15.194701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.871 [2024-07-24 23:18:15.194936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.871 [2024-07-24 23:18:15.194948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.871 qpair failed and we were unable to recover it. 
00:32:42.871 [2024-07-24 23:18:15.195237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.871 [2024-07-24 23:18:15.195418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.871 [2024-07-24 23:18:15.195429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.871 qpair failed and we were unable to recover it. 00:32:42.871 [2024-07-24 23:18:15.195692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.871 [2024-07-24 23:18:15.195997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.871 [2024-07-24 23:18:15.196009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.871 qpair failed and we were unable to recover it. 00:32:42.871 [2024-07-24 23:18:15.196202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.871 [2024-07-24 23:18:15.196536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.871 [2024-07-24 23:18:15.196548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.871 qpair failed and we were unable to recover it. 00:32:42.871 [2024-07-24 23:18:15.196780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.871 [2024-07-24 23:18:15.196968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.871 [2024-07-24 23:18:15.196980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.871 qpair failed and we were unable to recover it. 
00:32:42.871 [2024-07-24 23:18:15.197155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.871 [2024-07-24 23:18:15.197417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.871 [2024-07-24 23:18:15.197429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.871 qpair failed and we were unable to recover it. 00:32:42.871 [2024-07-24 23:18:15.197669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.871 [2024-07-24 23:18:15.197985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.871 [2024-07-24 23:18:15.197997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.871 qpair failed and we were unable to recover it. 00:32:42.871 [2024-07-24 23:18:15.198243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.871 [2024-07-24 23:18:15.198565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.871 [2024-07-24 23:18:15.198576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.871 qpair failed and we were unable to recover it. 00:32:42.871 [2024-07-24 23:18:15.198975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.871 [2024-07-24 23:18:15.199191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.871 [2024-07-24 23:18:15.199205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.871 qpair failed and we were unable to recover it. 
00:32:42.871 [2024-07-24 23:18:15.199470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.871 [2024-07-24 23:18:15.199691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.871 [2024-07-24 23:18:15.199703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.871 qpair failed and we were unable to recover it. 00:32:42.871 [2024-07-24 23:18:15.200016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.871 [2024-07-24 23:18:15.200357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.871 [2024-07-24 23:18:15.200369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.871 qpair failed and we were unable to recover it. 00:32:42.871 [2024-07-24 23:18:15.200635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.871 [2024-07-24 23:18:15.200945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.871 [2024-07-24 23:18:15.200957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.871 qpair failed and we were unable to recover it. 00:32:42.871 [2024-07-24 23:18:15.201246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.871 [2024-07-24 23:18:15.201505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.871 [2024-07-24 23:18:15.201517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.871 qpair failed and we were unable to recover it. 
00:32:42.871 [2024-07-24 23:18:15.201832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.871 [2024-07-24 23:18:15.202141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.871 [2024-07-24 23:18:15.202153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.871 qpair failed and we were unable to recover it. 00:32:42.871 [2024-07-24 23:18:15.202418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.872 [2024-07-24 23:18:15.202716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.872 [2024-07-24 23:18:15.202729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.872 qpair failed and we were unable to recover it. 00:32:42.872 [2024-07-24 23:18:15.203008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.872 [2024-07-24 23:18:15.203313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.872 [2024-07-24 23:18:15.203324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.872 qpair failed and we were unable to recover it. 00:32:42.872 [2024-07-24 23:18:15.203613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.872 [2024-07-24 23:18:15.203789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.872 [2024-07-24 23:18:15.203801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.872 qpair failed and we were unable to recover it. 
00:32:42.872 [2024-07-24 23:18:15.204041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.872 [2024-07-24 23:18:15.204281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.872 [2024-07-24 23:18:15.204293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.872 qpair failed and we were unable to recover it. 00:32:42.872 [2024-07-24 23:18:15.204596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.872 [2024-07-24 23:18:15.204832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.872 [2024-07-24 23:18:15.204847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.872 qpair failed and we were unable to recover it. 00:32:42.872 [2024-07-24 23:18:15.205141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.872 [2024-07-24 23:18:15.205438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.872 [2024-07-24 23:18:15.205450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.872 qpair failed and we were unable to recover it. 00:32:42.872 [2024-07-24 23:18:15.205684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.872 [2024-07-24 23:18:15.205980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.872 [2024-07-24 23:18:15.205992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.872 qpair failed and we were unable to recover it. 
00:32:42.872 [2024-07-24 23:18:15.206329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.872 [2024-07-24 23:18:15.206598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.872 [2024-07-24 23:18:15.206610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.872 qpair failed and we were unable to recover it. 00:32:42.872 [2024-07-24 23:18:15.206904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.872 [2024-07-24 23:18:15.207211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.872 [2024-07-24 23:18:15.207223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.872 qpair failed and we were unable to recover it. 00:32:42.872 [2024-07-24 23:18:15.207555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.872 [2024-07-24 23:18:15.207863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.872 [2024-07-24 23:18:15.207875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.872 qpair failed and we were unable to recover it. 00:32:42.872 [2024-07-24 23:18:15.208131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.872 [2024-07-24 23:18:15.208437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.872 [2024-07-24 23:18:15.208448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.872 qpair failed and we were unable to recover it. 
00:32:42.872 [2024-07-24 23:18:15.208741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.872 [2024-07-24 23:18:15.209008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.872 [2024-07-24 23:18:15.209020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.872 qpair failed and we were unable to recover it. 00:32:42.872 [2024-07-24 23:18:15.209263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.872 [2024-07-24 23:18:15.209596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.872 [2024-07-24 23:18:15.209609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.872 qpair failed and we were unable to recover it. 00:32:42.872 [2024-07-24 23:18:15.209915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.872 [2024-07-24 23:18:15.210222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.872 [2024-07-24 23:18:15.210234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.872 qpair failed and we were unable to recover it. 00:32:42.872 [2024-07-24 23:18:15.210505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.872 [2024-07-24 23:18:15.210738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.872 [2024-07-24 23:18:15.210752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.872 qpair failed and we were unable to recover it. 
00:32:42.872 [2024-07-24 23:18:15.210987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.872 [2024-07-24 23:18:15.211224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.872 [2024-07-24 23:18:15.211236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.872 qpair failed and we were unable to recover it. 00:32:42.872 [2024-07-24 23:18:15.211479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.872 [2024-07-24 23:18:15.211725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.872 [2024-07-24 23:18:15.211737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.872 qpair failed and we were unable to recover it. 00:32:42.872 [2024-07-24 23:18:15.211963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.872 [2024-07-24 23:18:15.212235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.872 [2024-07-24 23:18:15.212246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.872 qpair failed and we were unable to recover it. 00:32:42.872 [2024-07-24 23:18:15.212552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.872 [2024-07-24 23:18:15.212804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.872 [2024-07-24 23:18:15.212816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.872 qpair failed and we were unable to recover it. 
00:32:42.872 [2024-07-24 23:18:15.213052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.872 [2024-07-24 23:18:15.213362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.872 [2024-07-24 23:18:15.213374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.872 qpair failed and we were unable to recover it. 00:32:42.872 [2024-07-24 23:18:15.213698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.872 [2024-07-24 23:18:15.213925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.872 [2024-07-24 23:18:15.213937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.872 qpair failed and we were unable to recover it. 00:32:42.872 [2024-07-24 23:18:15.214181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.872 [2024-07-24 23:18:15.214510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.872 [2024-07-24 23:18:15.214522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.872 qpair failed and we were unable to recover it. 00:32:42.872 [2024-07-24 23:18:15.214843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.872 [2024-07-24 23:18:15.215128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.872 [2024-07-24 23:18:15.215140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.872 qpair failed and we were unable to recover it. 
00:32:42.872 [2024-07-24 23:18:15.215388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.872 [2024-07-24 23:18:15.215666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.872 [2024-07-24 23:18:15.215678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.872 qpair failed and we were unable to recover it. 00:32:42.872 [2024-07-24 23:18:15.215981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.872 [2024-07-24 23:18:15.216294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.872 [2024-07-24 23:18:15.216308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.872 qpair failed and we were unable to recover it. 00:32:42.872 [2024-07-24 23:18:15.216561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.872 [2024-07-24 23:18:15.216867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.872 [2024-07-24 23:18:15.216880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.872 qpair failed and we were unable to recover it. 00:32:42.872 [2024-07-24 23:18:15.217113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.872 [2024-07-24 23:18:15.217343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.872 [2024-07-24 23:18:15.217355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.872 qpair failed and we were unable to recover it. 
00:32:42.872 [2024-07-24 23:18:15.217693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.872 [2024-07-24 23:18:15.217926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.872 [2024-07-24 23:18:15.217938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.872 qpair failed and we were unable to recover it. 00:32:42.872 [2024-07-24 23:18:15.218204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.872 [2024-07-24 23:18:15.218443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.872 [2024-07-24 23:18:15.218455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.872 qpair failed and we were unable to recover it. 00:32:42.873 [2024-07-24 23:18:15.218771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.873 [2024-07-24 23:18:15.219079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.873 [2024-07-24 23:18:15.219090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.873 qpair failed and we were unable to recover it. 00:32:42.873 [2024-07-24 23:18:15.219380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.873 [2024-07-24 23:18:15.219675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.873 [2024-07-24 23:18:15.219687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.873 qpair failed and we were unable to recover it. 
00:32:42.873 [2024-07-24 23:18:15.220006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.873 [2024-07-24 23:18:15.220298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.873 [2024-07-24 23:18:15.220311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.873 qpair failed and we were unable to recover it. 00:32:42.873 [2024-07-24 23:18:15.220538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.873 [2024-07-24 23:18:15.220843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.873 [2024-07-24 23:18:15.220855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.873 qpair failed and we were unable to recover it. 00:32:42.873 [2024-07-24 23:18:15.221169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.873 [2024-07-24 23:18:15.221399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.873 [2024-07-24 23:18:15.221411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.873 qpair failed and we were unable to recover it. 00:32:42.873 [2024-07-24 23:18:15.221699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.873 [2024-07-24 23:18:15.221924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.873 [2024-07-24 23:18:15.221936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.873 qpair failed and we were unable to recover it. 
00:32:42.873 [2024-07-24 23:18:15.222195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.873 [2024-07-24 23:18:15.222362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.873 [2024-07-24 23:18:15.222373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.873 qpair failed and we were unable to recover it. 00:32:42.873 [2024-07-24 23:18:15.222612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.873 [2024-07-24 23:18:15.222851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.873 [2024-07-24 23:18:15.222863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.873 qpair failed and we were unable to recover it. 00:32:42.873 [2024-07-24 23:18:15.223109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.873 [2024-07-24 23:18:15.223363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.873 [2024-07-24 23:18:15.223375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.873 qpair failed and we were unable to recover it. 00:32:42.873 [2024-07-24 23:18:15.223604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.873 [2024-07-24 23:18:15.223868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.873 [2024-07-24 23:18:15.223880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.873 qpair failed and we were unable to recover it. 
00:32:42.873 [2024-07-24 23:18:15.224193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.873 [2024-07-24 23:18:15.224496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.873 [2024-07-24 23:18:15.224508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.873 qpair failed and we were unable to recover it.
00:32:42.873 [2024-07-24 23:18:15.224862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.873 [2024-07-24 23:18:15.225147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.873 [2024-07-24 23:18:15.225160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.873 qpair failed and we were unable to recover it.
00:32:42.873 [2024-07-24 23:18:15.225351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.873 [2024-07-24 23:18:15.225661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.873 [2024-07-24 23:18:15.225673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.873 qpair failed and we were unable to recover it.
00:32:42.873 [2024-07-24 23:18:15.225905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.873 [2024-07-24 23:18:15.226206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.873 [2024-07-24 23:18:15.226218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.873 qpair failed and we were unable to recover it.
00:32:42.873 [2024-07-24 23:18:15.226407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.873 [2024-07-24 23:18:15.226694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.873 [2024-07-24 23:18:15.226705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.873 qpair failed and we were unable to recover it.
00:32:42.873 [2024-07-24 23:18:15.226896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.873 [2024-07-24 23:18:15.227081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.873 [2024-07-24 23:18:15.227093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.873 qpair failed and we were unable to recover it.
00:32:42.873 [2024-07-24 23:18:15.227438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.873 [2024-07-24 23:18:15.227675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.873 [2024-07-24 23:18:15.227687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.873 qpair failed and we were unable to recover it.
00:32:42.873 [2024-07-24 23:18:15.227989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.873 [2024-07-24 23:18:15.228295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.873 [2024-07-24 23:18:15.228307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.873 qpair failed and we were unable to recover it.
00:32:42.873 [2024-07-24 23:18:15.228460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.873 [2024-07-24 23:18:15.228744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.873 [2024-07-24 23:18:15.228757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.873 qpair failed and we were unable to recover it.
00:32:42.873 [2024-07-24 23:18:15.229065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.873 [2024-07-24 23:18:15.229389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.873 [2024-07-24 23:18:15.229401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.873 qpair failed and we were unable to recover it.
00:32:42.873 [2024-07-24 23:18:15.229631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.873 [2024-07-24 23:18:15.229868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.873 [2024-07-24 23:18:15.229881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.873 qpair failed and we were unable to recover it.
00:32:42.873 [2024-07-24 23:18:15.230058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.873 [2024-07-24 23:18:15.230316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.873 [2024-07-24 23:18:15.230328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.873 qpair failed and we were unable to recover it.
00:32:42.873 [2024-07-24 23:18:15.230644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.873 [2024-07-24 23:18:15.230835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.873 [2024-07-24 23:18:15.230847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.873 qpair failed and we were unable to recover it.
00:32:42.873 [2024-07-24 23:18:15.231085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.873 [2024-07-24 23:18:15.231349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.873 [2024-07-24 23:18:15.231361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.873 qpair failed and we were unable to recover it.
00:32:42.873 [2024-07-24 23:18:15.231589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.873 [2024-07-24 23:18:15.231932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.873 [2024-07-24 23:18:15.231945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.873 qpair failed and we were unable to recover it.
00:32:42.873 [2024-07-24 23:18:15.232183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.873 [2024-07-24 23:18:15.232377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.873 [2024-07-24 23:18:15.232388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.873 qpair failed and we were unable to recover it.
00:32:42.873 [2024-07-24 23:18:15.232630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.873 [2024-07-24 23:18:15.232872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.873 [2024-07-24 23:18:15.232884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.873 qpair failed and we were unable to recover it.
00:32:42.873 [2024-07-24 23:18:15.233197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.873 [2024-07-24 23:18:15.233433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.873 [2024-07-24 23:18:15.233444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.873 qpair failed and we were unable to recover it.
00:32:42.873 [2024-07-24 23:18:15.233765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.874 [2024-07-24 23:18:15.234027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.874 [2024-07-24 23:18:15.234039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.874 qpair failed and we were unable to recover it.
00:32:42.874 [2024-07-24 23:18:15.234236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.874 [2024-07-24 23:18:15.234417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.874 [2024-07-24 23:18:15.234429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.874 qpair failed and we were unable to recover it.
00:32:42.874 [2024-07-24 23:18:15.234648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.874 [2024-07-24 23:18:15.234883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.874 [2024-07-24 23:18:15.234895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.874 qpair failed and we were unable to recover it.
00:32:42.874 [2024-07-24 23:18:15.235209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.874 [2024-07-24 23:18:15.235512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.874 [2024-07-24 23:18:15.235523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.874 qpair failed and we were unable to recover it.
00:32:42.874 [2024-07-24 23:18:15.235837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.874 [2024-07-24 23:18:15.236165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.874 [2024-07-24 23:18:15.236177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.874 qpair failed and we were unable to recover it.
00:32:42.874 [2024-07-24 23:18:15.236415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.874 [2024-07-24 23:18:15.236702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.874 [2024-07-24 23:18:15.236718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.874 qpair failed and we were unable to recover it.
00:32:42.874 [2024-07-24 23:18:15.236991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.874 [2024-07-24 23:18:15.237169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.874 [2024-07-24 23:18:15.237181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.874 qpair failed and we were unable to recover it.
00:32:42.874 [2024-07-24 23:18:15.237513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.874 [2024-07-24 23:18:15.237763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.874 [2024-07-24 23:18:15.237775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.874 qpair failed and we were unable to recover it.
00:32:42.874 [2024-07-24 23:18:15.237972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.874 [2024-07-24 23:18:15.238283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.874 [2024-07-24 23:18:15.238294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.874 qpair failed and we were unable to recover it.
00:32:42.874 [2024-07-24 23:18:15.238615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.874 [2024-07-24 23:18:15.238928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.874 [2024-07-24 23:18:15.238940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.874 qpair failed and we were unable to recover it.
00:32:42.874 [2024-07-24 23:18:15.239219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.874 [2024-07-24 23:18:15.239468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.874 [2024-07-24 23:18:15.239480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.874 qpair failed and we were unable to recover it.
00:32:42.874 [2024-07-24 23:18:15.239712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.874 [2024-07-24 23:18:15.239932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.874 [2024-07-24 23:18:15.239944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.874 qpair failed and we were unable to recover it.
00:32:42.874 [2024-07-24 23:18:15.240236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.874 [2024-07-24 23:18:15.240558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.874 [2024-07-24 23:18:15.240570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.874 qpair failed and we were unable to recover it.
00:32:42.874 [2024-07-24 23:18:15.240801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.874 [2024-07-24 23:18:15.241088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.874 [2024-07-24 23:18:15.241100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.874 qpair failed and we were unable to recover it.
00:32:42.874 [2024-07-24 23:18:15.241392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.874 [2024-07-24 23:18:15.241680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.874 [2024-07-24 23:18:15.241693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.874 qpair failed and we were unable to recover it.
00:32:42.874 [2024-07-24 23:18:15.241997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.874 [2024-07-24 23:18:15.242333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.874 [2024-07-24 23:18:15.242345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.874 qpair failed and we were unable to recover it.
00:32:42.874 [2024-07-24 23:18:15.242525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.874 [2024-07-24 23:18:15.242743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.874 [2024-07-24 23:18:15.242755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.874 qpair failed and we were unable to recover it.
00:32:42.874 [2024-07-24 23:18:15.243021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.874 [2024-07-24 23:18:15.243204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.874 [2024-07-24 23:18:15.243216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.874 qpair failed and we were unable to recover it.
00:32:42.874 [2024-07-24 23:18:15.243393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.874 [2024-07-24 23:18:15.243628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.874 [2024-07-24 23:18:15.243639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.874 qpair failed and we were unable to recover it.
00:32:42.874 [2024-07-24 23:18:15.243929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.874 [2024-07-24 23:18:15.244162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.874 [2024-07-24 23:18:15.244175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.874 qpair failed and we were unable to recover it.
00:32:42.874 [2024-07-24 23:18:15.244407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.874 [2024-07-24 23:18:15.244738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.874 [2024-07-24 23:18:15.244750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.874 qpair failed and we were unable to recover it.
00:32:42.874 [2024-07-24 23:18:15.245081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.874 [2024-07-24 23:18:15.245342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.874 [2024-07-24 23:18:15.245354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.874 qpair failed and we were unable to recover it.
00:32:42.874 [2024-07-24 23:18:15.245593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.874 [2024-07-24 23:18:15.245839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.874 [2024-07-24 23:18:15.245851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.874 qpair failed and we were unable to recover it.
00:32:42.874 [2024-07-24 23:18:15.246090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.874 [2024-07-24 23:18:15.246401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.874 [2024-07-24 23:18:15.246413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.874 qpair failed and we were unable to recover it.
00:32:42.874 [2024-07-24 23:18:15.246725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.874 [2024-07-24 23:18:15.247030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.874 [2024-07-24 23:18:15.247041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.874 qpair failed and we were unable to recover it.
00:32:42.874 [2024-07-24 23:18:15.247339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.874 [2024-07-24 23:18:15.247504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.874 [2024-07-24 23:18:15.247516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.874 qpair failed and we were unable to recover it.
00:32:42.874 [2024-07-24 23:18:15.247740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.874 [2024-07-24 23:18:15.248025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.874 [2024-07-24 23:18:15.248037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.874 qpair failed and we were unable to recover it.
00:32:42.874 [2024-07-24 23:18:15.248327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.874 [2024-07-24 23:18:15.248648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.874 [2024-07-24 23:18:15.248659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.874 qpair failed and we were unable to recover it.
00:32:42.874 [2024-07-24 23:18:15.248933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.874 [2024-07-24 23:18:15.249171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.874 [2024-07-24 23:18:15.249183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.875 qpair failed and we were unable to recover it.
00:32:42.875 [2024-07-24 23:18:15.249483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.875 [2024-07-24 23:18:15.249804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.875 [2024-07-24 23:18:15.249817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.875 qpair failed and we were unable to recover it.
00:32:42.875 [2024-07-24 23:18:15.250057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.875 [2024-07-24 23:18:15.250249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.875 [2024-07-24 23:18:15.250261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.875 qpair failed and we were unable to recover it.
00:32:42.875 [2024-07-24 23:18:15.250618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.875 [2024-07-24 23:18:15.250797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.875 [2024-07-24 23:18:15.250809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.875 qpair failed and we were unable to recover it.
00:32:42.875 [2024-07-24 23:18:15.251123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.875 [2024-07-24 23:18:15.251297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.875 [2024-07-24 23:18:15.251309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.875 qpair failed and we were unable to recover it.
00:32:42.875 [2024-07-24 23:18:15.251605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.875 [2024-07-24 23:18:15.251872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.875 [2024-07-24 23:18:15.251884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.875 qpair failed and we were unable to recover it.
00:32:42.875 [2024-07-24 23:18:15.252124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.875 [2024-07-24 23:18:15.252366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.875 [2024-07-24 23:18:15.252378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.875 qpair failed and we were unable to recover it.
00:32:42.875 [2024-07-24 23:18:15.252668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.875 [2024-07-24 23:18:15.252969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.875 [2024-07-24 23:18:15.252981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.875 qpair failed and we were unable to recover it.
00:32:42.875 [2024-07-24 23:18:15.253252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.875 [2024-07-24 23:18:15.253510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.875 [2024-07-24 23:18:15.253521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.875 qpair failed and we were unable to recover it.
00:32:42.875 [2024-07-24 23:18:15.253767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.875 [2024-07-24 23:18:15.254062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.875 [2024-07-24 23:18:15.254074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.875 qpair failed and we were unable to recover it.
00:32:42.875 [2024-07-24 23:18:15.254335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.875 [2024-07-24 23:18:15.254592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.875 [2024-07-24 23:18:15.254604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.875 qpair failed and we were unable to recover it.
00:32:42.875 [2024-07-24 23:18:15.254929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.875 [2024-07-24 23:18:15.255172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.875 [2024-07-24 23:18:15.255184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.875 qpair failed and we were unable to recover it.
00:32:42.875 [2024-07-24 23:18:15.255530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.875 [2024-07-24 23:18:15.255764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.875 [2024-07-24 23:18:15.255776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.875 qpair failed and we were unable to recover it.
00:32:42.875 [2024-07-24 23:18:15.255997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.875 [2024-07-24 23:18:15.256179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.875 [2024-07-24 23:18:15.256192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.875 qpair failed and we were unable to recover it.
00:32:42.875 [2024-07-24 23:18:15.256432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.875 [2024-07-24 23:18:15.256741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.875 [2024-07-24 23:18:15.256754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.875 qpair failed and we were unable to recover it.
00:32:42.875 [2024-07-24 23:18:15.256977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.875 [2024-07-24 23:18:15.257209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.875 [2024-07-24 23:18:15.257221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.875 qpair failed and we were unable to recover it.
00:32:42.875 [2024-07-24 23:18:15.257461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.875 [2024-07-24 23:18:15.257696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.875 [2024-07-24 23:18:15.257708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.875 qpair failed and we were unable to recover it.
00:32:42.875 [2024-07-24 23:18:15.258021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.875 [2024-07-24 23:18:15.258260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.875 [2024-07-24 23:18:15.258272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.875 qpair failed and we were unable to recover it.
00:32:42.875 [2024-07-24 23:18:15.258593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.875 [2024-07-24 23:18:15.258936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.875 [2024-07-24 23:18:15.258948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.875 qpair failed and we were unable to recover it.
00:32:42.875 [2024-07-24 23:18:15.259265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.875 [2024-07-24 23:18:15.259527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.875 [2024-07-24 23:18:15.259539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.875 qpair failed and we were unable to recover it.
00:32:42.875 [2024-07-24 23:18:15.259768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.875 [2024-07-24 23:18:15.260065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.875 [2024-07-24 23:18:15.260076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.875 qpair failed and we were unable to recover it.
00:32:42.875 [2024-07-24 23:18:15.260261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.875 [2024-07-24 23:18:15.260571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.875 [2024-07-24 23:18:15.260584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.875 qpair failed and we were unable to recover it.
00:32:42.875 [2024-07-24 23:18:15.260867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.875 [2024-07-24 23:18:15.261092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.875 [2024-07-24 23:18:15.261104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.875 qpair failed and we were unable to recover it.
00:32:42.875 [2024-07-24 23:18:15.261391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.875 [2024-07-24 23:18:15.261585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.875 [2024-07-24 23:18:15.261597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.875 qpair failed and we were unable to recover it.
00:32:42.875 [2024-07-24 23:18:15.261886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.875 [2024-07-24 23:18:15.262119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.875 [2024-07-24 23:18:15.262131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.875 qpair failed and we were unable to recover it.
00:32:42.875 [2024-07-24 23:18:15.262302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.875 [2024-07-24 23:18:15.262603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.876 [2024-07-24 23:18:15.262615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.876 qpair failed and we were unable to recover it.
00:32:42.876 [2024-07-24 23:18:15.262870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.876 [2024-07-24 23:18:15.263196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.876 [2024-07-24 23:18:15.263208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.876 qpair failed and we were unable to recover it.
00:32:42.876 [2024-07-24 23:18:15.263519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.876 [2024-07-24 23:18:15.263828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.876 [2024-07-24 23:18:15.263840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.876 qpair failed and we were unable to recover it.
00:32:42.876 [2024-07-24 23:18:15.264040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.876 [2024-07-24 23:18:15.264261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.876 [2024-07-24 23:18:15.264273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.876 qpair failed and we were unable to recover it.
00:32:42.876 [2024-07-24 23:18:15.264588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.876 [2024-07-24 23:18:15.264935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.876 [2024-07-24 23:18:15.264947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.876 qpair failed and we were unable to recover it.
00:32:42.876 [2024-07-24 23:18:15.265247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.876 [2024-07-24 23:18:15.265444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.876 [2024-07-24 23:18:15.265456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.876 qpair failed and we were unable to recover it.
00:32:42.876 [2024-07-24 23:18:15.265779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.876 [2024-07-24 23:18:15.266086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.876 [2024-07-24 23:18:15.266098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.876 qpair failed and we were unable to recover it.
00:32:42.876 [2024-07-24 23:18:15.266290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.876 [2024-07-24 23:18:15.266543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.876 [2024-07-24 23:18:15.266555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.876 qpair failed and we were unable to recover it.
00:32:42.876 [2024-07-24 23:18:15.266872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.876 [2024-07-24 23:18:15.267179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.876 [2024-07-24 23:18:15.267191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.876 qpair failed and we were unable to recover it.
00:32:42.876 [2024-07-24 23:18:15.267457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.876 [2024-07-24 23:18:15.267754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.876 [2024-07-24 23:18:15.267767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.876 qpair failed and we were unable to recover it.
00:32:42.876 [2024-07-24 23:18:15.267935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.876 [2024-07-24 23:18:15.268174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.876 [2024-07-24 23:18:15.268185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.876 qpair failed and we were unable to recover it.
00:32:42.876 [2024-07-24 23:18:15.268515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.876 [2024-07-24 23:18:15.268759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.876 [2024-07-24 23:18:15.268772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.876 qpair failed and we were unable to recover it.
00:32:42.876 [2024-07-24 23:18:15.269012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.876 [2024-07-24 23:18:15.269318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.876 [2024-07-24 23:18:15.269330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.876 qpair failed and we were unable to recover it.
00:32:42.876 [2024-07-24 23:18:15.269644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.876 [2024-07-24 23:18:15.269936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.876 [2024-07-24 23:18:15.269949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.876 qpair failed and we were unable to recover it.
00:32:42.876 [2024-07-24 23:18:15.270210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.876 [2024-07-24 23:18:15.270443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.876 [2024-07-24 23:18:15.270455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.876 qpair failed and we were unable to recover it.
00:32:42.876 [2024-07-24 23:18:15.270796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.876 [2024-07-24 23:18:15.271076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.876 [2024-07-24 23:18:15.271088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.876 qpair failed and we were unable to recover it.
00:32:42.876 [2024-07-24 23:18:15.271427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.876 [2024-07-24 23:18:15.271734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.876 [2024-07-24 23:18:15.271746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:42.876 qpair failed and we were unable to recover it.
00:32:42.876 [2024-07-24 23:18:15.272039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.876 [2024-07-24 23:18:15.272326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.876 [2024-07-24 23:18:15.272337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.876 qpair failed and we were unable to recover it. 00:32:42.876 [2024-07-24 23:18:15.272669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.876 [2024-07-24 23:18:15.272977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.876 [2024-07-24 23:18:15.272989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.876 qpair failed and we were unable to recover it. 00:32:42.876 [2024-07-24 23:18:15.273160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.876 [2024-07-24 23:18:15.273342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.876 [2024-07-24 23:18:15.273354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.876 qpair failed and we were unable to recover it. 00:32:42.876 [2024-07-24 23:18:15.273617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.876 [2024-07-24 23:18:15.273890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.876 [2024-07-24 23:18:15.273902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.876 qpair failed and we were unable to recover it. 
00:32:42.876 [2024-07-24 23:18:15.274122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.876 [2024-07-24 23:18:15.274384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.876 [2024-07-24 23:18:15.274396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.876 qpair failed and we were unable to recover it. 00:32:42.876 [2024-07-24 23:18:15.274708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.876 [2024-07-24 23:18:15.275029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.876 [2024-07-24 23:18:15.275042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.876 qpair failed and we were unable to recover it. 00:32:42.876 [2024-07-24 23:18:15.275353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.876 [2024-07-24 23:18:15.275642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.876 [2024-07-24 23:18:15.275654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.876 qpair failed and we were unable to recover it. 00:32:42.876 [2024-07-24 23:18:15.275954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.876 [2024-07-24 23:18:15.276184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.876 [2024-07-24 23:18:15.276196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:42.876 qpair failed and we were unable to recover it. 
00:32:42.876 [2024-07-24 23:18:15.276489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.145 [2024-07-24 23:18:15.276817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.145 [2024-07-24 23:18:15.276830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.145 qpair failed and we were unable to recover it. 00:32:43.145 [2024-07-24 23:18:15.277075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.145 [2024-07-24 23:18:15.277313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.145 [2024-07-24 23:18:15.277325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.145 qpair failed and we were unable to recover it. 00:32:43.145 [2024-07-24 23:18:15.277659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.145 [2024-07-24 23:18:15.277999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.145 [2024-07-24 23:18:15.278011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.145 qpair failed and we were unable to recover it. 00:32:43.145 [2024-07-24 23:18:15.278347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.145 [2024-07-24 23:18:15.278642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.145 [2024-07-24 23:18:15.278653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.145 qpair failed and we were unable to recover it. 
00:32:43.145 [2024-07-24 23:18:15.278947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.145 [2024-07-24 23:18:15.279182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.145 [2024-07-24 23:18:15.279194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.145 qpair failed and we were unable to recover it. 00:32:43.145 [2024-07-24 23:18:15.279514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.145 [2024-07-24 23:18:15.279824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.145 [2024-07-24 23:18:15.279836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.145 qpair failed and we were unable to recover it. 00:32:43.145 [2024-07-24 23:18:15.280070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.145 [2024-07-24 23:18:15.280324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.145 [2024-07-24 23:18:15.280336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.145 qpair failed and we were unable to recover it. 00:32:43.145 [2024-07-24 23:18:15.280647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.145 [2024-07-24 23:18:15.280950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.145 [2024-07-24 23:18:15.280962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.145 qpair failed and we were unable to recover it. 
00:32:43.145 [2024-07-24 23:18:15.281273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.145 [2024-07-24 23:18:15.281465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.145 [2024-07-24 23:18:15.281477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.145 qpair failed and we were unable to recover it. 00:32:43.145 [2024-07-24 23:18:15.281732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.145 [2024-07-24 23:18:15.282009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.145 [2024-07-24 23:18:15.282021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.145 qpair failed and we were unable to recover it. 00:32:43.145 [2024-07-24 23:18:15.282265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.145 [2024-07-24 23:18:15.282571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.145 [2024-07-24 23:18:15.282582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.145 qpair failed and we were unable to recover it. 00:32:43.145 [2024-07-24 23:18:15.282911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.145 [2024-07-24 23:18:15.283168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.145 [2024-07-24 23:18:15.283180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.145 qpair failed and we were unable to recover it. 
00:32:43.145 [2024-07-24 23:18:15.283517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.146 [2024-07-24 23:18:15.283748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.146 [2024-07-24 23:18:15.283761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.146 qpair failed and we were unable to recover it. 00:32:43.146 [2024-07-24 23:18:15.283928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.146 [2024-07-24 23:18:15.284077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.146 [2024-07-24 23:18:15.284090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.146 qpair failed and we were unable to recover it. 00:32:43.146 [2024-07-24 23:18:15.284404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.146 [2024-07-24 23:18:15.284570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.146 [2024-07-24 23:18:15.284581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.146 qpair failed and we were unable to recover it. 00:32:43.146 [2024-07-24 23:18:15.284917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.146 [2024-07-24 23:18:15.285158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.146 [2024-07-24 23:18:15.285169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.146 qpair failed and we were unable to recover it. 
00:32:43.146 [2024-07-24 23:18:15.285407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.146 [2024-07-24 23:18:15.285720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.146 [2024-07-24 23:18:15.285732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.146 qpair failed and we were unable to recover it. 00:32:43.146 [2024-07-24 23:18:15.285989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.146 [2024-07-24 23:18:15.286228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.146 [2024-07-24 23:18:15.286241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.146 qpair failed and we were unable to recover it. 00:32:43.146 [2024-07-24 23:18:15.286500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.146 [2024-07-24 23:18:15.286734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.146 [2024-07-24 23:18:15.286746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.146 qpair failed and we were unable to recover it. 00:32:43.146 [2024-07-24 23:18:15.286959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.146 [2024-07-24 23:18:15.287290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.146 [2024-07-24 23:18:15.287302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.146 qpair failed and we were unable to recover it. 
00:32:43.146 [2024-07-24 23:18:15.287619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.146 [2024-07-24 23:18:15.287933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.146 [2024-07-24 23:18:15.287949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.146 qpair failed and we were unable to recover it. 00:32:43.146 [2024-07-24 23:18:15.288191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.146 [2024-07-24 23:18:15.288424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.146 [2024-07-24 23:18:15.288436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.146 qpair failed and we were unable to recover it. 00:32:43.146 [2024-07-24 23:18:15.288801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.146 [2024-07-24 23:18:15.289044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.146 [2024-07-24 23:18:15.289055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.146 qpair failed and we were unable to recover it. 00:32:43.146 [2024-07-24 23:18:15.289299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.146 [2024-07-24 23:18:15.289642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.146 [2024-07-24 23:18:15.289654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.146 qpair failed and we were unable to recover it. 
00:32:43.146 [2024-07-24 23:18:15.289890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.146 [2024-07-24 23:18:15.290064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.146 [2024-07-24 23:18:15.290076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.146 qpair failed and we were unable to recover it. 00:32:43.146 [2024-07-24 23:18:15.290363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.146 [2024-07-24 23:18:15.290648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.146 [2024-07-24 23:18:15.290659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.146 qpair failed and we were unable to recover it. 00:32:43.146 [2024-07-24 23:18:15.290957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.146 [2024-07-24 23:18:15.291241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.146 [2024-07-24 23:18:15.291253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.146 qpair failed and we were unable to recover it. 00:32:43.146 [2024-07-24 23:18:15.291513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.146 [2024-07-24 23:18:15.291828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.146 [2024-07-24 23:18:15.291840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.146 qpair failed and we were unable to recover it. 
00:32:43.146 [2024-07-24 23:18:15.292107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.146 [2024-07-24 23:18:15.292418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.146 [2024-07-24 23:18:15.292429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.146 qpair failed and we were unable to recover it. 00:32:43.146 [2024-07-24 23:18:15.292673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.146 [2024-07-24 23:18:15.292896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.146 [2024-07-24 23:18:15.292908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.146 qpair failed and we were unable to recover it. 00:32:43.146 [2024-07-24 23:18:15.293102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.146 [2024-07-24 23:18:15.293341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.146 [2024-07-24 23:18:15.293354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.146 qpair failed and we were unable to recover it. 00:32:43.146 [2024-07-24 23:18:15.293615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.146 [2024-07-24 23:18:15.293854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.146 [2024-07-24 23:18:15.293866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.146 qpair failed and we were unable to recover it. 
00:32:43.146 [2024-07-24 23:18:15.294108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.146 [2024-07-24 23:18:15.294349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.146 [2024-07-24 23:18:15.294361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.146 qpair failed and we were unable to recover it. 00:32:43.146 [2024-07-24 23:18:15.294681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.146 [2024-07-24 23:18:15.294860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.146 [2024-07-24 23:18:15.294872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.146 qpair failed and we were unable to recover it. 00:32:43.146 [2024-07-24 23:18:15.295163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.146 [2024-07-24 23:18:15.295430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.146 [2024-07-24 23:18:15.295442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.146 qpair failed and we were unable to recover it. 00:32:43.146 [2024-07-24 23:18:15.295708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.146 [2024-07-24 23:18:15.295947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.146 [2024-07-24 23:18:15.295960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.146 qpair failed and we were unable to recover it. 
00:32:43.146 [2024-07-24 23:18:15.296228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.146 [2024-07-24 23:18:15.296419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.146 [2024-07-24 23:18:15.296430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.146 qpair failed and we were unable to recover it. 00:32:43.146 [2024-07-24 23:18:15.296660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.146 [2024-07-24 23:18:15.296952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.146 [2024-07-24 23:18:15.296964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.146 qpair failed and we were unable to recover it. 00:32:43.146 [2024-07-24 23:18:15.297230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.146 [2024-07-24 23:18:15.297486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.146 [2024-07-24 23:18:15.297498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.146 qpair failed and we were unable to recover it. 00:32:43.146 [2024-07-24 23:18:15.297727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.146 [2024-07-24 23:18:15.298013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.146 [2024-07-24 23:18:15.298024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.146 qpair failed and we were unable to recover it. 
00:32:43.146 [2024-07-24 23:18:15.298265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.146 [2024-07-24 23:18:15.298606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.147 [2024-07-24 23:18:15.298619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.147 qpair failed and we were unable to recover it. 00:32:43.147 [2024-07-24 23:18:15.298924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.147 [2024-07-24 23:18:15.299156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.147 [2024-07-24 23:18:15.299168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.147 qpair failed and we were unable to recover it. 00:32:43.147 [2024-07-24 23:18:15.299494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.147 [2024-07-24 23:18:15.299733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.147 [2024-07-24 23:18:15.299745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.147 qpair failed and we were unable to recover it. 00:32:43.147 [2024-07-24 23:18:15.299934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.147 [2024-07-24 23:18:15.300179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.147 [2024-07-24 23:18:15.300191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.147 qpair failed and we were unable to recover it. 
00:32:43.147 [2024-07-24 23:18:15.300381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.147 [2024-07-24 23:18:15.300684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.147 [2024-07-24 23:18:15.300696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.147 qpair failed and we were unable to recover it. 00:32:43.147 [2024-07-24 23:18:15.301007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.147 [2024-07-24 23:18:15.301317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.147 [2024-07-24 23:18:15.301329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.147 qpair failed and we were unable to recover it. 00:32:43.147 [2024-07-24 23:18:15.301633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.147 [2024-07-24 23:18:15.301958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.147 [2024-07-24 23:18:15.301970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.147 qpair failed and we were unable to recover it. 00:32:43.147 [2024-07-24 23:18:15.302162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.147 [2024-07-24 23:18:15.302344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.147 [2024-07-24 23:18:15.302356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.147 qpair failed and we were unable to recover it. 
00:32:43.147 [2024-07-24 23:18:15.302586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.147 [2024-07-24 23:18:15.302916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.147 [2024-07-24 23:18:15.302928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.147 qpair failed and we were unable to recover it. 00:32:43.147 [2024-07-24 23:18:15.303108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.147 [2024-07-24 23:18:15.303348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.147 [2024-07-24 23:18:15.303360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.147 qpair failed and we were unable to recover it. 00:32:43.147 [2024-07-24 23:18:15.303674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.147 [2024-07-24 23:18:15.303939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.147 [2024-07-24 23:18:15.303953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.147 qpair failed and we were unable to recover it. 00:32:43.147 [2024-07-24 23:18:15.304176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.147 [2024-07-24 23:18:15.304533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.147 [2024-07-24 23:18:15.304545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.147 qpair failed and we were unable to recover it. 
00:32:43.147 [2024-07-24 23:18:15.304861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.147 [2024-07-24 23:18:15.305168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.147 [2024-07-24 23:18:15.305180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.147 qpair failed and we were unable to recover it. 00:32:43.147 [2024-07-24 23:18:15.305370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.147 [2024-07-24 23:18:15.305535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.147 [2024-07-24 23:18:15.305547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.147 qpair failed and we were unable to recover it. 00:32:43.147 [2024-07-24 23:18:15.305784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.147 [2024-07-24 23:18:15.306024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.147 [2024-07-24 23:18:15.306036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.147 qpair failed and we were unable to recover it. 00:32:43.147 [2024-07-24 23:18:15.306281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.147 [2024-07-24 23:18:15.306599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.147 [2024-07-24 23:18:15.306611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.147 qpair failed and we were unable to recover it. 
00:32:43.147 [2024-07-24 23:18:15.306926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.147 [2024-07-24 23:18:15.307172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.147 [2024-07-24 23:18:15.307184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.147 qpair failed and we were unable to recover it. 00:32:43.147 [2024-07-24 23:18:15.307443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.147 [2024-07-24 23:18:15.307740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.147 [2024-07-24 23:18:15.307753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.147 qpair failed and we were unable to recover it. 00:32:43.147 [2024-07-24 23:18:15.308046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.147 [2024-07-24 23:18:15.308212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.147 [2024-07-24 23:18:15.308223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.147 qpair failed and we were unable to recover it. 00:32:43.147 [2024-07-24 23:18:15.308489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.147 [2024-07-24 23:18:15.308721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.147 [2024-07-24 23:18:15.308734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.147 qpair failed and we were unable to recover it. 
00:32:43.147 [2024-07-24 23:18:15.309044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.147 [2024-07-24 23:18:15.309329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.147 [2024-07-24 23:18:15.309341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.147 qpair failed and we were unable to recover it. 00:32:43.147 [2024-07-24 23:18:15.309688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.147 [2024-07-24 23:18:15.309929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.147 [2024-07-24 23:18:15.309943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.147 qpair failed and we were unable to recover it. 00:32:43.147 [2024-07-24 23:18:15.310235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.147 [2024-07-24 23:18:15.310502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.147 [2024-07-24 23:18:15.310515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.147 qpair failed and we were unable to recover it. 00:32:43.147 [2024-07-24 23:18:15.310741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.147 [2024-07-24 23:18:15.311045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.147 [2024-07-24 23:18:15.311057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.147 qpair failed and we were unable to recover it. 
00:32:43.147 [2024-07-24 23:18:15.311315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.147 [2024-07-24 23:18:15.311625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.147 [2024-07-24 23:18:15.311637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.147 qpair failed and we were unable to recover it. 00:32:43.147 [2024-07-24 23:18:15.311880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.147 [2024-07-24 23:18:15.312071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.147 [2024-07-24 23:18:15.312083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.147 qpair failed and we were unable to recover it. 00:32:43.147 [2024-07-24 23:18:15.312257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.147 [2024-07-24 23:18:15.312510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.147 [2024-07-24 23:18:15.312522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.147 qpair failed and we were unable to recover it. 00:32:43.147 [2024-07-24 23:18:15.312809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.147 [2024-07-24 23:18:15.313047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.147 [2024-07-24 23:18:15.313059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.147 qpair failed and we were unable to recover it. 
00:32:43.147 [2024-07-24 23:18:15.313290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.147 [2024-07-24 23:18:15.313516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.147 [2024-07-24 23:18:15.313529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.148 qpair failed and we were unable to recover it. 00:32:43.148 [2024-07-24 23:18:15.313843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.148 [2024-07-24 23:18:15.314078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.148 [2024-07-24 23:18:15.314089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.148 qpair failed and we were unable to recover it. 00:32:43.148 [2024-07-24 23:18:15.314352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.148 [2024-07-24 23:18:15.314669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.148 [2024-07-24 23:18:15.314681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.148 qpair failed and we were unable to recover it. 00:32:43.148 [2024-07-24 23:18:15.314944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.148 [2024-07-24 23:18:15.315228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.148 [2024-07-24 23:18:15.315240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.148 qpair failed and we were unable to recover it. 
00:32:43.148 [2024-07-24 23:18:15.315495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.148 [2024-07-24 23:18:15.315787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.148 [2024-07-24 23:18:15.315800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.148 qpair failed and we were unable to recover it. 00:32:43.148 [2024-07-24 23:18:15.316099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.148 [2024-07-24 23:18:15.316346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.148 [2024-07-24 23:18:15.316357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.148 qpair failed and we were unable to recover it. 00:32:43.148 [2024-07-24 23:18:15.316661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.148 [2024-07-24 23:18:15.316964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.148 [2024-07-24 23:18:15.316975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.148 qpair failed and we were unable to recover it. 00:32:43.148 [2024-07-24 23:18:15.317218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.148 [2024-07-24 23:18:15.317459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.148 [2024-07-24 23:18:15.317471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.148 qpair failed and we were unable to recover it. 
00:32:43.148 [2024-07-24 23:18:15.317705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.148 [2024-07-24 23:18:15.318020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.148 [2024-07-24 23:18:15.318032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.148 qpair failed and we were unable to recover it. 00:32:43.148 [2024-07-24 23:18:15.318272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.148 [2024-07-24 23:18:15.318598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.148 [2024-07-24 23:18:15.318611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.148 qpair failed and we were unable to recover it. 00:32:43.148 [2024-07-24 23:18:15.318873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.148 [2024-07-24 23:18:15.319173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.148 [2024-07-24 23:18:15.319185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.148 qpair failed and we were unable to recover it. 00:32:43.148 [2024-07-24 23:18:15.319488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.148 [2024-07-24 23:18:15.319753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.148 [2024-07-24 23:18:15.319766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.148 qpair failed and we were unable to recover it. 
00:32:43.148 [2024-07-24 23:18:15.319950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.148 [2024-07-24 23:18:15.320235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.148 [2024-07-24 23:18:15.320247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.148 qpair failed and we were unable to recover it. 00:32:43.148 [2024-07-24 23:18:15.320517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.148 [2024-07-24 23:18:15.320807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.148 [2024-07-24 23:18:15.320819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.148 qpair failed and we were unable to recover it. 00:32:43.148 [2024-07-24 23:18:15.321064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.148 [2024-07-24 23:18:15.321306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.148 [2024-07-24 23:18:15.321317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.148 qpair failed and we were unable to recover it. 00:32:43.148 [2024-07-24 23:18:15.321489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.148 [2024-07-24 23:18:15.321711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.148 [2024-07-24 23:18:15.321726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.148 qpair failed and we were unable to recover it. 
00:32:43.148 [2024-07-24 23:18:15.321973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.148 [2024-07-24 23:18:15.322257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.148 [2024-07-24 23:18:15.322269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.148 qpair failed and we were unable to recover it. 00:32:43.148 [2024-07-24 23:18:15.322581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.148 [2024-07-24 23:18:15.322828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.148 [2024-07-24 23:18:15.322840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.148 qpair failed and we were unable to recover it. 00:32:43.148 [2024-07-24 23:18:15.323100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.148 [2024-07-24 23:18:15.323331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.148 [2024-07-24 23:18:15.323344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.148 qpair failed and we were unable to recover it. 00:32:43.148 [2024-07-24 23:18:15.323566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.148 [2024-07-24 23:18:15.323818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.148 [2024-07-24 23:18:15.323831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.148 qpair failed and we were unable to recover it. 
00:32:43.148 [2024-07-24 23:18:15.324147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.148 [2024-07-24 23:18:15.324388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.148 [2024-07-24 23:18:15.324400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.148 qpair failed and we were unable to recover it. 00:32:43.148 [2024-07-24 23:18:15.324658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.148 [2024-07-24 23:18:15.324983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.148 [2024-07-24 23:18:15.324995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.148 qpair failed and we were unable to recover it. 00:32:43.148 [2024-07-24 23:18:15.325229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.148 [2024-07-24 23:18:15.325525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.148 [2024-07-24 23:18:15.325537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.148 qpair failed and we were unable to recover it. 00:32:43.148 [2024-07-24 23:18:15.325820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.148 [2024-07-24 23:18:15.326054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.148 [2024-07-24 23:18:15.326065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.148 qpair failed and we were unable to recover it. 
00:32:43.148 [2024-07-24 23:18:15.326310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.148 [2024-07-24 23:18:15.326596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.148 [2024-07-24 23:18:15.326607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.148 qpair failed and we were unable to recover it. 00:32:43.148 [2024-07-24 23:18:15.326802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.148 [2024-07-24 23:18:15.326992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.148 [2024-07-24 23:18:15.327005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.148 qpair failed and we were unable to recover it. 00:32:43.148 [2024-07-24 23:18:15.327249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.148 [2024-07-24 23:18:15.327583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.148 [2024-07-24 23:18:15.327595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.148 qpair failed and we were unable to recover it. 00:32:43.148 [2024-07-24 23:18:15.327830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.148 [2024-07-24 23:18:15.328126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.148 [2024-07-24 23:18:15.328138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.148 qpair failed and we were unable to recover it. 
00:32:43.148 [2024-07-24 23:18:15.328332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.148 [2024-07-24 23:18:15.328576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.148 [2024-07-24 23:18:15.328587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.148 qpair failed and we were unable to recover it. 00:32:43.148 [2024-07-24 23:18:15.328893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.149 [2024-07-24 23:18:15.329181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.149 [2024-07-24 23:18:15.329193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.149 qpair failed and we were unable to recover it. 00:32:43.149 [2024-07-24 23:18:15.329368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.149 [2024-07-24 23:18:15.329654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.149 [2024-07-24 23:18:15.329666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.149 qpair failed and we were unable to recover it. 00:32:43.149 [2024-07-24 23:18:15.330021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.149 [2024-07-24 23:18:15.330306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.149 [2024-07-24 23:18:15.330318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.149 qpair failed and we were unable to recover it. 
00:32:43.149 [2024-07-24 23:18:15.330673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.149 [2024-07-24 23:18:15.330965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.149 [2024-07-24 23:18:15.330978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.149 qpair failed and we were unable to recover it. 00:32:43.149 [2024-07-24 23:18:15.331223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.149 [2024-07-24 23:18:15.331558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.149 [2024-07-24 23:18:15.331570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.149 qpair failed and we were unable to recover it. 00:32:43.149 [2024-07-24 23:18:15.331862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.149 [2024-07-24 23:18:15.332103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.149 [2024-07-24 23:18:15.332115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.149 qpair failed and we were unable to recover it. 00:32:43.149 [2024-07-24 23:18:15.332359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.149 [2024-07-24 23:18:15.332598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.149 [2024-07-24 23:18:15.332610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.149 qpair failed and we were unable to recover it. 
00:32:43.149 [2024-07-24 23:18:15.332773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.149 [2024-07-24 23:18:15.332947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.149 [2024-07-24 23:18:15.332959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.149 qpair failed and we were unable to recover it. 00:32:43.149 [2024-07-24 23:18:15.333179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.149 [2024-07-24 23:18:15.333430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.149 [2024-07-24 23:18:15.333442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.149 qpair failed and we were unable to recover it. 00:32:43.149 [2024-07-24 23:18:15.333734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.149 [2024-07-24 23:18:15.333986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.149 [2024-07-24 23:18:15.333998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.149 qpair failed and we were unable to recover it. 00:32:43.149 [2024-07-24 23:18:15.334312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.149 [2024-07-24 23:18:15.334547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.149 [2024-07-24 23:18:15.334559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.149 qpair failed and we were unable to recover it. 
00:32:43.149 [2024-07-24 23:18:15.334851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.149 [2024-07-24 23:18:15.335107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.149 [2024-07-24 23:18:15.335119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.149 qpair failed and we were unable to recover it. 00:32:43.149 [2024-07-24 23:18:15.335296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.149 [2024-07-24 23:18:15.335469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.149 [2024-07-24 23:18:15.335481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.149 qpair failed and we were unable to recover it. 00:32:43.149 [2024-07-24 23:18:15.335723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.149 [2024-07-24 23:18:15.335992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.149 [2024-07-24 23:18:15.336003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.149 qpair failed and we were unable to recover it. 00:32:43.149 [2024-07-24 23:18:15.336292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.149 [2024-07-24 23:18:15.336471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.149 [2024-07-24 23:18:15.336483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.149 qpair failed and we were unable to recover it. 
00:32:43.149 [2024-07-24 23:18:15.336779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.149 [2024-07-24 23:18:15.336957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.149 [2024-07-24 23:18:15.336969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.149 qpair failed and we were unable to recover it. 00:32:43.149 [2024-07-24 23:18:15.337084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.149 [2024-07-24 23:18:15.337323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.149 [2024-07-24 23:18:15.337335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.149 qpair failed and we were unable to recover it. 00:32:43.149 [2024-07-24 23:18:15.337575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.149 [2024-07-24 23:18:15.337935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.149 [2024-07-24 23:18:15.337947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.149 qpair failed and we were unable to recover it. 00:32:43.149 [2024-07-24 23:18:15.338121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.149 [2024-07-24 23:18:15.338308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.149 [2024-07-24 23:18:15.338320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.149 qpair failed and we were unable to recover it. 
00:32:43.149 [2024-07-24 23:18:15.338610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.149 [2024-07-24 23:18:15.338847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.149 [2024-07-24 23:18:15.338859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.149 qpair failed and we were unable to recover it. 00:32:43.149 [2024-07-24 23:18:15.339099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.149 [2024-07-24 23:18:15.339349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.149 [2024-07-24 23:18:15.339362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.149 qpair failed and we were unable to recover it. 00:32:43.149 [2024-07-24 23:18:15.339528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.149 [2024-07-24 23:18:15.339708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.149 [2024-07-24 23:18:15.339730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.149 qpair failed and we were unable to recover it. 00:32:43.149 [2024-07-24 23:18:15.339985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.149 [2024-07-24 23:18:15.340241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.149 [2024-07-24 23:18:15.340253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.149 qpair failed and we were unable to recover it. 
00:32:43.149 [2024-07-24 23:18:15.340480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.149 [2024-07-24 23:18:15.340655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.149 [2024-07-24 23:18:15.340667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.149 qpair failed and we were unable to recover it. 00:32:43.149 [2024-07-24 23:18:15.340940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.149 [2024-07-24 23:18:15.341205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.149 [2024-07-24 23:18:15.341226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2315f90 with addr=10.0.0.2, port=4420 00:32:43.149 qpair failed and we were unable to recover it. 00:32:43.149 [2024-07-24 23:18:15.341624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.149 [2024-07-24 23:18:15.341897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.149 [2024-07-24 23:18:15.341921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:43.149 qpair failed and we were unable to recover it. 00:32:43.149 [2024-07-24 23:18:15.342160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.149 [2024-07-24 23:18:15.342457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.149 [2024-07-24 23:18:15.342473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:43.149 qpair failed and we were unable to recover it. 
00:32:43.149 [2024-07-24 23:18:15.342812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.149 [2024-07-24 23:18:15.343052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.149 [2024-07-24 23:18:15.343068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:43.149 qpair failed and we were unable to recover it. 00:32:43.149 [2024-07-24 23:18:15.343266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.149 [2024-07-24 23:18:15.343503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.150 [2024-07-24 23:18:15.343518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:43.150 qpair failed and we were unable to recover it. 00:32:43.150 [2024-07-24 23:18:15.343819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.150 [2024-07-24 23:18:15.344160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.150 [2024-07-24 23:18:15.344176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:43.150 qpair failed and we were unable to recover it. 00:32:43.150 [2024-07-24 23:18:15.344377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.150 [2024-07-24 23:18:15.344539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.150 [2024-07-24 23:18:15.344555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:43.150 qpair failed and we were unable to recover it. 
00:32:43.150 [2024-07-24 23:18:15.344795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.150 [2024-07-24 23:18:15.345041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.150 [2024-07-24 23:18:15.345057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:43.150 qpair failed and we were unable to recover it. 00:32:43.150 [2024-07-24 23:18:15.345320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.150 [2024-07-24 23:18:15.345549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.150 [2024-07-24 23:18:15.345565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:43.150 qpair failed and we were unable to recover it. 00:32:43.150 [2024-07-24 23:18:15.345862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.150 [2024-07-24 23:18:15.346088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.150 [2024-07-24 23:18:15.346105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:43.150 qpair failed and we were unable to recover it. 00:32:43.150 [2024-07-24 23:18:15.346429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.150 [2024-07-24 23:18:15.346671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.150 [2024-07-24 23:18:15.346687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:43.150 qpair failed and we were unable to recover it. 
00:32:43.150 [2024-07-24 23:18:15.346857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.150 [2024-07-24 23:18:15.347130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.150 [2024-07-24 23:18:15.347146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:43.150 qpair failed and we were unable to recover it. 00:32:43.150 [2024-07-24 23:18:15.347403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.150 [2024-07-24 23:18:15.347627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.150 [2024-07-24 23:18:15.347642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:43.150 qpair failed and we were unable to recover it. 00:32:43.150 [2024-07-24 23:18:15.347761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.150 [2024-07-24 23:18:15.348028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.150 [2024-07-24 23:18:15.348044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:43.150 qpair failed and we were unable to recover it. 00:32:43.150 [2024-07-24 23:18:15.348149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.150 [2024-07-24 23:18:15.348400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.150 [2024-07-24 23:18:15.348416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:43.150 qpair failed and we were unable to recover it. 
00:32:43.150 [2024-07-24 23:18:15.348529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.150 [2024-07-24 23:18:15.348707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.150 [2024-07-24 23:18:15.348727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:43.150 qpair failed and we were unable to recover it. 00:32:43.150 [2024-07-24 23:18:15.348975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.150 [2024-07-24 23:18:15.349169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.150 [2024-07-24 23:18:15.349184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:43.150 qpair failed and we were unable to recover it. 00:32:43.150 [2024-07-24 23:18:15.349467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.150 [2024-07-24 23:18:15.349700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.150 [2024-07-24 23:18:15.349721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:43.150 qpair failed and we were unable to recover it. 00:32:43.150 [2024-07-24 23:18:15.349899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.150 [2024-07-24 23:18:15.350127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.150 [2024-07-24 23:18:15.350142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:43.150 qpair failed and we were unable to recover it. 
00:32:43.150 [2024-07-24 23:18:15.350416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.150 [2024-07-24 23:18:15.350589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.150 [2024-07-24 23:18:15.350605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:43.150 qpair failed and we were unable to recover it. 00:32:43.150 [2024-07-24 23:18:15.350838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.150 [2024-07-24 23:18:15.351019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.150 [2024-07-24 23:18:15.351034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:43.150 qpair failed and we were unable to recover it. 00:32:43.150 [2024-07-24 23:18:15.351284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.150 [2024-07-24 23:18:15.351549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.150 [2024-07-24 23:18:15.351565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:43.150 qpair failed and we were unable to recover it. 00:32:43.150 [2024-07-24 23:18:15.351809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.150 [2024-07-24 23:18:15.352054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.150 [2024-07-24 23:18:15.352070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:43.150 qpair failed and we were unable to recover it. 
00:32:43.150 [2024-07-24 23:18:15.352342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.150 [2024-07-24 23:18:15.352656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.150 [2024-07-24 23:18:15.352671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:43.150 qpair failed and we were unable to recover it. 00:32:43.150 [2024-07-24 23:18:15.352853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.150 [2024-07-24 23:18:15.353100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.150 [2024-07-24 23:18:15.353116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:43.150 qpair failed and we were unable to recover it. 00:32:43.150 [2024-07-24 23:18:15.353354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.150 [2024-07-24 23:18:15.353579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.150 [2024-07-24 23:18:15.353595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:43.150 qpair failed and we were unable to recover it. 00:32:43.150 [2024-07-24 23:18:15.353932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.150 [2024-07-24 23:18:15.354159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.150 [2024-07-24 23:18:15.354175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:43.150 qpair failed and we were unable to recover it. 
00:32:43.150 [2024-07-24 23:18:15.354361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.150 [2024-07-24 23:18:15.354683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.150 [2024-07-24 23:18:15.354699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:43.150 qpair failed and we were unable to recover it. 00:32:43.150 [2024-07-24 23:18:15.354974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.151 [2024-07-24 23:18:15.355229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.151 [2024-07-24 23:18:15.355244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:43.151 qpair failed and we were unable to recover it. 00:32:43.151 [2024-07-24 23:18:15.355411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.151 [2024-07-24 23:18:15.355672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.151 [2024-07-24 23:18:15.355688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:43.151 qpair failed and we were unable to recover it. 00:32:43.151 [2024-07-24 23:18:15.355943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.151 [2024-07-24 23:18:15.356184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.151 [2024-07-24 23:18:15.356200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:43.151 qpair failed and we were unable to recover it. 
00:32:43.151 [2024-07-24 23:18:15.356450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.151 [2024-07-24 23:18:15.356701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.151 [2024-07-24 23:18:15.356721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:43.151 qpair failed and we were unable to recover it. 00:32:43.151 [2024-07-24 23:18:15.356983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.151 [2024-07-24 23:18:15.357143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.151 [2024-07-24 23:18:15.357158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:43.151 qpair failed and we were unable to recover it. 00:32:43.151 [2024-07-24 23:18:15.357393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.151 [2024-07-24 23:18:15.357748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.151 [2024-07-24 23:18:15.357764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:43.151 qpair failed and we were unable to recover it. 00:32:43.151 [2024-07-24 23:18:15.358011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.151 [2024-07-24 23:18:15.358282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.151 [2024-07-24 23:18:15.358298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:43.151 qpair failed and we were unable to recover it. 
00:32:43.151 [2024-07-24 23:18:15.358554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.151 [2024-07-24 23:18:15.358749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.151 [2024-07-24 23:18:15.358765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:43.151 qpair failed and we were unable to recover it. 00:32:43.151 [2024-07-24 23:18:15.359061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.151 [2024-07-24 23:18:15.359382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.151 [2024-07-24 23:18:15.359398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:43.151 qpair failed and we were unable to recover it. 00:32:43.151 [2024-07-24 23:18:15.359573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.151 [2024-07-24 23:18:15.359915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.151 [2024-07-24 23:18:15.359931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:43.151 qpair failed and we were unable to recover it. 00:32:43.151 [2024-07-24 23:18:15.360202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.151 [2024-07-24 23:18:15.360521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.151 [2024-07-24 23:18:15.360536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:43.151 qpair failed and we were unable to recover it. 
00:32:43.151 [2024-07-24 23:18:15.360836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.151 [2024-07-24 23:18:15.361063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.151 [2024-07-24 23:18:15.361079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:43.151 qpair failed and we were unable to recover it. 00:32:43.151 [2024-07-24 23:18:15.361353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.151 [2024-07-24 23:18:15.361594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.151 [2024-07-24 23:18:15.361610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:43.151 qpair failed and we were unable to recover it. 00:32:43.151 [2024-07-24 23:18:15.361932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.151 [2024-07-24 23:18:15.362249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.151 [2024-07-24 23:18:15.362265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:43.151 qpair failed and we were unable to recover it. 00:32:43.151 [2024-07-24 23:18:15.362431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.151 [2024-07-24 23:18:15.362673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.151 [2024-07-24 23:18:15.362689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:43.151 qpair failed and we were unable to recover it. 
00:32:43.151 [2024-07-24 23:18:15.363017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.151 [2024-07-24 23:18:15.363244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.151 [2024-07-24 23:18:15.363259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:43.151 qpair failed and we were unable to recover it. 00:32:43.151 [2024-07-24 23:18:15.363545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.151 [2024-07-24 23:18:15.363842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.151 [2024-07-24 23:18:15.363858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:43.151 qpair failed and we were unable to recover it. 00:32:43.151 [2024-07-24 23:18:15.364089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.151 [2024-07-24 23:18:15.364402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.151 [2024-07-24 23:18:15.364418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:43.151 qpair failed and we were unable to recover it. 00:32:43.151 [2024-07-24 23:18:15.364662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.151 [2024-07-24 23:18:15.364976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.151 [2024-07-24 23:18:15.364992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:43.151 qpair failed and we were unable to recover it. 
00:32:43.151 [2024-07-24 23:18:15.365267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.151 [2024-07-24 23:18:15.365570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.151 [2024-07-24 23:18:15.365586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:43.151 qpair failed and we were unable to recover it. 00:32:43.151 [2024-07-24 23:18:15.365765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.151 [2024-07-24 23:18:15.365961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.151 [2024-07-24 23:18:15.365976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:43.151 qpair failed and we were unable to recover it. 00:32:43.151 [2024-07-24 23:18:15.366232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.151 [2024-07-24 23:18:15.366527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.151 [2024-07-24 23:18:15.366543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:43.151 qpair failed and we were unable to recover it. 00:32:43.151 [2024-07-24 23:18:15.366865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.151 [2024-07-24 23:18:15.367101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.151 [2024-07-24 23:18:15.367119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:43.151 qpair failed and we were unable to recover it. 
00:32:43.151 [2024-07-24 23:18:15.367424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.151 [2024-07-24 23:18:15.367665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.151 [2024-07-24 23:18:15.367681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:43.151 qpair failed and we were unable to recover it. 00:32:43.151 [2024-07-24 23:18:15.367987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.151 [2024-07-24 23:18:15.368319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.151 [2024-07-24 23:18:15.368336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:43.151 qpair failed and we were unable to recover it. 00:32:43.151 [2024-07-24 23:18:15.368604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.151 [2024-07-24 23:18:15.368838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.151 [2024-07-24 23:18:15.368855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:43.151 qpair failed and we were unable to recover it. 00:32:43.151 [2024-07-24 23:18:15.369114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.151 [2024-07-24 23:18:15.369232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.151 [2024-07-24 23:18:15.369248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:43.151 qpair failed and we were unable to recover it. 
00:32:43.151 [2024-07-24 23:18:15.369423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.151 [2024-07-24 23:18:15.369738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.151 [2024-07-24 23:18:15.369754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:43.151 qpair failed and we were unable to recover it. 00:32:43.151 [2024-07-24 23:18:15.369989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.152 [2024-07-24 23:18:15.370242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.152 [2024-07-24 23:18:15.370258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:43.152 qpair failed and we were unable to recover it. 00:32:43.152 [2024-07-24 23:18:15.370441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.152 [2024-07-24 23:18:15.370735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.152 [2024-07-24 23:18:15.370751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:43.152 qpair failed and we were unable to recover it. 00:32:43.152 [2024-07-24 23:18:15.370852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.152 [2024-07-24 23:18:15.371150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.152 [2024-07-24 23:18:15.371166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:43.152 qpair failed and we were unable to recover it. 
00:32:43.152 [2024-07-24 23:18:15.371430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.152 [2024-07-24 23:18:15.371678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.152 [2024-07-24 23:18:15.371693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:43.152 qpair failed and we were unable to recover it. 00:32:43.152 [2024-07-24 23:18:15.371959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.152 [2024-07-24 23:18:15.372276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.152 [2024-07-24 23:18:15.372294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:43.152 qpair failed and we were unable to recover it. 00:32:43.152 [2024-07-24 23:18:15.372458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.152 [2024-07-24 23:18:15.372755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.152 [2024-07-24 23:18:15.372771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:43.152 qpair failed and we were unable to recover it. 00:32:43.152 [2024-07-24 23:18:15.373020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.152 [2024-07-24 23:18:15.373213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.152 [2024-07-24 23:18:15.373229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:43.152 qpair failed and we were unable to recover it. 
00:32:43.152 [2024-07-24 23:18:15.373527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.152 [2024-07-24 23:18:15.373819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.152 [2024-07-24 23:18:15.373836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:43.152 qpair failed and we were unable to recover it. 00:32:43.152 [2024-07-24 23:18:15.374083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.152 [2024-07-24 23:18:15.374376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.152 [2024-07-24 23:18:15.374392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:43.152 qpair failed and we were unable to recover it. 00:32:43.152 [2024-07-24 23:18:15.374556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.152 [2024-07-24 23:18:15.374876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.152 [2024-07-24 23:18:15.374893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:43.152 qpair failed and we were unable to recover it. 00:32:43.152 [2024-07-24 23:18:15.375096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.152 [2024-07-24 23:18:15.375394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.152 [2024-07-24 23:18:15.375411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:43.152 qpair failed and we were unable to recover it. 
00:32:43.152 [2024-07-24 23:18:15.375643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.152 [2024-07-24 23:18:15.375870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.152 [2024-07-24 23:18:15.375886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:43.152 qpair failed and we were unable to recover it. 00:32:43.152 [2024-07-24 23:18:15.376183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.152 [2024-07-24 23:18:15.376502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.152 [2024-07-24 23:18:15.376518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:43.152 qpair failed and we were unable to recover it. 00:32:43.152 [2024-07-24 23:18:15.376794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.152 [2024-07-24 23:18:15.377062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.152 [2024-07-24 23:18:15.377078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:43.152 qpair failed and we were unable to recover it. 00:32:43.152 [2024-07-24 23:18:15.377331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.152 [2024-07-24 23:18:15.377507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.152 [2024-07-24 23:18:15.377529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:43.152 qpair failed and we were unable to recover it. 
00:32:43.152 [2024-07-24 23:18:15.377782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.152 [2024-07-24 23:18:15.378092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.152 [2024-07-24 23:18:15.378108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:43.152 qpair failed and we were unable to recover it. 00:32:43.152 [2024-07-24 23:18:15.378433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.152 [2024-07-24 23:18:15.378606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.152 [2024-07-24 23:18:15.378622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:43.152 qpair failed and we were unable to recover it. 00:32:43.152 [2024-07-24 23:18:15.378788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.152 [2024-07-24 23:18:15.379129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.152 [2024-07-24 23:18:15.379145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:43.152 qpair failed and we were unable to recover it. 00:32:43.152 [2024-07-24 23:18:15.379412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.152 [2024-07-24 23:18:15.379663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.152 [2024-07-24 23:18:15.379679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:43.152 qpair failed and we were unable to recover it. 
00:32:43.152 [2024-07-24 23:18:15.379928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.152 [2024-07-24 23:18:15.380242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.152 [2024-07-24 23:18:15.380258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:43.152 qpair failed and we were unable to recover it. 00:32:43.152 [2024-07-24 23:18:15.380508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.152 [2024-07-24 23:18:15.380754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.152 [2024-07-24 23:18:15.380770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:43.152 qpair failed and we were unable to recover it. 00:32:43.152 [2024-07-24 23:18:15.381034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.152 [2024-07-24 23:18:15.381274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.152 [2024-07-24 23:18:15.381290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:43.152 qpair failed and we were unable to recover it. 00:32:43.152 [2024-07-24 23:18:15.381525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.152 [2024-07-24 23:18:15.381752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.152 [2024-07-24 23:18:15.381768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420 00:32:43.152 qpair failed and we were unable to recover it. 
00:32:43.152 [2024-07-24 23:18:15.382017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.152 [2024-07-24 23:18:15.382246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.152 [2024-07-24 23:18:15.382262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:43.152 qpair failed and we were unable to recover it.
00:32:43.152 [2024-07-24 23:18:15.382565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.152 [2024-07-24 23:18:15.382867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.152 [2024-07-24 23:18:15.382885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:43.152 qpair failed and we were unable to recover it.
00:32:43.152 [2024-07-24 23:18:15.383177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.152 [2024-07-24 23:18:15.383415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.152 [2024-07-24 23:18:15.383431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:43.152 qpair failed and we were unable to recover it.
00:32:43.152 [2024-07-24 23:18:15.383660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.152 [2024-07-24 23:18:15.383954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.152 [2024-07-24 23:18:15.383971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:43.152 qpair failed and we were unable to recover it.
00:32:43.152 [2024-07-24 23:18:15.384218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.152 [2024-07-24 23:18:15.384445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.152 [2024-07-24 23:18:15.384461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:43.152 qpair failed and we were unable to recover it.
00:32:43.152 [2024-07-24 23:18:15.384647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.152 [2024-07-24 23:18:15.384811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.152 [2024-07-24 23:18:15.384827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:43.153 qpair failed and we were unable to recover it.
00:32:43.153 [2024-07-24 23:18:15.385172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.153 [2024-07-24 23:18:15.385465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.153 [2024-07-24 23:18:15.385481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:43.153 qpair failed and we were unable to recover it.
00:32:43.153 [2024-07-24 23:18:15.385733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.153 [2024-07-24 23:18:15.385988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.153 [2024-07-24 23:18:15.386005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:43.153 qpair failed and we were unable to recover it.
00:32:43.153 [2024-07-24 23:18:15.386252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.153 [2024-07-24 23:18:15.386423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.153 [2024-07-24 23:18:15.386439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:43.153 qpair failed and we were unable to recover it.
00:32:43.153 [2024-07-24 23:18:15.386694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.153 [2024-07-24 23:18:15.386947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.153 [2024-07-24 23:18:15.386963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:43.153 qpair failed and we were unable to recover it.
00:32:43.153 [2024-07-24 23:18:15.387260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.153 [2024-07-24 23:18:15.387504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.153 [2024-07-24 23:18:15.387520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:43.153 qpair failed and we were unable to recover it.
00:32:43.153 [2024-07-24 23:18:15.387749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.153 [2024-07-24 23:18:15.387924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.153 [2024-07-24 23:18:15.387940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:43.153 qpair failed and we were unable to recover it.
00:32:43.153 [2024-07-24 23:18:15.388126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.153 [2024-07-24 23:18:15.388387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.153 [2024-07-24 23:18:15.388403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:43.153 qpair failed and we were unable to recover it.
00:32:43.153 [2024-07-24 23:18:15.388727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.153 [2024-07-24 23:18:15.388951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.153 [2024-07-24 23:18:15.388967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:43.153 qpair failed and we were unable to recover it.
00:32:43.153 [2024-07-24 23:18:15.389163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.153 [2024-07-24 23:18:15.389476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.153 [2024-07-24 23:18:15.389492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:43.153 qpair failed and we were unable to recover it.
00:32:43.153 [2024-07-24 23:18:15.389679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.153 [2024-07-24 23:18:15.389998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.153 [2024-07-24 23:18:15.390014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:43.153 qpair failed and we were unable to recover it.
00:32:43.153 [2024-07-24 23:18:15.390311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.153 [2024-07-24 23:18:15.390559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.153 [2024-07-24 23:18:15.390575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:43.153 qpair failed and we were unable to recover it.
00:32:43.153 [2024-07-24 23:18:15.390688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.153 [2024-07-24 23:18:15.390849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.153 [2024-07-24 23:18:15.390865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:43.153 qpair failed and we were unable to recover it.
00:32:43.153 [2024-07-24 23:18:15.391192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.153 [2024-07-24 23:18:15.391297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.153 [2024-07-24 23:18:15.391312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:43.153 qpair failed and we were unable to recover it.
00:32:43.153 [2024-07-24 23:18:15.391568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.153 [2024-07-24 23:18:15.391869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.153 [2024-07-24 23:18:15.391885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:43.153 qpair failed and we were unable to recover it.
00:32:43.153 [2024-07-24 23:18:15.392126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.153 [2024-07-24 23:18:15.392369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.153 [2024-07-24 23:18:15.392385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:43.153 qpair failed and we were unable to recover it.
00:32:43.153 [2024-07-24 23:18:15.392636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.153 [2024-07-24 23:18:15.392945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.153 [2024-07-24 23:18:15.392961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:43.153 qpair failed and we were unable to recover it.
00:32:43.153 [2024-07-24 23:18:15.393161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.153 [2024-07-24 23:18:15.393399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.153 [2024-07-24 23:18:15.393415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:43.153 qpair failed and we were unable to recover it.
00:32:43.153 [2024-07-24 23:18:15.393738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.153 [2024-07-24 23:18:15.394056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.153 [2024-07-24 23:18:15.394072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:43.153 qpair failed and we were unable to recover it.
00:32:43.153 [2024-07-24 23:18:15.394388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.153 [2024-07-24 23:18:15.394630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.153 [2024-07-24 23:18:15.394645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:43.153 qpair failed and we were unable to recover it.
00:32:43.153 [2024-07-24 23:18:15.394889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.153 [2024-07-24 23:18:15.395142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.153 [2024-07-24 23:18:15.395158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:43.153 qpair failed and we were unable to recover it.
00:32:43.153 [2024-07-24 23:18:15.395480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.153 [2024-07-24 23:18:15.395724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.153 [2024-07-24 23:18:15.395741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:43.153 qpair failed and we were unable to recover it.
00:32:43.153 [2024-07-24 23:18:15.395975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.153 [2024-07-24 23:18:15.396280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.153 [2024-07-24 23:18:15.396296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:43.153 qpair failed and we were unable to recover it.
00:32:43.153 [2024-07-24 23:18:15.396616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.153 [2024-07-24 23:18:15.396844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.153 [2024-07-24 23:18:15.396860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:43.153 qpair failed and we were unable to recover it.
00:32:43.153 [2024-07-24 23:18:15.397107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.153 [2024-07-24 23:18:15.397337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.153 [2024-07-24 23:18:15.397354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:43.153 qpair failed and we were unable to recover it.
00:32:43.153 [2024-07-24 23:18:15.397618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.153 [2024-07-24 23:18:15.397927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.153 [2024-07-24 23:18:15.397943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:43.153 qpair failed and we were unable to recover it.
00:32:43.153 [2024-07-24 23:18:15.398177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.153 [2024-07-24 23:18:15.398350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.153 [2024-07-24 23:18:15.398366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:43.153 qpair failed and we were unable to recover it.
00:32:43.153 [2024-07-24 23:18:15.398564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.153 [2024-07-24 23:18:15.398805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.153 [2024-07-24 23:18:15.398821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:43.153 qpair failed and we were unable to recover it.
00:32:43.153 [2024-07-24 23:18:15.399122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.153 [2024-07-24 23:18:15.399366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.153 [2024-07-24 23:18:15.399382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:43.153 qpair failed and we were unable to recover it.
00:32:43.153 [2024-07-24 23:18:15.399626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.154 [2024-07-24 23:18:15.399791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.154 [2024-07-24 23:18:15.399808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:43.154 qpair failed and we were unable to recover it.
00:32:43.154 [2024-07-24 23:18:15.400043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.154 [2024-07-24 23:18:15.400358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.154 [2024-07-24 23:18:15.400374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:43.154 qpair failed and we were unable to recover it.
00:32:43.154 [2024-07-24 23:18:15.400631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.154 [2024-07-24 23:18:15.400937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.154 [2024-07-24 23:18:15.400953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:43.154 qpair failed and we were unable to recover it.
00:32:43.154 [2024-07-24 23:18:15.401266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.154 [2024-07-24 23:18:15.401510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.154 [2024-07-24 23:18:15.401526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:43.154 qpair failed and we were unable to recover it.
00:32:43.154 [2024-07-24 23:18:15.401788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.154 [2024-07-24 23:18:15.402080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.154 [2024-07-24 23:18:15.402096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:43.154 qpair failed and we were unable to recover it.
00:32:43.154 [2024-07-24 23:18:15.402270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.154 [2024-07-24 23:18:15.402510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.154 [2024-07-24 23:18:15.402526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:43.154 qpair failed and we were unable to recover it.
00:32:43.154 [2024-07-24 23:18:15.402847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.154 [2024-07-24 23:18:15.403073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.154 [2024-07-24 23:18:15.403090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:43.154 qpair failed and we were unable to recover it.
00:32:43.154 [2024-07-24 23:18:15.403285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.154 [2024-07-24 23:18:15.403578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.154 [2024-07-24 23:18:15.403595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:43.154 qpair failed and we were unable to recover it.
00:32:43.154 [2024-07-24 23:18:15.403898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.154 [2024-07-24 23:18:15.404237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.154 [2024-07-24 23:18:15.404254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:43.154 qpair failed and we were unable to recover it.
00:32:43.154 [2024-07-24 23:18:15.404574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.154 [2024-07-24 23:18:15.404896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.154 [2024-07-24 23:18:15.404913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:43.154 qpair failed and we were unable to recover it.
00:32:43.154 [2024-07-24 23:18:15.405143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.154 [2024-07-24 23:18:15.405438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.154 [2024-07-24 23:18:15.405454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:43.154 qpair failed and we were unable to recover it.
00:32:43.154 [2024-07-24 23:18:15.405645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.154 [2024-07-24 23:18:15.405891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.154 [2024-07-24 23:18:15.405907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:43.154 qpair failed and we were unable to recover it.
00:32:43.154 [2024-07-24 23:18:15.406149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.154 [2024-07-24 23:18:15.406397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.154 [2024-07-24 23:18:15.406414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:43.154 qpair failed and we were unable to recover it.
00:32:43.154 [2024-07-24 23:18:15.406665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.154 [2024-07-24 23:18:15.406898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.154 [2024-07-24 23:18:15.406914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:43.154 qpair failed and we were unable to recover it.
00:32:43.154 [2024-07-24 23:18:15.407159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.154 [2024-07-24 23:18:15.407352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.154 [2024-07-24 23:18:15.407368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:43.154 qpair failed and we were unable to recover it.
00:32:43.154 [2024-07-24 23:18:15.407600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.154 [2024-07-24 23:18:15.407916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.154 [2024-07-24 23:18:15.407932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:43.154 qpair failed and we were unable to recover it.
00:32:43.154 [2024-07-24 23:18:15.408107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.154 [2024-07-24 23:18:15.408346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.154 [2024-07-24 23:18:15.408363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:43.154 qpair failed and we were unable to recover it.
00:32:43.154 [2024-07-24 23:18:15.408709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.154 [2024-07-24 23:18:15.408965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.154 [2024-07-24 23:18:15.408982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:43.154 qpair failed and we were unable to recover it.
00:32:43.154 [2024-07-24 23:18:15.409305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.154 [2024-07-24 23:18:15.409498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.154 [2024-07-24 23:18:15.409514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:43.154 qpair failed and we were unable to recover it.
00:32:43.154 [2024-07-24 23:18:15.409781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.154 [2024-07-24 23:18:15.410027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.154 [2024-07-24 23:18:15.410043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:43.154 qpair failed and we were unable to recover it.
00:32:43.154 [2024-07-24 23:18:15.410288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.154 [2024-07-24 23:18:15.410582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.154 [2024-07-24 23:18:15.410598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:43.154 qpair failed and we were unable to recover it.
00:32:43.154 [2024-07-24 23:18:15.410828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.154 [2024-07-24 23:18:15.410941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.154 [2024-07-24 23:18:15.410958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:43.154 qpair failed and we were unable to recover it.
00:32:43.154 [2024-07-24 23:18:15.411065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.154 [2024-07-24 23:18:15.411299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.154 [2024-07-24 23:18:15.411315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:43.154 qpair failed and we were unable to recover it.
00:32:43.154 [2024-07-24 23:18:15.411549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.154 [2024-07-24 23:18:15.411797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.154 [2024-07-24 23:18:15.411813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:43.154 qpair failed and we were unable to recover it.
00:32:43.154 [2024-07-24 23:18:15.412110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.154 [2024-07-24 23:18:15.412340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.155 [2024-07-24 23:18:15.412357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:43.155 qpair failed and we were unable to recover it.
00:32:43.155 [2024-07-24 23:18:15.412610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.155 [2024-07-24 23:18:15.412905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.155 [2024-07-24 23:18:15.412922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:43.155 qpair failed and we were unable to recover it.
00:32:43.155 [2024-07-24 23:18:15.413245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.155 [2024-07-24 23:18:15.413471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.155 [2024-07-24 23:18:15.413487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:43.155 qpair failed and we were unable to recover it.
00:32:43.155 [2024-07-24 23:18:15.413742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.155 [2024-07-24 23:18:15.414069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.155 [2024-07-24 23:18:15.414085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:43.155 qpair failed and we were unable to recover it.
00:32:43.155 [2024-07-24 23:18:15.414341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.155 [2024-07-24 23:18:15.414586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.155 [2024-07-24 23:18:15.414602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:43.155 qpair failed and we were unable to recover it.
00:32:43.155 [2024-07-24 23:18:15.414850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.155 [2024-07-24 23:18:15.415091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.155 [2024-07-24 23:18:15.415107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:43.155 qpair failed and we were unable to recover it.
00:32:43.155 [2024-07-24 23:18:15.415335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.155 [2024-07-24 23:18:15.415630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.155 [2024-07-24 23:18:15.415647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:43.155 qpair failed and we were unable to recover it.
00:32:43.155 [2024-07-24 23:18:15.415904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.155 [2024-07-24 23:18:15.416060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.155 [2024-07-24 23:18:15.416076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:43.155 qpair failed and we were unable to recover it.
00:32:43.155 [2024-07-24 23:18:15.416371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.155 [2024-07-24 23:18:15.416546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.155 [2024-07-24 23:18:15.416562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:43.155 qpair failed and we were unable to recover it.
00:32:43.155 [2024-07-24 23:18:15.416863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.155 [2024-07-24 23:18:15.417184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.155 [2024-07-24 23:18:15.417200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:43.155 qpair failed and we were unable to recover it.
00:32:43.155 [2024-07-24 23:18:15.417359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.155 [2024-07-24 23:18:15.417531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.155 [2024-07-24 23:18:15.417548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:43.155 qpair failed and we were unable to recover it.
00:32:43.155 [2024-07-24 23:18:15.417802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.155 [2024-07-24 23:18:15.417993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.155 [2024-07-24 23:18:15.418010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:43.155 qpair failed and we were unable to recover it.
00:32:43.155 [2024-07-24 23:18:15.418258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.155 [2024-07-24 23:18:15.418583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.155 [2024-07-24 23:18:15.418600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:43.155 qpair failed and we were unable to recover it.
00:32:43.155 [2024-07-24 23:18:15.418899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.155 [2024-07-24 23:18:15.419153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.155 [2024-07-24 23:18:15.419170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8f0000b90 with addr=10.0.0.2, port=4420
00:32:43.155 qpair failed and we were unable to recover it.
00:32:43.155 A controller has encountered a failure and is being reset.
00:32:43.155 [2024-07-24 23:18:15.419451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.155 [2024-07-24 23:18:15.419767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.155 [2024-07-24 23:18:15.419780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.155 qpair failed and we were unable to recover it. 00:32:43.155 [2024-07-24 23:18:15.419970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.155 [2024-07-24 23:18:15.420279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.155 [2024-07-24 23:18:15.420291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.155 qpair failed and we were unable to recover it. 00:32:43.155 [2024-07-24 23:18:15.420586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.155 [2024-07-24 23:18:15.420822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.155 [2024-07-24 23:18:15.420834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.155 qpair failed and we were unable to recover it. 00:32:43.155 [2024-07-24 23:18:15.421075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.155 [2024-07-24 23:18:15.421381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.155 [2024-07-24 23:18:15.421393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.155 qpair failed and we were unable to recover it. 
00:32:43.155 [2024-07-24 23:18:15.421697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.155 [2024-07-24 23:18:15.422032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.155 [2024-07-24 23:18:15.422044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.155 qpair failed and we were unable to recover it. 00:32:43.155 [2024-07-24 23:18:15.422361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.155 [2024-07-24 23:18:15.422593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.155 [2024-07-24 23:18:15.422604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.155 qpair failed and we were unable to recover it. 00:32:43.155 [2024-07-24 23:18:15.422827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.155 [2024-07-24 23:18:15.422994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.155 [2024-07-24 23:18:15.423006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.155 qpair failed and we were unable to recover it. 00:32:43.155 [2024-07-24 23:18:15.423268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.155 [2024-07-24 23:18:15.423495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.155 [2024-07-24 23:18:15.423507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.155 qpair failed and we were unable to recover it. 
00:32:43.155 [2024-07-24 23:18:15.423753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.155 [2024-07-24 23:18:15.424013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.155 [2024-07-24 23:18:15.424026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.155 qpair failed and we were unable to recover it. 00:32:43.155 [2024-07-24 23:18:15.424266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.155 [2024-07-24 23:18:15.424487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.155 [2024-07-24 23:18:15.424499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.155 qpair failed and we were unable to recover it. 00:32:43.155 [2024-07-24 23:18:15.424737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.155 [2024-07-24 23:18:15.424888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.155 [2024-07-24 23:18:15.424900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.155 qpair failed and we were unable to recover it. 00:32:43.155 [2024-07-24 23:18:15.425121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.155 [2024-07-24 23:18:15.425431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.155 [2024-07-24 23:18:15.425443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.155 qpair failed and we were unable to recover it. 
00:32:43.155 [2024-07-24 23:18:15.425701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.155 [2024-07-24 23:18:15.425955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.155 [2024-07-24 23:18:15.425967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.155 qpair failed and we were unable to recover it. 00:32:43.155 [2024-07-24 23:18:15.426236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.155 [2024-07-24 23:18:15.426405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.155 [2024-07-24 23:18:15.426417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.155 qpair failed and we were unable to recover it. 00:32:43.155 [2024-07-24 23:18:15.426664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.155 [2024-07-24 23:18:15.426971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.156 [2024-07-24 23:18:15.426983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.156 qpair failed and we were unable to recover it. 00:32:43.156 [2024-07-24 23:18:15.427245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.156 [2024-07-24 23:18:15.427551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.156 [2024-07-24 23:18:15.427563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.156 qpair failed and we were unable to recover it. 
00:32:43.156 [2024-07-24 23:18:15.427747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.156 [2024-07-24 23:18:15.427998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.156 [2024-07-24 23:18:15.428010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.156 qpair failed and we were unable to recover it. 00:32:43.156 [2024-07-24 23:18:15.428325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.156 [2024-07-24 23:18:15.428495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.156 [2024-07-24 23:18:15.428508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.156 qpair failed and we were unable to recover it. 00:32:43.156 [2024-07-24 23:18:15.428681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.156 [2024-07-24 23:18:15.428863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.156 [2024-07-24 23:18:15.428875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.156 qpair failed and we were unable to recover it. 00:32:43.156 [2024-07-24 23:18:15.429099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.156 [2024-07-24 23:18:15.429262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.156 [2024-07-24 23:18:15.429274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.156 qpair failed and we were unable to recover it. 
00:32:43.156 [2024-07-24 23:18:15.429555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.156 [2024-07-24 23:18:15.429796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.156 [2024-07-24 23:18:15.429808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.156 qpair failed and we were unable to recover it. 00:32:43.156 [2024-07-24 23:18:15.430126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.156 [2024-07-24 23:18:15.430358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.156 [2024-07-24 23:18:15.430370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.156 qpair failed and we were unable to recover it. 00:32:43.156 [2024-07-24 23:18:15.430633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.156 [2024-07-24 23:18:15.430942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.156 [2024-07-24 23:18:15.430955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.156 qpair failed and we were unable to recover it. 00:32:43.156 [2024-07-24 23:18:15.431267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.156 [2024-07-24 23:18:15.431496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.156 [2024-07-24 23:18:15.431507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.156 qpair failed and we were unable to recover it. 
00:32:43.156 [2024-07-24 23:18:15.431685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.156 [2024-07-24 23:18:15.431851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.156 [2024-07-24 23:18:15.431863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.156 qpair failed and we were unable to recover it. 00:32:43.156 [2024-07-24 23:18:15.432105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.156 [2024-07-24 23:18:15.432354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.156 [2024-07-24 23:18:15.432367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.156 qpair failed and we were unable to recover it. 00:32:43.156 [2024-07-24 23:18:15.432653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.156 [2024-07-24 23:18:15.432827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.156 [2024-07-24 23:18:15.432839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.156 qpair failed and we were unable to recover it. 00:32:43.156 [2024-07-24 23:18:15.433101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.156 [2024-07-24 23:18:15.433387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.156 [2024-07-24 23:18:15.433399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.156 qpair failed and we were unable to recover it. 
00:32:43.156 [2024-07-24 23:18:15.433584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.156 [2024-07-24 23:18:15.433907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.156 [2024-07-24 23:18:15.433919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.156 qpair failed and we were unable to recover it. 00:32:43.156 [2024-07-24 23:18:15.434141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.156 [2024-07-24 23:18:15.434379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.156 [2024-07-24 23:18:15.434391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.156 qpair failed and we were unable to recover it. 00:32:43.156 [2024-07-24 23:18:15.434682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.156 [2024-07-24 23:18:15.434944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.156 [2024-07-24 23:18:15.434957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.156 qpair failed and we were unable to recover it. 00:32:43.156 [2024-07-24 23:18:15.435220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.156 [2024-07-24 23:18:15.435460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.156 [2024-07-24 23:18:15.435472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.156 qpair failed and we were unable to recover it. 
00:32:43.156 [2024-07-24 23:18:15.435637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.156 [2024-07-24 23:18:15.435894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.156 [2024-07-24 23:18:15.435906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.156 qpair failed and we were unable to recover it. 00:32:43.156 [2024-07-24 23:18:15.436125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.156 [2024-07-24 23:18:15.436363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.156 [2024-07-24 23:18:15.436375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.156 qpair failed and we were unable to recover it. 00:32:43.156 [2024-07-24 23:18:15.436609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.156 [2024-07-24 23:18:15.436870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.156 [2024-07-24 23:18:15.436883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.156 qpair failed and we were unable to recover it. 00:32:43.156 [2024-07-24 23:18:15.437126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.156 [2024-07-24 23:18:15.437451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.156 [2024-07-24 23:18:15.437463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.156 qpair failed and we were unable to recover it. 
00:32:43.156 [2024-07-24 23:18:15.437685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.156 [2024-07-24 23:18:15.437849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.156 [2024-07-24 23:18:15.437861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.156 qpair failed and we were unable to recover it. 00:32:43.156 [2024-07-24 23:18:15.438090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.156 [2024-07-24 23:18:15.438334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.156 [2024-07-24 23:18:15.438346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.156 qpair failed and we were unable to recover it. 00:32:43.156 [2024-07-24 23:18:15.438510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.156 [2024-07-24 23:18:15.438802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.156 [2024-07-24 23:18:15.438814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.156 qpair failed and we were unable to recover it. 00:32:43.156 [2024-07-24 23:18:15.439057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.156 [2024-07-24 23:18:15.439232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.156 [2024-07-24 23:18:15.439243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.156 qpair failed and we were unable to recover it. 
00:32:43.156 [2024-07-24 23:18:15.439535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.156 [2024-07-24 23:18:15.439724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.156 [2024-07-24 23:18:15.439738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.156 qpair failed and we were unable to recover it. 00:32:43.156 [2024-07-24 23:18:15.439960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.156 [2024-07-24 23:18:15.440124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.156 [2024-07-24 23:18:15.440136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.156 qpair failed and we were unable to recover it. 00:32:43.156 [2024-07-24 23:18:15.440430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.156 [2024-07-24 23:18:15.440761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.156 [2024-07-24 23:18:15.440774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.156 qpair failed and we were unable to recover it. 00:32:43.156 [2024-07-24 23:18:15.440940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.157 [2024-07-24 23:18:15.441159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.157 [2024-07-24 23:18:15.441171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.157 qpair failed and we were unable to recover it. 
00:32:43.157 [2024-07-24 23:18:15.441460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.157 [2024-07-24 23:18:15.441697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.157 [2024-07-24 23:18:15.441709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.157 qpair failed and we were unable to recover it. 00:32:43.157 [2024-07-24 23:18:15.441980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.157 [2024-07-24 23:18:15.442168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.157 [2024-07-24 23:18:15.442179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.157 qpair failed and we were unable to recover it. 00:32:43.157 [2024-07-24 23:18:15.442425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.157 [2024-07-24 23:18:15.442736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.157 [2024-07-24 23:18:15.442749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.157 qpair failed and we were unable to recover it. 00:32:43.157 [2024-07-24 23:18:15.443026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.157 [2024-07-24 23:18:15.443328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.157 [2024-07-24 23:18:15.443340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.157 qpair failed and we were unable to recover it. 
00:32:43.157 [2024-07-24 23:18:15.443627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.157 [2024-07-24 23:18:15.443914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.157 [2024-07-24 23:18:15.443926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.157 qpair failed and we were unable to recover it. 00:32:43.157 [2024-07-24 23:18:15.444228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.157 [2024-07-24 23:18:15.444388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.157 [2024-07-24 23:18:15.444401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.157 qpair failed and we were unable to recover it. 00:32:43.157 [2024-07-24 23:18:15.444621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.157 [2024-07-24 23:18:15.444964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.157 [2024-07-24 23:18:15.444976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.157 qpair failed and we were unable to recover it. 00:32:43.157 [2024-07-24 23:18:15.445273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.157 [2024-07-24 23:18:15.445489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.157 [2024-07-24 23:18:15.445501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.157 qpair failed and we were unable to recover it. 
00:32:43.157 [2024-07-24 23:18:15.445740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.157 [2024-07-24 23:18:15.445997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.157 [2024-07-24 23:18:15.446009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.157 qpair failed and we were unable to recover it. 00:32:43.157 [2024-07-24 23:18:15.446321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.157 [2024-07-24 23:18:15.446490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.157 [2024-07-24 23:18:15.446503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.157 qpair failed and we were unable to recover it. 00:32:43.157 [2024-07-24 23:18:15.446772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.157 [2024-07-24 23:18:15.447100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.157 [2024-07-24 23:18:15.447112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.157 qpair failed and we were unable to recover it. 00:32:43.157 [2024-07-24 23:18:15.447427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.157 [2024-07-24 23:18:15.447644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.157 [2024-07-24 23:18:15.447656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.157 qpair failed and we were unable to recover it. 
00:32:43.157 [2024-07-24 23:18:15.447836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.157 [2024-07-24 23:18:15.448072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.157 [2024-07-24 23:18:15.448084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.157 qpair failed and we were unable to recover it. 00:32:43.157 [2024-07-24 23:18:15.448373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.157 [2024-07-24 23:18:15.448657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.157 [2024-07-24 23:18:15.448670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.157 qpair failed and we were unable to recover it. 00:32:43.157 [2024-07-24 23:18:15.448889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.157 [2024-07-24 23:18:15.449168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.157 [2024-07-24 23:18:15.449180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.157 qpair failed and we were unable to recover it. 00:32:43.157 [2024-07-24 23:18:15.449474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.157 [2024-07-24 23:18:15.449717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.157 [2024-07-24 23:18:15.449729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.157 qpair failed and we were unable to recover it. 
00:32:43.157 [2024-07-24 23:18:15.450018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.157 [2024-07-24 23:18:15.450313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.157 [2024-07-24 23:18:15.450326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.157 qpair failed and we were unable to recover it. 00:32:43.157 [2024-07-24 23:18:15.450634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.157 [2024-07-24 23:18:15.450870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.157 [2024-07-24 23:18:15.450883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.157 qpair failed and we were unable to recover it. 00:32:43.157 [2024-07-24 23:18:15.451135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.157 [2024-07-24 23:18:15.451363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.157 [2024-07-24 23:18:15.451375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.157 qpair failed and we were unable to recover it. 00:32:43.157 [2024-07-24 23:18:15.451658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.157 [2024-07-24 23:18:15.451891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.157 [2024-07-24 23:18:15.451903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.157 qpair failed and we were unable to recover it. 
00:32:43.157 [2024-07-24 23:18:15.452137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.157 [2024-07-24 23:18:15.452391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.157 [2024-07-24 23:18:15.452403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:43.157 qpair failed and we were unable to recover it.
[... the same retry cycle -- two posix_sock_create connect() failures (errno = 111), one nvme_tcp_qpair_connect_sock error (tqpair=0x7ff8e8000b90, addr=10.0.0.2, port=4420), then "qpair failed and we were unable to recover it." -- repeats from 23:18:15.452641 through 23:18:15.484423; repeats omitted ...]
00:32:43.160 23:18:15 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:32:43.160 23:18:15 -- common/autotest_common.sh@852 -- # return 0
00:32:43.160 23:18:15 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:32:43.160 23:18:15 -- common/autotest_common.sh@718 -- # xtrace_disable
00:32:43.160 23:18:15 -- common/autotest_common.sh@10 -- # set +x
[... further identical connect() retry failures (errno = 111) against 10.0.0.2 port 4420, interleaved with the shell trace above, continue through 23:18:15.498270; repeats omitted ...]
00:32:43.160 [2024-07-24 23:18:15.498572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.160 [2024-07-24 23:18:15.498797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.160 [2024-07-24 23:18:15.498809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.160 qpair failed and we were unable to recover it. 00:32:43.161 [2024-07-24 23:18:15.499018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.161 [2024-07-24 23:18:15.499318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.161 [2024-07-24 23:18:15.499332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.161 qpair failed and we were unable to recover it. 00:32:43.161 [2024-07-24 23:18:15.499504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.161 [2024-07-24 23:18:15.499723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.161 [2024-07-24 23:18:15.499737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.161 qpair failed and we were unable to recover it. 00:32:43.161 [2024-07-24 23:18:15.499968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.161 [2024-07-24 23:18:15.500188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.161 [2024-07-24 23:18:15.500201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.161 qpair failed and we were unable to recover it. 
00:32:43.161 [2024-07-24 23:18:15.500376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.161 [2024-07-24 23:18:15.500683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.161 [2024-07-24 23:18:15.500696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.161 qpair failed and we were unable to recover it. 00:32:43.161 [2024-07-24 23:18:15.500899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.161 [2024-07-24 23:18:15.501202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.161 [2024-07-24 23:18:15.501214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.161 qpair failed and we were unable to recover it. 00:32:43.161 [2024-07-24 23:18:15.501545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.161 [2024-07-24 23:18:15.501850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.161 [2024-07-24 23:18:15.501862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.161 qpair failed and we were unable to recover it. 00:32:43.161 [2024-07-24 23:18:15.502048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.161 [2024-07-24 23:18:15.502285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.161 [2024-07-24 23:18:15.502298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.161 qpair failed and we were unable to recover it. 
00:32:43.161 [2024-07-24 23:18:15.502519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.161 [2024-07-24 23:18:15.502826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.161 [2024-07-24 23:18:15.502838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.161 qpair failed and we were unable to recover it. 00:32:43.161 [2024-07-24 23:18:15.503152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.161 [2024-07-24 23:18:15.503422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.161 [2024-07-24 23:18:15.503434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.161 qpair failed and we were unable to recover it. 00:32:43.161 [2024-07-24 23:18:15.503758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.161 [2024-07-24 23:18:15.504046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.161 [2024-07-24 23:18:15.504058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.161 qpair failed and we were unable to recover it. 00:32:43.161 [2024-07-24 23:18:15.504365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.161 [2024-07-24 23:18:15.504597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.161 [2024-07-24 23:18:15.504608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.161 qpair failed and we were unable to recover it. 
00:32:43.161 [2024-07-24 23:18:15.504766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.161 [2024-07-24 23:18:15.505008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.161 [2024-07-24 23:18:15.505021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.161 qpair failed and we were unable to recover it. 00:32:43.161 [2024-07-24 23:18:15.505284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.161 [2024-07-24 23:18:15.505501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.161 [2024-07-24 23:18:15.505513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.161 qpair failed and we were unable to recover it. 00:32:43.161 [2024-07-24 23:18:15.505789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.161 [2024-07-24 23:18:15.506047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.161 [2024-07-24 23:18:15.506059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.161 qpair failed and we were unable to recover it. 00:32:43.161 [2024-07-24 23:18:15.506251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.161 [2024-07-24 23:18:15.506519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.161 [2024-07-24 23:18:15.506531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.161 qpair failed and we were unable to recover it. 
00:32:43.161 [2024-07-24 23:18:15.506846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.161 [2024-07-24 23:18:15.507185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.161 [2024-07-24 23:18:15.507198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.161 qpair failed and we were unable to recover it. 00:32:43.161 [2024-07-24 23:18:15.507498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.161 [2024-07-24 23:18:15.507695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.161 [2024-07-24 23:18:15.507707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.161 qpair failed and we were unable to recover it. 00:32:43.161 [2024-07-24 23:18:15.507948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.161 [2024-07-24 23:18:15.508170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.161 [2024-07-24 23:18:15.508183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.161 qpair failed and we were unable to recover it. 00:32:43.161 [2024-07-24 23:18:15.508359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.161 [2024-07-24 23:18:15.508654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.161 [2024-07-24 23:18:15.508666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.161 qpair failed and we were unable to recover it. 
00:32:43.161 [2024-07-24 23:18:15.508909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.161 [2024-07-24 23:18:15.509226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.161 [2024-07-24 23:18:15.509238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.161 qpair failed and we were unable to recover it. 00:32:43.161 [2024-07-24 23:18:15.509459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.161 [2024-07-24 23:18:15.509625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.161 [2024-07-24 23:18:15.509637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.161 qpair failed and we were unable to recover it. 00:32:43.161 [2024-07-24 23:18:15.509828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.161 [2024-07-24 23:18:15.510010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.161 [2024-07-24 23:18:15.510022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.161 qpair failed and we were unable to recover it. 00:32:43.161 [2024-07-24 23:18:15.510287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.161 [2024-07-24 23:18:15.510526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.161 [2024-07-24 23:18:15.510538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.161 qpair failed and we were unable to recover it. 
00:32:43.161 [2024-07-24 23:18:15.510703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.161 [2024-07-24 23:18:15.510899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.161 [2024-07-24 23:18:15.510911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.161 qpair failed and we were unable to recover it. 00:32:43.161 [2024-07-24 23:18:15.511032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.161 [2024-07-24 23:18:15.511309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.161 [2024-07-24 23:18:15.511321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.161 qpair failed and we were unable to recover it. 00:32:43.161 [2024-07-24 23:18:15.511541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.161 [2024-07-24 23:18:15.511710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.161 [2024-07-24 23:18:15.511725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.161 qpair failed and we were unable to recover it. 00:32:43.161 [2024-07-24 23:18:15.512040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.161 [2024-07-24 23:18:15.512259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.161 [2024-07-24 23:18:15.512271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.161 qpair failed and we were unable to recover it. 
00:32:43.161 [2024-07-24 23:18:15.512511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.161 [2024-07-24 23:18:15.512756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.161 [2024-07-24 23:18:15.512769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.161 qpair failed and we were unable to recover it. 00:32:43.161 [2024-07-24 23:18:15.513006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.162 [2024-07-24 23:18:15.513180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.162 [2024-07-24 23:18:15.513192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.162 qpair failed and we were unable to recover it. 00:32:43.162 [2024-07-24 23:18:15.513355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.162 [2024-07-24 23:18:15.513608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.162 [2024-07-24 23:18:15.513620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.162 qpair failed and we were unable to recover it. 00:32:43.162 [2024-07-24 23:18:15.513853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.162 [2024-07-24 23:18:15.514075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.162 [2024-07-24 23:18:15.514087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.162 qpair failed and we were unable to recover it. 
00:32:43.162 [2024-07-24 23:18:15.514323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.162 [2024-07-24 23:18:15.514575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.162 [2024-07-24 23:18:15.514587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.162 qpair failed and we were unable to recover it. 00:32:43.162 [2024-07-24 23:18:15.514768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.162 [2024-07-24 23:18:15.515000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.162 [2024-07-24 23:18:15.515012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.162 qpair failed and we were unable to recover it. 00:32:43.162 [2024-07-24 23:18:15.515178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.162 [2024-07-24 23:18:15.515417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.162 [2024-07-24 23:18:15.515429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.162 qpair failed and we were unable to recover it. 00:32:43.162 [2024-07-24 23:18:15.515582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.162 [2024-07-24 23:18:15.515799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.162 [2024-07-24 23:18:15.515811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.162 qpair failed and we were unable to recover it. 
00:32:43.162 [2024-07-24 23:18:15.516046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.162 [2024-07-24 23:18:15.516300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.162 [2024-07-24 23:18:15.516312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.162 qpair failed and we were unable to recover it. 00:32:43.162 [2024-07-24 23:18:15.516631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.162 [2024-07-24 23:18:15.516899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.162 [2024-07-24 23:18:15.516911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.162 qpair failed and we were unable to recover it. 00:32:43.162 [2024-07-24 23:18:15.517200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.162 [2024-07-24 23:18:15.517509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.162 [2024-07-24 23:18:15.517521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.162 qpair failed and we were unable to recover it. 00:32:43.162 [2024-07-24 23:18:15.517768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.162 [2024-07-24 23:18:15.518097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.162 [2024-07-24 23:18:15.518109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.162 qpair failed and we were unable to recover it. 
00:32:43.162 [2024-07-24 23:18:15.518367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.162 [2024-07-24 23:18:15.518644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.162 [2024-07-24 23:18:15.518656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.162 qpair failed and we were unable to recover it. 00:32:43.162 [2024-07-24 23:18:15.518906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.162 [2024-07-24 23:18:15.519221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.162 [2024-07-24 23:18:15.519233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.162 qpair failed and we were unable to recover it. 00:32:43.162 [2024-07-24 23:18:15.519454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.162 [2024-07-24 23:18:15.519669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.162 [2024-07-24 23:18:15.519682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.162 qpair failed and we were unable to recover it. 00:32:43.162 [2024-07-24 23:18:15.519921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.162 [2024-07-24 23:18:15.520138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.162 [2024-07-24 23:18:15.520150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.162 qpair failed and we were unable to recover it. 
00:32:43.162 [2024-07-24 23:18:15.520295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.162 [2024-07-24 23:18:15.520534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.162 [2024-07-24 23:18:15.520546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.162 qpair failed and we were unable to recover it. 00:32:43.162 [2024-07-24 23:18:15.520863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.162 [2024-07-24 23:18:15.521094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.162 [2024-07-24 23:18:15.521107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.162 qpair failed and we were unable to recover it. 00:32:43.162 [2024-07-24 23:18:15.521354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.162 [2024-07-24 23:18:15.521534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.162 [2024-07-24 23:18:15.521546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.162 qpair failed and we were unable to recover it. 00:32:43.162 [2024-07-24 23:18:15.521791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.162 [2024-07-24 23:18:15.522078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.162 [2024-07-24 23:18:15.522090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.162 qpair failed and we were unable to recover it. 
00:32:43.162 [2024-07-24 23:18:15.522382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.162 [2024-07-24 23:18:15.522690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.162 [2024-07-24 23:18:15.522703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.162 qpair failed and we were unable to recover it. 00:32:43.162 [2024-07-24 23:18:15.522983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.162 [2024-07-24 23:18:15.523293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.162 [2024-07-24 23:18:15.523305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.162 qpair failed and we were unable to recover it. 00:32:43.162 [2024-07-24 23:18:15.523574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.162 [2024-07-24 23:18:15.523868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.162 [2024-07-24 23:18:15.523884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.162 qpair failed and we were unable to recover it. 00:32:43.162 [2024-07-24 23:18:15.524125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.162 [2024-07-24 23:18:15.524461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.162 [2024-07-24 23:18:15.524473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.162 qpair failed and we were unable to recover it. 
00:32:43.162 [2024-07-24 23:18:15.524833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.162 [2024-07-24 23:18:15.525067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.162 [2024-07-24 23:18:15.525078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:43.162 qpair failed and we were unable to recover it.
00:32:43.162 [2024-07-24 23:18:15.525396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.162 23:18:15 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:32:43.162 [2024-07-24 23:18:15.525600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.162 [2024-07-24 23:18:15.525614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:43.162 qpair failed and we were unable to recover it.
00:32:43.162 [2024-07-24 23:18:15.525867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.162 23:18:15 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:32:43.162 [2024-07-24 23:18:15.526092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.162 [2024-07-24 23:18:15.526105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:43.162 qpair failed and we were unable to recover it.
00:32:43.162 23:18:15 -- common/autotest_common.sh@551 -- # xtrace_disable
00:32:43.162 [2024-07-24 23:18:15.526345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.162 23:18:15 -- common/autotest_common.sh@10 -- # set +x
00:32:43.162 [2024-07-24 23:18:15.526711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.162 [2024-07-24 23:18:15.526727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:43.162 qpair failed and we were unable to recover it.
00:32:43.162 [2024-07-24 23:18:15.526963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.162 [2024-07-24 23:18:15.527193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.163 [2024-07-24 23:18:15.527204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:43.163 qpair failed and we were unable to recover it.
00:32:43.163 [2024-07-24 23:18:15.527515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.163 [2024-07-24 23:18:15.527735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.163 [2024-07-24 23:18:15.527747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:43.163 qpair failed and we were unable to recover it.
00:32:43.163 [2024-07-24 23:18:15.528039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.163 [2024-07-24 23:18:15.528219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.163 [2024-07-24 23:18:15.528230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:43.163 qpair failed and we were unable to recover it.
00:32:43.163 [2024-07-24 23:18:15.528468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.163 [2024-07-24 23:18:15.528777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.163 [2024-07-24 23:18:15.528790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.163 qpair failed and we were unable to recover it. 00:32:43.163 [2024-07-24 23:18:15.529035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.163 [2024-07-24 23:18:15.529207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.163 [2024-07-24 23:18:15.529219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.163 qpair failed and we were unable to recover it. 00:32:43.163 [2024-07-24 23:18:15.529461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.163 [2024-07-24 23:18:15.529790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.163 [2024-07-24 23:18:15.529802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.163 qpair failed and we were unable to recover it. 00:32:43.163 [2024-07-24 23:18:15.530046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.163 [2024-07-24 23:18:15.530315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.163 [2024-07-24 23:18:15.530327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.163 qpair failed and we were unable to recover it. 
00:32:43.163 [2024-07-24 23:18:15.530656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.163 [2024-07-24 23:18:15.530890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.163 [2024-07-24 23:18:15.530902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.163 qpair failed and we were unable to recover it. 00:32:43.163 [2024-07-24 23:18:15.531139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.163 [2024-07-24 23:18:15.531337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.163 [2024-07-24 23:18:15.531349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.163 qpair failed and we were unable to recover it. 00:32:43.163 [2024-07-24 23:18:15.531590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.163 [2024-07-24 23:18:15.531843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.163 [2024-07-24 23:18:15.531855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.163 qpair failed and we were unable to recover it. 00:32:43.163 [2024-07-24 23:18:15.532173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.163 [2024-07-24 23:18:15.532445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.163 [2024-07-24 23:18:15.532456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.163 qpair failed and we were unable to recover it. 
00:32:43.163 [2024-07-24 23:18:15.532710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.163 [2024-07-24 23:18:15.532926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.163 [2024-07-24 23:18:15.532938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.163 qpair failed and we were unable to recover it. 00:32:43.163 [2024-07-24 23:18:15.533203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.163 [2024-07-24 23:18:15.533533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.163 [2024-07-24 23:18:15.533545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.163 qpair failed and we were unable to recover it. 00:32:43.163 [2024-07-24 23:18:15.533862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.163 [2024-07-24 23:18:15.534119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.163 [2024-07-24 23:18:15.534131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.163 qpair failed and we were unable to recover it. 00:32:43.163 [2024-07-24 23:18:15.534402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.163 [2024-07-24 23:18:15.534693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.163 [2024-07-24 23:18:15.534704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.163 qpair failed and we were unable to recover it. 
00:32:43.163 [2024-07-24 23:18:15.534895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.163 [2024-07-24 23:18:15.535120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.163 [2024-07-24 23:18:15.535132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.163 qpair failed and we were unable to recover it. 00:32:43.163 [2024-07-24 23:18:15.535422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.163 [2024-07-24 23:18:15.535724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.163 [2024-07-24 23:18:15.535736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.163 qpair failed and we were unable to recover it. 00:32:43.163 [2024-07-24 23:18:15.535996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.163 [2024-07-24 23:18:15.536304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.163 [2024-07-24 23:18:15.536317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.163 qpair failed and we were unable to recover it. 00:32:43.163 [2024-07-24 23:18:15.536634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.163 [2024-07-24 23:18:15.536944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.163 [2024-07-24 23:18:15.536956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.163 qpair failed and we were unable to recover it. 
00:32:43.163 [2024-07-24 23:18:15.537223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.163 [2024-07-24 23:18:15.537443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.163 [2024-07-24 23:18:15.537455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.163 qpair failed and we were unable to recover it. 00:32:43.163 [2024-07-24 23:18:15.537775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.163 [2024-07-24 23:18:15.538093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.163 [2024-07-24 23:18:15.538106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.163 qpair failed and we were unable to recover it. 00:32:43.163 [2024-07-24 23:18:15.538328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.163 [2024-07-24 23:18:15.538615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.163 [2024-07-24 23:18:15.538627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.163 qpair failed and we were unable to recover it. 00:32:43.163 [2024-07-24 23:18:15.538922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.163 [2024-07-24 23:18:15.539139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.163 [2024-07-24 23:18:15.539151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.163 qpair failed and we were unable to recover it. 
00:32:43.163 [2024-07-24 23:18:15.539322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.163 [2024-07-24 23:18:15.539630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.163 [2024-07-24 23:18:15.539642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.163 qpair failed and we were unable to recover it. 00:32:43.163 [2024-07-24 23:18:15.539941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.163 [2024-07-24 23:18:15.540129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.163 [2024-07-24 23:18:15.540141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.163 qpair failed and we were unable to recover it. 00:32:43.163 [2024-07-24 23:18:15.540433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.163 [2024-07-24 23:18:15.540741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.163 [2024-07-24 23:18:15.540754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.163 qpair failed and we were unable to recover it. 00:32:43.163 [2024-07-24 23:18:15.540997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.163 [2024-07-24 23:18:15.541311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.163 [2024-07-24 23:18:15.541325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.163 qpair failed and we were unable to recover it. 
00:32:43.163 [2024-07-24 23:18:15.541642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.163 [2024-07-24 23:18:15.541831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.163 [2024-07-24 23:18:15.541843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.163 qpair failed and we were unable to recover it. 00:32:43.163 [2024-07-24 23:18:15.542111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.163 [2024-07-24 23:18:15.542379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.163 [2024-07-24 23:18:15.542392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.163 qpair failed and we were unable to recover it. 00:32:43.164 [2024-07-24 23:18:15.542708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.164 [2024-07-24 23:18:15.542922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.164 [2024-07-24 23:18:15.542935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.164 qpair failed and we were unable to recover it. 00:32:43.164 [2024-07-24 23:18:15.543178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.164 [2024-07-24 23:18:15.543431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.164 [2024-07-24 23:18:15.543443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.164 qpair failed and we were unable to recover it. 
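The repeated `connect() failed, errno = 111` records above are ECONNREFUSED: the initiator reaches the target host, but nothing is listening on the NVMe/TCP port yet, which is expected while this disconnect test tears the target down. A minimal sketch of the same failure mode, assuming port 4420 (the NVMe/TCP default used in the log) is free on the local machine:

```shell
# Probe a TCP port the way the initiator's connect() does. With no
# listener, the connect is refused -- the condition the log records as
# "connect() failed, errno = 111" (ECONNREFUSED on Linux).
port=4420
if timeout 1 bash -c "exec 3<>/dev/tcp/127.0.0.1/${port}" 2>/dev/null; then
  status="connected"
  exec 3>&- 3<&-   # close the probe socket if it unexpectedly opened
else
  status="refused"
fi
echo "connect() to 127.0.0.1:${port}: ${status}"
```

The `/dev/tcp` redirection is a bash builtin feature, so no extra tooling is needed; note that the numeric value of ECONNREFUSED is 111 on Linux but differs on other platforms.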
00:32:43.164 [2024-07-24 23:18:15.543753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.164 [2024-07-24 23:18:15.544017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.164 [2024-07-24 23:18:15.544029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:43.164 qpair failed and we were unable to recover it.
00:32:43.164 [2024-07-24 23:18:15.544273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.164 [2024-07-24 23:18:15.544615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.164 [2024-07-24 23:18:15.544627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:43.164 qpair failed and we were unable to recover it.
00:32:43.164 [2024-07-24 23:18:15.544910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.164 [2024-07-24 23:18:15.545151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.164 [2024-07-24 23:18:15.545162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:43.164 qpair failed and we were unable to recover it.
00:32:43.164 [2024-07-24 23:18:15.545455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.164 Malloc0
00:32:43.164 [2024-07-24 23:18:15.545770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.164 [2024-07-24 23:18:15.545782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:43.164 qpair failed and we were unable to recover it.
00:32:43.164 [2024-07-24 23:18:15.546036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.164 [2024-07-24 23:18:15.546289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.164 [2024-07-24 23:18:15.546300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:43.164 qpair failed and we were unable to recover it.
00:32:43.164 23:18:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:32:43.164 [2024-07-24 23:18:15.546557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.164 23:18:15 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:32:43.164 [2024-07-24 23:18:15.546842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.164 [2024-07-24 23:18:15.546855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:43.164 qpair failed and we were unable to recover it.
00:32:43.164 23:18:15 -- common/autotest_common.sh@551 -- # xtrace_disable
00:32:43.164 [2024-07-24 23:18:15.547106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.164 23:18:15 -- common/autotest_common.sh@10 -- # set +x
00:32:43.164 [2024-07-24 23:18:15.547371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.164 [2024-07-24 23:18:15.547383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:43.164 qpair failed and we were unable to recover it.
00:32:43.164 [2024-07-24 23:18:15.547670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.164 [2024-07-24 23:18:15.547966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.164 [2024-07-24 23:18:15.547978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.164 qpair failed and we were unable to recover it. 00:32:43.164 [2024-07-24 23:18:15.548295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.164 [2024-07-24 23:18:15.548541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.164 [2024-07-24 23:18:15.548553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.164 qpair failed and we were unable to recover it. 00:32:43.164 [2024-07-24 23:18:15.548775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.164 [2024-07-24 23:18:15.549063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.164 [2024-07-24 23:18:15.549075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.164 qpair failed and we were unable to recover it. 00:32:43.164 [2024-07-24 23:18:15.549379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.164 [2024-07-24 23:18:15.549616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.164 [2024-07-24 23:18:15.549628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.164 qpair failed and we were unable to recover it. 
00:32:43.164 [2024-07-24 23:18:15.549935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.164 [2024-07-24 23:18:15.550272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.164 [2024-07-24 23:18:15.550284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.164 qpair failed and we were unable to recover it. 00:32:43.164 [2024-07-24 23:18:15.550599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.164 [2024-07-24 23:18:15.550859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.164 [2024-07-24 23:18:15.550871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.164 qpair failed and we were unable to recover it. 00:32:43.164 [2024-07-24 23:18:15.551112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.164 [2024-07-24 23:18:15.551414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.164 [2024-07-24 23:18:15.551426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.164 qpair failed and we were unable to recover it. 00:32:43.164 [2024-07-24 23:18:15.551688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.164 [2024-07-24 23:18:15.551913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.164 [2024-07-24 23:18:15.551925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.164 qpair failed and we were unable to recover it. 
00:32:43.164 [2024-07-24 23:18:15.552241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.164 [2024-07-24 23:18:15.552489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.164 [2024-07-24 23:18:15.552500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:43.164 qpair failed and we were unable to recover it.
00:32:43.164 [2024-07-24 23:18:15.552760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.164 [2024-07-24 23:18:15.553047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.164 [2024-07-24 23:18:15.553058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:43.164 qpair failed and we were unable to recover it.
00:32:43.164 [2024-07-24 23:18:15.553106] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:32:43.164 [2024-07-24 23:18:15.553313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.164 [2024-07-24 23:18:15.553607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.164 [2024-07-24 23:18:15.553619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:43.164 qpair failed and we were unable to recover it.
00:32:43.164 [2024-07-24 23:18:15.553921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.164 [2024-07-24 23:18:15.554247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.164 [2024-07-24 23:18:15.554259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:43.164 qpair failed and we were unable to recover it.
00:32:43.164 [2024-07-24 23:18:15.554569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.164 [2024-07-24 23:18:15.554875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.164 [2024-07-24 23:18:15.554887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.164 qpair failed and we were unable to recover it. 00:32:43.164 [2024-07-24 23:18:15.555204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.164 [2024-07-24 23:18:15.555463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.164 [2024-07-24 23:18:15.555474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.164 qpair failed and we were unable to recover it. 00:32:43.164 [2024-07-24 23:18:15.555710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.164 [2024-07-24 23:18:15.556007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.165 [2024-07-24 23:18:15.556018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.165 qpair failed and we were unable to recover it. 00:32:43.165 [2024-07-24 23:18:15.556306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.165 [2024-07-24 23:18:15.556617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.165 [2024-07-24 23:18:15.556629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.165 qpair failed and we were unable to recover it. 
00:32:43.165 [2024-07-24 23:18:15.556958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.165 [2024-07-24 23:18:15.557145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.165 [2024-07-24 23:18:15.557156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.165 qpair failed and we were unable to recover it. 00:32:43.165 [2024-07-24 23:18:15.557324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.165 [2024-07-24 23:18:15.557631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.165 [2024-07-24 23:18:15.557643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.165 qpair failed and we were unable to recover it. 00:32:43.165 [2024-07-24 23:18:15.557888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.165 [2024-07-24 23:18:15.558120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.165 [2024-07-24 23:18:15.558132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.165 qpair failed and we were unable to recover it. 00:32:43.165 [2024-07-24 23:18:15.558456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.165 [2024-07-24 23:18:15.558740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.165 [2024-07-24 23:18:15.558752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.165 qpair failed and we were unable to recover it. 
00:32:43.165 [2024-07-24 23:18:15.559040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.165 [2024-07-24 23:18:15.559348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.165 [2024-07-24 23:18:15.559360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.165 qpair failed and we were unable to recover it. 00:32:43.165 [2024-07-24 23:18:15.559616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.165 [2024-07-24 23:18:15.559926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.165 [2024-07-24 23:18:15.559937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.165 qpair failed and we were unable to recover it. 00:32:43.165 [2024-07-24 23:18:15.560169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.165 [2024-07-24 23:18:15.560461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.165 [2024-07-24 23:18:15.560472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.165 qpair failed and we were unable to recover it. 00:32:43.165 [2024-07-24 23:18:15.560766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.165 [2024-07-24 23:18:15.561053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.165 [2024-07-24 23:18:15.561065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.165 qpair failed and we were unable to recover it. 
00:32:43.165 [2024-07-24 23:18:15.561355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.165 [2024-07-24 23:18:15.561606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.165 [2024-07-24 23:18:15.561617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:43.165 qpair failed and we were unable to recover it.
00:32:43.165 23:18:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:32:43.165 [2024-07-24 23:18:15.561928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.165 [2024-07-24 23:18:15.562242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.165 [2024-07-24 23:18:15.562253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:43.165 qpair failed and we were unable to recover it.
00:32:43.165 23:18:15 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:32:43.165 [2024-07-24 23:18:15.562575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.165 23:18:15 -- common/autotest_common.sh@551 -- # xtrace_disable
00:32:43.165 [2024-07-24 23:18:15.562817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.165 [2024-07-24 23:18:15.562829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:43.165 qpair failed and we were unable to recover it.
00:32:43.165 23:18:15 -- common/autotest_common.sh@10 -- # set +x
00:32:43.165 [2024-07-24 23:18:15.563083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.165 [2024-07-24 23:18:15.563378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.165 [2024-07-24 23:18:15.563390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:43.165 qpair failed and we were unable to recover it.
00:32:43.165 [2024-07-24 23:18:15.563634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.165 [2024-07-24 23:18:15.563932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.165 [2024-07-24 23:18:15.563944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:43.165 qpair failed and we were unable to recover it.
00:32:43.165 [2024-07-24 23:18:15.564254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.165 [2024-07-24 23:18:15.564573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.165 [2024-07-24 23:18:15.564585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:43.165 qpair failed and we were unable to recover it.
00:32:43.425 [2024-07-24 23:18:15.564827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.425 [2024-07-24 23:18:15.565089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.426 [2024-07-24 23:18:15.565101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420
00:32:43.426 qpair failed and we were unable to recover it.
00:32:43.426 [2024-07-24 23:18:15.565395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.426 [2024-07-24 23:18:15.565718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.426 [2024-07-24 23:18:15.565729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.426 qpair failed and we were unable to recover it. 00:32:43.426 [2024-07-24 23:18:15.566068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.426 [2024-07-24 23:18:15.566333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.426 [2024-07-24 23:18:15.566345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.426 qpair failed and we were unable to recover it. 00:32:43.426 [2024-07-24 23:18:15.566536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.426 [2024-07-24 23:18:15.566821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.426 [2024-07-24 23:18:15.566833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.426 qpair failed and we were unable to recover it. 00:32:43.426 [2024-07-24 23:18:15.567182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.426 [2024-07-24 23:18:15.567495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.426 [2024-07-24 23:18:15.567508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.426 qpair failed and we were unable to recover it. 
00:32:43.426 [2024-07-24 23:18:15.567756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.426 [2024-07-24 23:18:15.567999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.426 [2024-07-24 23:18:15.568011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.426 qpair failed and we were unable to recover it. 00:32:43.426 [2024-07-24 23:18:15.568252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.426 [2024-07-24 23:18:15.568577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.426 [2024-07-24 23:18:15.568588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.426 qpair failed and we were unable to recover it. 00:32:43.426 [2024-07-24 23:18:15.568825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.426 [2024-07-24 23:18:15.569065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.426 [2024-07-24 23:18:15.569077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.426 qpair failed and we were unable to recover it. 00:32:43.426 [2024-07-24 23:18:15.569317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.426 [2024-07-24 23:18:15.569631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.426 [2024-07-24 23:18:15.569643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.426 qpair failed and we were unable to recover it. 
00:32:43.426 [2024-07-24 23:18:15.569880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.426 23:18:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:43.426 [2024-07-24 23:18:15.570213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.426 [2024-07-24 23:18:15.570225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.426 qpair failed and we were unable to recover it. 00:32:43.426 23:18:15 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:43.426 [2024-07-24 23:18:15.570543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.426 23:18:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:43.426 23:18:15 -- common/autotest_common.sh@10 -- # set +x 00:32:43.426 [2024-07-24 23:18:15.570870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.426 [2024-07-24 23:18:15.570883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.426 qpair failed and we were unable to recover it. 00:32:43.426 [2024-07-24 23:18:15.571194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.426 [2024-07-24 23:18:15.571439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.426 [2024-07-24 23:18:15.571451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.426 qpair failed and we were unable to recover it. 
00:32:43.426 [2024-07-24 23:18:15.571692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.426 [2024-07-24 23:18:15.571940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.426 [2024-07-24 23:18:15.571952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.426 qpair failed and we were unable to recover it. 00:32:43.426 [2024-07-24 23:18:15.572192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.426 [2024-07-24 23:18:15.572412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.426 [2024-07-24 23:18:15.572423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.426 qpair failed and we were unable to recover it. 00:32:43.426 [2024-07-24 23:18:15.572718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.426 [2024-07-24 23:18:15.572955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.426 [2024-07-24 23:18:15.572967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.426 qpair failed and we were unable to recover it. 00:32:43.426 [2024-07-24 23:18:15.573145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.426 [2024-07-24 23:18:15.573451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.426 [2024-07-24 23:18:15.573463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.426 qpair failed and we were unable to recover it. 
00:32:43.426 [2024-07-24 23:18:15.573780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.426 [2024-07-24 23:18:15.574087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.426 [2024-07-24 23:18:15.574099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.426 qpair failed and we were unable to recover it. 00:32:43.426 [2024-07-24 23:18:15.574376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.426 [2024-07-24 23:18:15.574602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.426 [2024-07-24 23:18:15.574614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.426 qpair failed and we were unable to recover it. 00:32:43.426 [2024-07-24 23:18:15.574844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.426 [2024-07-24 23:18:15.575154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.426 [2024-07-24 23:18:15.575165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.426 qpair failed and we were unable to recover it. 00:32:43.426 [2024-07-24 23:18:15.575405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.426 [2024-07-24 23:18:15.575670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.426 [2024-07-24 23:18:15.575682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.426 qpair failed and we were unable to recover it. 
00:32:43.426 [2024-07-24 23:18:15.575912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.426 [2024-07-24 23:18:15.576243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.426 [2024-07-24 23:18:15.576254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.426 qpair failed and we were unable to recover it. 00:32:43.426 [2024-07-24 23:18:15.576578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.426 [2024-07-24 23:18:15.576873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.426 [2024-07-24 23:18:15.576884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.426 qpair failed and we were unable to recover it. 00:32:43.426 [2024-07-24 23:18:15.577200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.426 [2024-07-24 23:18:15.577525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.426 [2024-07-24 23:18:15.577537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.426 qpair failed and we were unable to recover it. 00:32:43.426 [2024-07-24 23:18:15.577690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.426 23:18:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:43.426 [2024-07-24 23:18:15.578023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.426 [2024-07-24 23:18:15.578036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.426 qpair failed and we were unable to recover it. 
00:32:43.426 23:18:15 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:43.426 [2024-07-24 23:18:15.578327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.426 23:18:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:43.426 [2024-07-24 23:18:15.578637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.426 [2024-07-24 23:18:15.578649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.426 qpair failed and we were unable to recover it. 00:32:43.426 23:18:15 -- common/autotest_common.sh@10 -- # set +x 00:32:43.426 [2024-07-24 23:18:15.578956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.427 [2024-07-24 23:18:15.579254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.427 [2024-07-24 23:18:15.579265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.427 qpair failed and we were unable to recover it. 00:32:43.427 [2024-07-24 23:18:15.579520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.427 [2024-07-24 23:18:15.579826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.427 [2024-07-24 23:18:15.579838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.427 qpair failed and we were unable to recover it. 
00:32:43.427 [2024-07-24 23:18:15.580080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.427 [2024-07-24 23:18:15.580391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.427 [2024-07-24 23:18:15.580403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.427 qpair failed and we were unable to recover it. 00:32:43.427 [2024-07-24 23:18:15.580690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.427 [2024-07-24 23:18:15.581010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.427 [2024-07-24 23:18:15.581022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8e8000b90 with addr=10.0.0.2, port=4420 00:32:43.427 qpair failed and we were unable to recover it. 00:32:43.427 [2024-07-24 23:18:15.581312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.427 [2024-07-24 23:18:15.581362] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:43.427 [2024-07-24 23:18:15.583753] posix.c: 670:posix_sock_psk_use_session_client_cb: *ERROR*: PSK is not set 00:32:43.427 [2024-07-24 23:18:15.583795] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ff8e8000b90 (107): Transport endpoint is not connected 00:32:43.427 [2024-07-24 23:18:15.583843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:43.427 qpair failed and we were unable to recover it. 
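The flush failure above reports "(107): Transport endpoint is not connected", which is Linux errno 107 (ENOTCONN): the qpair's socket never completed its connection, so any operation that requires an established peer fails the same way. A minimal sketch (assuming a Linux host; the socket and helper names here are illustrative, not part of SPDK):

```python
import errno
import socket

# Querying the peer address of a socket that never connected fails with
# ENOTCONN -- the same errno 107 reported in the flush failure above.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    s.getpeername()
    raise AssertionError("unexpectedly had a peer")
except OSError as e:
    assert e.errno == errno.ENOTCONN  # errno 107 on Linux
finally:
    s.close()
```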
00:32:43.427 23:18:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:43.427 23:18:15 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:43.427 23:18:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:43.427 23:18:15 -- common/autotest_common.sh@10 -- # set +x 00:32:43.427 [2024-07-24 23:18:15.593665] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:43.427 [2024-07-24 23:18:15.593762] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:43.427 [2024-07-24 23:18:15.593784] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:43.427 [2024-07-24 23:18:15.593795] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:43.427 [2024-07-24 23:18:15.593804] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90 00:32:43.427 [2024-07-24 23:18:15.593827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:43.427 qpair failed and we were unable to recover it. 
00:32:43.427 23:18:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:43.427 23:18:15 -- host/target_disconnect.sh@58 -- # wait 3413441 00:32:43.427 [2024-07-24 23:18:15.603670] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:43.427 [2024-07-24 23:18:15.603849] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:43.427 [2024-07-24 23:18:15.603869] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:43.427 [2024-07-24 23:18:15.603879] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:43.427 [2024-07-24 23:18:15.603887] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90 00:32:43.427 [2024-07-24 23:18:15.603907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:43.427 qpair failed and we were unable to recover it. 
00:32:43.427 [2024-07-24 23:18:15.613663] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:43.427 [2024-07-24 23:18:15.613762] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:43.427 [2024-07-24 23:18:15.613781] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:43.427 [2024-07-24 23:18:15.613791] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:43.427 [2024-07-24 23:18:15.613799] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90 00:32:43.427 [2024-07-24 23:18:15.613827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:43.427 qpair failed and we were unable to recover it. 
00:32:43.427 [2024-07-24 23:18:15.623651] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:43.427 [2024-07-24 23:18:15.623744] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:43.427 [2024-07-24 23:18:15.623762] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:43.427 [2024-07-24 23:18:15.623772] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:43.427 [2024-07-24 23:18:15.623781] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90 00:32:43.427 [2024-07-24 23:18:15.623800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:43.427 qpair failed and we were unable to recover it. 
00:32:43.427 [2024-07-24 23:18:15.633677] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:43.427 [2024-07-24 23:18:15.633837] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:43.427 [2024-07-24 23:18:15.633856] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:43.427 [2024-07-24 23:18:15.633866] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:43.427 [2024-07-24 23:18:15.633875] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90 00:32:43.427 [2024-07-24 23:18:15.633893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:43.427 qpair failed and we were unable to recover it. 
00:32:43.427 [2024-07-24 23:18:15.643697] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:43.427 [2024-07-24 23:18:15.643787] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:43.427 [2024-07-24 23:18:15.643806] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:43.427 [2024-07-24 23:18:15.643816] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:43.427 [2024-07-24 23:18:15.643824] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90 00:32:43.427 [2024-07-24 23:18:15.643842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:43.427 qpair failed and we were unable to recover it. 
00:32:43.427 [2024-07-24 23:18:15.653699] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:43.427 [2024-07-24 23:18:15.653865] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:43.427 [2024-07-24 23:18:15.653884] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:43.427 [2024-07-24 23:18:15.653894] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:43.427 [2024-07-24 23:18:15.653903] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90 00:32:43.427 [2024-07-24 23:18:15.653921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:43.427 qpair failed and we were unable to recover it. 
00:32:43.427 [2024-07-24 23:18:15.663712] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:43.427 [2024-07-24 23:18:15.663802] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:43.427 [2024-07-24 23:18:15.663821] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:43.427 [2024-07-24 23:18:15.663830] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:43.427 [2024-07-24 23:18:15.663839] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90 00:32:43.427 [2024-07-24 23:18:15.663859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:43.427 qpair failed and we were unable to recover it. 
00:32:43.427 [2024-07-24 23:18:15.673785] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:43.427 [2024-07-24 23:18:15.673875] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:43.428 [2024-07-24 23:18:15.673893] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:43.428 [2024-07-24 23:18:15.673902] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:43.428 [2024-07-24 23:18:15.673911] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90 00:32:43.428 [2024-07-24 23:18:15.673929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:43.428 qpair failed and we were unable to recover it. 
00:32:43.428 [2024-07-24 23:18:15.683825] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:43.428 [2024-07-24 23:18:15.683907] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:43.428 [2024-07-24 23:18:15.683925] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:43.428 [2024-07-24 23:18:15.683937] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:43.428 [2024-07-24 23:18:15.683946] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90 00:32:43.428 [2024-07-24 23:18:15.683965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:43.428 qpair failed and we were unable to recover it. 
00:32:43.428 [2024-07-24 23:18:15.693863] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:43.428 [2024-07-24 23:18:15.693950] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:43.428 [2024-07-24 23:18:15.693967] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:43.428 [2024-07-24 23:18:15.693976] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:43.428 [2024-07-24 23:18:15.693985] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90 00:32:43.428 [2024-07-24 23:18:15.694003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:43.428 qpair failed and we were unable to recover it. 
00:32:43.428 [2024-07-24 23:18:15.703882] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:43.428 [2024-07-24 23:18:15.703974] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:43.428 [2024-07-24 23:18:15.703992] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:43.428 [2024-07-24 23:18:15.704002] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:43.428 [2024-07-24 23:18:15.704011] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90 00:32:43.428 [2024-07-24 23:18:15.704031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:43.428 qpair failed and we were unable to recover it. 
00:32:43.428 [2024-07-24 23:18:15.713853] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:43.428 [2024-07-24 23:18:15.713936] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:43.428 [2024-07-24 23:18:15.713955] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:43.428 [2024-07-24 23:18:15.713965] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:43.428 [2024-07-24 23:18:15.713976] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90 00:32:43.428 [2024-07-24 23:18:15.713995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:43.428 qpair failed and we were unable to recover it. 
00:32:43.428 [2024-07-24 23:18:15.724046] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:43.428 [2024-07-24 23:18:15.724130] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:43.428 [2024-07-24 23:18:15.724149] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:43.428 [2024-07-24 23:18:15.724160] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:43.428 [2024-07-24 23:18:15.724168] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90 00:32:43.428 [2024-07-24 23:18:15.724188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:43.428 qpair failed and we were unable to recover it. 
00:32:43.428 [2024-07-24 23:18:15.733975] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:43.428 [2024-07-24 23:18:15.734109] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:43.428 [2024-07-24 23:18:15.734127] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:43.428 [2024-07-24 23:18:15.734137] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:43.428 [2024-07-24 23:18:15.734145] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90 00:32:43.428 [2024-07-24 23:18:15.734164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:43.428 qpair failed and we were unable to recover it. 
00:32:43.428 [2024-07-24 23:18:15.743998] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:43.428 [2024-07-24 23:18:15.744089] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:43.428 [2024-07-24 23:18:15.744107] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:43.428 [2024-07-24 23:18:15.744117] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:43.428 [2024-07-24 23:18:15.744126] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90 00:32:43.428 [2024-07-24 23:18:15.744146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:43.428 qpair failed and we were unable to recover it. 
00:32:43.428 [2024-07-24 23:18:15.754029] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:43.428 [2024-07-24 23:18:15.754112] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:43.428 [2024-07-24 23:18:15.754130] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:43.428 [2024-07-24 23:18:15.754140] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:43.428 [2024-07-24 23:18:15.754148] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90 00:32:43.428 [2024-07-24 23:18:15.754166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:43.428 qpair failed and we were unable to recover it. 
00:32:43.428 [2024-07-24 23:18:15.764078] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:43.428 [2024-07-24 23:18:15.764160] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:43.428 [2024-07-24 23:18:15.764178] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:43.428 [2024-07-24 23:18:15.764187] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:43.428 [2024-07-24 23:18:15.764196] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90 00:32:43.428 [2024-07-24 23:18:15.764214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:43.428 qpair failed and we were unable to recover it. 
00:32:43.428 [2024-07-24 23:18:15.774069] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:43.428 [2024-07-24 23:18:15.774162] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:43.428 [2024-07-24 23:18:15.774180] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:43.428 [2024-07-24 23:18:15.774193] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:43.428 [2024-07-24 23:18:15.774201] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90 00:32:43.428 [2024-07-24 23:18:15.774220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:43.428 qpair failed and we were unable to recover it. 
00:32:43.428 [2024-07-24 23:18:15.784134] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:43.428 [2024-07-24 23:18:15.784213] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:43.428 [2024-07-24 23:18:15.784229] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:43.428 [2024-07-24 23:18:15.784238] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:43.428 [2024-07-24 23:18:15.784247] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90 00:32:43.428 [2024-07-24 23:18:15.784265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:43.428 qpair failed and we were unable to recover it. 
00:32:43.428 [2024-07-24 23:18:15.794153] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:43.428 [2024-07-24 23:18:15.794240] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:43.428 [2024-07-24 23:18:15.794258] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:43.429 [2024-07-24 23:18:15.794267] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:43.429 [2024-07-24 23:18:15.794276] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90 00:32:43.429 [2024-07-24 23:18:15.794293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:43.429 qpair failed and we were unable to recover it. 
00:32:43.429 [2024-07-24 23:18:15.804187] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:43.429 [2024-07-24 23:18:15.804268] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:43.429 [2024-07-24 23:18:15.804285] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:43.429 [2024-07-24 23:18:15.804294] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:43.429 [2024-07-24 23:18:15.804302] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90 00:32:43.429 [2024-07-24 23:18:15.804320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:43.429 qpair failed and we were unable to recover it. 
00:32:43.429 [2024-07-24 23:18:15.814175] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:43.429 [2024-07-24 23:18:15.814259] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:43.429 [2024-07-24 23:18:15.814277] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:43.429 [2024-07-24 23:18:15.814286] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:43.429 [2024-07-24 23:18:15.814295] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90 00:32:43.429 [2024-07-24 23:18:15.814313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:43.429 qpair failed and we were unable to recover it. 
00:32:43.429 [2024-07-24 23:18:15.824211] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:43.429 [2024-07-24 23:18:15.824290] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:43.429 [2024-07-24 23:18:15.824308] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:43.429 [2024-07-24 23:18:15.824317] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:43.429 [2024-07-24 23:18:15.824326] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90 00:32:43.429 [2024-07-24 23:18:15.824344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:43.429 qpair failed and we were unable to recover it. 
00:32:43.429 [2024-07-24 23:18:15.834368] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:43.429 [2024-07-24 23:18:15.834462] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:43.429 [2024-07-24 23:18:15.834481] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:43.429 [2024-07-24 23:18:15.834490] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:43.429 [2024-07-24 23:18:15.834499] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90 00:32:43.429 [2024-07-24 23:18:15.834517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:43.429 qpair failed and we were unable to recover it. 
00:32:43.429 [2024-07-24 23:18:15.844356] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:43.429 [2024-07-24 23:18:15.844443] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:43.429 [2024-07-24 23:18:15.844461] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:43.429 [2024-07-24 23:18:15.844471] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:43.429 [2024-07-24 23:18:15.844479] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90 00:32:43.429 [2024-07-24 23:18:15.844498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:43.429 qpair failed and we were unable to recover it. 
00:32:43.689 [2024-07-24 23:18:15.854424] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:43.689 [2024-07-24 23:18:15.854513] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:43.689 [2024-07-24 23:18:15.854531] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:43.689 [2024-07-24 23:18:15.854540] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:43.689 [2024-07-24 23:18:15.854549] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90 00:32:43.689 [2024-07-24 23:18:15.854567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:43.689 qpair failed and we were unable to recover it. 
00:32:43.689 [2024-07-24 23:18:15.864420] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:43.689 [2024-07-24 23:18:15.864499] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:43.689 [2024-07-24 23:18:15.864522] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:43.689 [2024-07-24 23:18:15.864531] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:43.689 [2024-07-24 23:18:15.864540] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90 00:32:43.689 [2024-07-24 23:18:15.864558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:43.689 qpair failed and we were unable to recover it. 
00:32:43.689 [2024-07-24 23:18:15.874387] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:43.689 [2024-07-24 23:18:15.874474] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:43.689 [2024-07-24 23:18:15.874492] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:43.689 [2024-07-24 23:18:15.874501] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:43.689 [2024-07-24 23:18:15.874510] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90 00:32:43.689 [2024-07-24 23:18:15.874528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:43.689 qpair failed and we were unable to recover it. 
00:32:43.689 [2024-07-24 23:18:15.884445] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:43.689 [2024-07-24 23:18:15.884522] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:43.689 [2024-07-24 23:18:15.884540] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:43.689 [2024-07-24 23:18:15.884549] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:43.689 [2024-07-24 23:18:15.884558] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90 00:32:43.689 [2024-07-24 23:18:15.884576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:43.689 qpair failed and we were unable to recover it. 
00:32:43.689 [2024-07-24 23:18:15.894429] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:43.689 [2024-07-24 23:18:15.894510] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:43.689 [2024-07-24 23:18:15.894529] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:43.689 [2024-07-24 23:18:15.894538] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:43.689 [2024-07-24 23:18:15.894547] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90 00:32:43.689 [2024-07-24 23:18:15.894565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:43.689 qpair failed and we were unable to recover it. 
00:32:43.690 [2024-07-24 23:18:15.904466] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:43.690 [2024-07-24 23:18:15.904549] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:43.690 [2024-07-24 23:18:15.904567] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:43.690 [2024-07-24 23:18:15.904576] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:43.690 [2024-07-24 23:18:15.904585] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90 00:32:43.690 [2024-07-24 23:18:15.904608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:43.690 qpair failed and we were unable to recover it. 
00:32:43.690 [2024-07-24 23:18:15.914458] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:43.690 [2024-07-24 23:18:15.914536] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:43.690 [2024-07-24 23:18:15.914554] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:43.690 [2024-07-24 23:18:15.914564] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:43.690 [2024-07-24 23:18:15.914572] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90 00:32:43.690 [2024-07-24 23:18:15.914591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:43.690 qpair failed and we were unable to recover it. 
00:32:43.690 [2024-07-24 23:18:15.924519] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:43.690 [2024-07-24 23:18:15.924597] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:43.690 [2024-07-24 23:18:15.924615] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:43.690 [2024-07-24 23:18:15.924624] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:43.690 [2024-07-24 23:18:15.924633] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90 00:32:43.690 [2024-07-24 23:18:15.924651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:43.690 qpair failed and we were unable to recover it. 
00:32:43.690 [2024-07-24 23:18:15.934545] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:43.690 [2024-07-24 23:18:15.934628] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:43.690 [2024-07-24 23:18:15.934646] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:43.690 [2024-07-24 23:18:15.934656] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:43.690 [2024-07-24 23:18:15.934665] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90 00:32:43.690 [2024-07-24 23:18:15.934682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:43.690 qpair failed and we were unable to recover it. 
00:32:43.690 [2024-07-24 23:18:15.944597] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:43.690 [2024-07-24 23:18:15.944676] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:43.690 [2024-07-24 23:18:15.944694] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:43.690 [2024-07-24 23:18:15.944704] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:43.690 [2024-07-24 23:18:15.944713] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90 00:32:43.690 [2024-07-24 23:18:15.944735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:43.690 qpair failed and we were unable to recover it. 
00:32:43.690 [2024-07-24 23:18:15.954614] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:43.690 [2024-07-24 23:18:15.954688] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:43.690 [2024-07-24 23:18:15.954710] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:43.690 [2024-07-24 23:18:15.954723] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:43.690 [2024-07-24 23:18:15.954732] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90 00:32:43.690 [2024-07-24 23:18:15.954751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:43.690 qpair failed and we were unable to recover it. 
00:32:43.690 [2024-07-24 23:18:15.964638] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:43.690 [2024-07-24 23:18:15.964724] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:43.690 [2024-07-24 23:18:15.964742] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:43.690 [2024-07-24 23:18:15.964752] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:43.690 [2024-07-24 23:18:15.964760] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90 00:32:43.690 [2024-07-24 23:18:15.964778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:43.690 qpair failed and we were unable to recover it. 
00:32:43.690 [2024-07-24 23:18:15.974658] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:43.690 [2024-07-24 23:18:15.974749] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:43.690 [2024-07-24 23:18:15.974767] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:43.690 [2024-07-24 23:18:15.974776] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:43.690 [2024-07-24 23:18:15.974785] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90 00:32:43.690 [2024-07-24 23:18:15.974803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:43.690 qpair failed and we were unable to recover it. 
00:32:43.690 [2024-07-24 23:18:15.984704] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:43.690 [2024-07-24 23:18:15.984808] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:43.690 [2024-07-24 23:18:15.984827] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:43.690 [2024-07-24 23:18:15.984836] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:43.690 [2024-07-24 23:18:15.984845] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90 00:32:43.690 [2024-07-24 23:18:15.984864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:43.690 qpair failed and we were unable to recover it. 
00:32:43.690 [2024-07-24 23:18:15.994732] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:43.690 [2024-07-24 23:18:15.994814] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:43.690 [2024-07-24 23:18:15.994832] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:43.690 [2024-07-24 23:18:15.994841] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:43.690 [2024-07-24 23:18:15.994850] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90 00:32:43.690 [2024-07-24 23:18:15.994872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:43.690 qpair failed and we were unable to recover it. 
00:32:43.690 [2024-07-24 23:18:16.004767] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:43.690 [2024-07-24 23:18:16.004850] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:43.690 [2024-07-24 23:18:16.004868] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:43.690 [2024-07-24 23:18:16.004878] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:43.690 [2024-07-24 23:18:16.004887] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90 00:32:43.690 [2024-07-24 23:18:16.004905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:43.690 qpair failed and we were unable to recover it. 
00:32:43.690 [2024-07-24 23:18:16.014807] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:43.690 [2024-07-24 23:18:16.014894] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:43.690 [2024-07-24 23:18:16.014912] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:43.690 [2024-07-24 23:18:16.014922] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:43.690 [2024-07-24 23:18:16.014931] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90 00:32:43.690 [2024-07-24 23:18:16.014949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:43.690 qpair failed and we were unable to recover it. 
00:32:43.690 [2024-07-24 23:18:16.024810] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:43.690 [2024-07-24 23:18:16.024884] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:43.690 [2024-07-24 23:18:16.024902] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:43.690 [2024-07-24 23:18:16.024911] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:43.690 [2024-07-24 23:18:16.024920] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90 00:32:43.690 [2024-07-24 23:18:16.024939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:43.690 qpair failed and we were unable to recover it. 
00:32:43.690 [2024-07-24 23:18:16.034865] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:43.690 [2024-07-24 23:18:16.034943] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:43.691 [2024-07-24 23:18:16.034961] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:43.691 [2024-07-24 23:18:16.034970] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:43.691 [2024-07-24 23:18:16.034979] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90 00:32:43.691 [2024-07-24 23:18:16.034997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:43.691 qpair failed and we were unable to recover it. 
00:32:43.691 [2024-07-24 23:18:16.044879] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:43.691 [2024-07-24 23:18:16.044958] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:43.691 [2024-07-24 23:18:16.044980] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:43.691 [2024-07-24 23:18:16.044989] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:43.691 [2024-07-24 23:18:16.044998] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90 00:32:43.691 [2024-07-24 23:18:16.045018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:43.691 qpair failed and we were unable to recover it. 
00:32:43.691 [2024-07-24 23:18:16.054904] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:43.691 [2024-07-24 23:18:16.054985] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:43.691 [2024-07-24 23:18:16.055003] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:43.691 [2024-07-24 23:18:16.055013] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:43.691 [2024-07-24 23:18:16.055022] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90 00:32:43.691 [2024-07-24 23:18:16.055040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:43.691 qpair failed and we were unable to recover it. 
00:32:43.691 [2024-07-24 23:18:16.064870] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:43.691 [2024-07-24 23:18:16.065026] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:43.691 [2024-07-24 23:18:16.065044] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:43.691 [2024-07-24 23:18:16.065053] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:43.691 [2024-07-24 23:18:16.065062] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90 00:32:43.691 [2024-07-24 23:18:16.065080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:43.691 qpair failed and we were unable to recover it. 
00:32:43.691 [2024-07-24 23:18:16.074954] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:43.691 [2024-07-24 23:18:16.075036] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:43.691 [2024-07-24 23:18:16.075054] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:43.691 [2024-07-24 23:18:16.075063] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:43.691 [2024-07-24 23:18:16.075072] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90 00:32:43.691 [2024-07-24 23:18:16.075089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:43.691 qpair failed and we were unable to recover it. 
00:32:43.691 [2024-07-24 23:18:16.085009] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:43.691 [2024-07-24 23:18:16.085089] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:43.691 [2024-07-24 23:18:16.085107] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:43.691 [2024-07-24 23:18:16.085116] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:43.691 [2024-07-24 23:18:16.085128] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90 00:32:43.691 [2024-07-24 23:18:16.085146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:43.691 qpair failed and we were unable to recover it. 
00:32:43.691 [2024-07-24 23:18:16.095027] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:43.691 [2024-07-24 23:18:16.095106] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:43.691 [2024-07-24 23:18:16.095123] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:43.691 [2024-07-24 23:18:16.095132] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:43.691 [2024-07-24 23:18:16.095141] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90 00:32:43.691 [2024-07-24 23:18:16.095159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:43.691 qpair failed and we were unable to recover it. 
00:32:43.691 [2024-07-24 23:18:16.104983] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:43.691 [2024-07-24 23:18:16.105060] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:43.691 [2024-07-24 23:18:16.105078] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:43.691 [2024-07-24 23:18:16.105087] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:43.691 [2024-07-24 23:18:16.105096] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90
00:32:43.691 [2024-07-24 23:18:16.105114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:43.691 qpair failed and we were unable to recover it.
00:32:43.691 [2024-07-24 23:18:16.115102] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:43.691 [2024-07-24 23:18:16.115183] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:43.691 [2024-07-24 23:18:16.115201] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:43.691 [2024-07-24 23:18:16.115211] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:43.691 [2024-07-24 23:18:16.115220] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90
00:32:43.691 [2024-07-24 23:18:16.115239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:43.691 qpair failed and we were unable to recover it.
00:32:43.951 [2024-07-24 23:18:16.125083] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:43.951 [2024-07-24 23:18:16.125165] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:43.951 [2024-07-24 23:18:16.125184] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:43.951 [2024-07-24 23:18:16.125193] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:43.951 [2024-07-24 23:18:16.125202] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90
00:32:43.951 [2024-07-24 23:18:16.125219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:43.951 qpair failed and we were unable to recover it.
00:32:43.951 [2024-07-24 23:18:16.135112] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:43.951 [2024-07-24 23:18:16.135193] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:43.951 [2024-07-24 23:18:16.135211] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:43.951 [2024-07-24 23:18:16.135220] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:43.951 [2024-07-24 23:18:16.135229] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90
00:32:43.951 [2024-07-24 23:18:16.135247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:43.951 qpair failed and we were unable to recover it.
00:32:43.951 [2024-07-24 23:18:16.145099] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:43.951 [2024-07-24 23:18:16.145179] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:43.951 [2024-07-24 23:18:16.145197] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:43.951 [2024-07-24 23:18:16.145206] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:43.951 [2024-07-24 23:18:16.145215] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90
00:32:43.951 [2024-07-24 23:18:16.145233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:43.951 qpair failed and we were unable to recover it.
00:32:43.951 [2024-07-24 23:18:16.155194] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:43.951 [2024-07-24 23:18:16.155279] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:43.951 [2024-07-24 23:18:16.155297] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:43.951 [2024-07-24 23:18:16.155307] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:43.951 [2024-07-24 23:18:16.155315] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90
00:32:43.951 [2024-07-24 23:18:16.155333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:43.951 qpair failed and we were unable to recover it.
00:32:43.951 [2024-07-24 23:18:16.165259] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:43.951 [2024-07-24 23:18:16.165365] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:43.951 [2024-07-24 23:18:16.165383] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:43.951 [2024-07-24 23:18:16.165392] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:43.951 [2024-07-24 23:18:16.165401] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90
00:32:43.951 [2024-07-24 23:18:16.165419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:43.951 qpair failed and we were unable to recover it.
00:32:43.951 [2024-07-24 23:18:16.175203] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:43.951 [2024-07-24 23:18:16.175293] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:43.951 [2024-07-24 23:18:16.175312] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:43.951 [2024-07-24 23:18:16.175322] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:43.951 [2024-07-24 23:18:16.175335] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90
00:32:43.951 [2024-07-24 23:18:16.175354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:43.951 qpair failed and we were unable to recover it.
00:32:43.951 [2024-07-24 23:18:16.185293] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:43.951 [2024-07-24 23:18:16.185377] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:43.951 [2024-07-24 23:18:16.185395] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:43.951 [2024-07-24 23:18:16.185404] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:43.951 [2024-07-24 23:18:16.185413] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90
00:32:43.951 [2024-07-24 23:18:16.185431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:43.951 qpair failed and we were unable to recover it.
00:32:43.951 [2024-07-24 23:18:16.195297] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:43.951 [2024-07-24 23:18:16.195380] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:43.951 [2024-07-24 23:18:16.195397] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:43.951 [2024-07-24 23:18:16.195407] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:43.951 [2024-07-24 23:18:16.195415] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90
00:32:43.951 [2024-07-24 23:18:16.195433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:43.951 qpair failed and we were unable to recover it.
00:32:43.951 [2024-07-24 23:18:16.205311] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:43.951 [2024-07-24 23:18:16.205389] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:43.951 [2024-07-24 23:18:16.205407] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:43.951 [2024-07-24 23:18:16.205417] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:43.951 [2024-07-24 23:18:16.205425] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90
00:32:43.951 [2024-07-24 23:18:16.205443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:43.951 qpair failed and we were unable to recover it.
00:32:43.951 [2024-07-24 23:18:16.215377] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:43.951 [2024-07-24 23:18:16.215493] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:43.951 [2024-07-24 23:18:16.215510] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:43.951 [2024-07-24 23:18:16.215520] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:43.951 [2024-07-24 23:18:16.215528] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90
00:32:43.951 [2024-07-24 23:18:16.215546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:43.951 qpair failed and we were unable to recover it.
00:32:43.951 [2024-07-24 23:18:16.225382] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:43.951 [2024-07-24 23:18:16.225459] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:43.951 [2024-07-24 23:18:16.225477] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:43.951 [2024-07-24 23:18:16.225486] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:43.951 [2024-07-24 23:18:16.225495] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90
00:32:43.951 [2024-07-24 23:18:16.225514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:43.951 qpair failed and we were unable to recover it.
00:32:43.951 [2024-07-24 23:18:16.235437] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:43.951 [2024-07-24 23:18:16.235520] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:43.951 [2024-07-24 23:18:16.235537] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:43.952 [2024-07-24 23:18:16.235547] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:43.952 [2024-07-24 23:18:16.235556] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90
00:32:43.952 [2024-07-24 23:18:16.235574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:43.952 qpair failed and we were unable to recover it.
00:32:43.952 [2024-07-24 23:18:16.245442] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:43.952 [2024-07-24 23:18:16.245602] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:43.952 [2024-07-24 23:18:16.245620] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:43.952 [2024-07-24 23:18:16.245629] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:43.952 [2024-07-24 23:18:16.245638] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90
00:32:43.952 [2024-07-24 23:18:16.245657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:43.952 qpair failed and we were unable to recover it.
00:32:43.952 [2024-07-24 23:18:16.255490] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:43.952 [2024-07-24 23:18:16.255569] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:43.952 [2024-07-24 23:18:16.255586] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:43.952 [2024-07-24 23:18:16.255595] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:43.952 [2024-07-24 23:18:16.255604] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90
00:32:43.952 [2024-07-24 23:18:16.255622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:43.952 qpair failed and we were unable to recover it.
00:32:43.952 [2024-07-24 23:18:16.265541] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:43.952 [2024-07-24 23:18:16.265619] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:43.952 [2024-07-24 23:18:16.265637] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:43.952 [2024-07-24 23:18:16.265650] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:43.952 [2024-07-24 23:18:16.265661] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90
00:32:43.952 [2024-07-24 23:18:16.265679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:43.952 qpair failed and we were unable to recover it.
00:32:43.952 [2024-07-24 23:18:16.275590] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:43.952 [2024-07-24 23:18:16.275701] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:43.952 [2024-07-24 23:18:16.275724] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:43.952 [2024-07-24 23:18:16.275733] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:43.952 [2024-07-24 23:18:16.275742] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90
00:32:43.952 [2024-07-24 23:18:16.275760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:43.952 qpair failed and we were unable to recover it.
00:32:43.952 [2024-07-24 23:18:16.285523] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:43.952 [2024-07-24 23:18:16.285606] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:43.952 [2024-07-24 23:18:16.285624] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:43.952 [2024-07-24 23:18:16.285634] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:43.952 [2024-07-24 23:18:16.285643] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90
00:32:43.952 [2024-07-24 23:18:16.285660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:43.952 qpair failed and we were unable to recover it.
00:32:43.952 [2024-07-24 23:18:16.295596] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:43.952 [2024-07-24 23:18:16.295677] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:43.952 [2024-07-24 23:18:16.295695] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:43.952 [2024-07-24 23:18:16.295704] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:43.952 [2024-07-24 23:18:16.295713] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90
00:32:43.952 [2024-07-24 23:18:16.295736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:43.952 qpair failed and we were unable to recover it.
00:32:43.952 [2024-07-24 23:18:16.305633] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:43.952 [2024-07-24 23:18:16.305724] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:43.952 [2024-07-24 23:18:16.305743] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:43.952 [2024-07-24 23:18:16.305752] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:43.952 [2024-07-24 23:18:16.305761] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90
00:32:43.952 [2024-07-24 23:18:16.305779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:43.952 qpair failed and we were unable to recover it.
00:32:43.952 [2024-07-24 23:18:16.315656] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:43.952 [2024-07-24 23:18:16.315738] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:43.952 [2024-07-24 23:18:16.315756] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:43.952 [2024-07-24 23:18:16.315766] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:43.952 [2024-07-24 23:18:16.315775] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90
00:32:43.952 [2024-07-24 23:18:16.315793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:43.952 qpair failed and we were unable to recover it.
00:32:43.952 [2024-07-24 23:18:16.325684] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:43.952 [2024-07-24 23:18:16.325773] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:43.952 [2024-07-24 23:18:16.325791] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:43.952 [2024-07-24 23:18:16.325800] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:43.952 [2024-07-24 23:18:16.325809] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90
00:32:43.952 [2024-07-24 23:18:16.325828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:43.952 qpair failed and we were unable to recover it.
00:32:43.952 [2024-07-24 23:18:16.335707] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:43.952 [2024-07-24 23:18:16.335808] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:43.952 [2024-07-24 23:18:16.335826] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:43.952 [2024-07-24 23:18:16.335836] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:43.952 [2024-07-24 23:18:16.335845] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90
00:32:43.952 [2024-07-24 23:18:16.335862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:43.952 qpair failed and we were unable to recover it.
00:32:43.952 [2024-07-24 23:18:16.345772] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:43.952 [2024-07-24 23:18:16.345856] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:43.952 [2024-07-24 23:18:16.345874] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:43.952 [2024-07-24 23:18:16.345884] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:43.952 [2024-07-24 23:18:16.345893] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90
00:32:43.952 [2024-07-24 23:18:16.345911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:43.952 qpair failed and we were unable to recover it.
00:32:43.952 [2024-07-24 23:18:16.355759] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:43.952 [2024-07-24 23:18:16.355841] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:43.952 [2024-07-24 23:18:16.355858] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:43.952 [2024-07-24 23:18:16.355871] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:43.952 [2024-07-24 23:18:16.355880] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90
00:32:43.952 [2024-07-24 23:18:16.355898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:43.952 qpair failed and we were unable to recover it.
00:32:43.952 [2024-07-24 23:18:16.365784] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:43.952 [2024-07-24 23:18:16.365866] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:43.952 [2024-07-24 23:18:16.365884] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:43.952 [2024-07-24 23:18:16.365893] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:43.952 [2024-07-24 23:18:16.365902] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90
00:32:43.953 [2024-07-24 23:18:16.365920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:43.953 qpair failed and we were unable to recover it.
00:32:43.953 [2024-07-24 23:18:16.375827] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:43.953 [2024-07-24 23:18:16.375920] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:43.953 [2024-07-24 23:18:16.375937] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:43.953 [2024-07-24 23:18:16.375947] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:43.953 [2024-07-24 23:18:16.375956] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90
00:32:43.953 [2024-07-24 23:18:16.375974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:43.953 qpair failed and we were unable to recover it.
00:32:44.212 [2024-07-24 23:18:16.385905] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:44.212 [2024-07-24 23:18:16.385992] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:44.212 [2024-07-24 23:18:16.386010] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:44.212 [2024-07-24 23:18:16.386019] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:44.212 [2024-07-24 23:18:16.386028] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90
00:32:44.212 [2024-07-24 23:18:16.386047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:44.212 qpair failed and we were unable to recover it.
00:32:44.212 [2024-07-24 23:18:16.395907] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:44.212 [2024-07-24 23:18:16.395984] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:44.212 [2024-07-24 23:18:16.396002] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:44.212 [2024-07-24 23:18:16.396012] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:44.212 [2024-07-24 23:18:16.396020] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90
00:32:44.212 [2024-07-24 23:18:16.396039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:44.212 qpair failed and we were unable to recover it.
00:32:44.212 [2024-07-24 23:18:16.405936] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:44.212 [2024-07-24 23:18:16.406015] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:44.212 [2024-07-24 23:18:16.406032] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:44.212 [2024-07-24 23:18:16.406042] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:44.212 [2024-07-24 23:18:16.406050] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90
00:32:44.212 [2024-07-24 23:18:16.406071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:44.212 qpair failed and we were unable to recover it.
00:32:44.212 [2024-07-24 23:18:16.415948] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:44.212 [2024-07-24 23:18:16.416037] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:44.212 [2024-07-24 23:18:16.416056] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:44.212 [2024-07-24 23:18:16.416065] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:44.212 [2024-07-24 23:18:16.416074] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90
00:32:44.212 [2024-07-24 23:18:16.416093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:44.212 qpair failed and we were unable to recover it.
00:32:44.212 [2024-07-24 23:18:16.425988] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:44.212 [2024-07-24 23:18:16.426070] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:44.212 [2024-07-24 23:18:16.426088] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:44.212 [2024-07-24 23:18:16.426097] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:44.212 [2024-07-24 23:18:16.426106] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90
00:32:44.212 [2024-07-24 23:18:16.426124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:44.212 qpair failed and we were unable to recover it.
00:32:44.212 [2024-07-24 23:18:16.435939] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:44.212 [2024-07-24 23:18:16.436022] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:44.212 [2024-07-24 23:18:16.436040] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:44.212 [2024-07-24 23:18:16.436049] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:44.212 [2024-07-24 23:18:16.436058] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90
00:32:44.212 [2024-07-24 23:18:16.436076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:44.212 qpair failed and we were unable to recover it.
00:32:44.212 [2024-07-24 23:18:16.446041] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:44.212 [2024-07-24 23:18:16.446123] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:44.212 [2024-07-24 23:18:16.446144] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:44.212 [2024-07-24 23:18:16.446154] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:44.212 [2024-07-24 23:18:16.446163] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90
00:32:44.212 [2024-07-24 23:18:16.446181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:44.212 qpair failed and we were unable to recover it.
00:32:44.212 [2024-07-24 23:18:16.456060] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:44.212 [2024-07-24 23:18:16.456142] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:44.212 [2024-07-24 23:18:16.456160] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:44.212 [2024-07-24 23:18:16.456170] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:44.212 [2024-07-24 23:18:16.456178] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90
00:32:44.212 [2024-07-24 23:18:16.456197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:44.212 qpair failed and we were unable to recover it.
00:32:44.212 [2024-07-24 23:18:16.466098] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.212 [2024-07-24 23:18:16.466219] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.212 [2024-07-24 23:18:16.466237] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.212 [2024-07-24 23:18:16.466246] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.212 [2024-07-24 23:18:16.466255] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90 00:32:44.212 [2024-07-24 23:18:16.466274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:44.212 qpair failed and we were unable to recover it. 
00:32:44.212 [2024-07-24 23:18:16.476045] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.212 [2024-07-24 23:18:16.476134] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.212 [2024-07-24 23:18:16.476152] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.212 [2024-07-24 23:18:16.476161] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.212 [2024-07-24 23:18:16.476170] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90 00:32:44.212 [2024-07-24 23:18:16.476188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:44.212 qpair failed and we were unable to recover it. 
00:32:44.212 [2024-07-24 23:18:16.486153] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.212 [2024-07-24 23:18:16.486233] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.212 [2024-07-24 23:18:16.486251] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.212 [2024-07-24 23:18:16.486260] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.212 [2024-07-24 23:18:16.486269] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90 00:32:44.212 [2024-07-24 23:18:16.486290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:44.212 qpair failed and we were unable to recover it. 
00:32:44.212 [2024-07-24 23:18:16.496178] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.212 [2024-07-24 23:18:16.496265] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.212 [2024-07-24 23:18:16.496282] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.212 [2024-07-24 23:18:16.496292] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.212 [2024-07-24 23:18:16.496300] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90 00:32:44.212 [2024-07-24 23:18:16.496318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:44.212 qpair failed and we were unable to recover it. 
00:32:44.212 [2024-07-24 23:18:16.506195] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.212 [2024-07-24 23:18:16.506276] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.213 [2024-07-24 23:18:16.506294] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.213 [2024-07-24 23:18:16.506304] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.213 [2024-07-24 23:18:16.506313] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90 00:32:44.213 [2024-07-24 23:18:16.506331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:44.213 qpair failed and we were unable to recover it. 
00:32:44.213 [2024-07-24 23:18:16.516185] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.213 [2024-07-24 23:18:16.516267] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.213 [2024-07-24 23:18:16.516285] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.213 [2024-07-24 23:18:16.516295] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.213 [2024-07-24 23:18:16.516303] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90 00:32:44.213 [2024-07-24 23:18:16.516321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:44.213 qpair failed and we were unable to recover it. 
00:32:44.213 [2024-07-24 23:18:16.526270] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.213 [2024-07-24 23:18:16.526365] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.213 [2024-07-24 23:18:16.526383] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.213 [2024-07-24 23:18:16.526392] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.213 [2024-07-24 23:18:16.526401] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90 00:32:44.213 [2024-07-24 23:18:16.526418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:44.213 qpair failed and we were unable to recover it. 
00:32:44.213 [2024-07-24 23:18:16.536229] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.213 [2024-07-24 23:18:16.536308] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.213 [2024-07-24 23:18:16.536329] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.213 [2024-07-24 23:18:16.536339] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.213 [2024-07-24 23:18:16.536347] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90 00:32:44.213 [2024-07-24 23:18:16.536365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:44.213 qpair failed and we were unable to recover it. 
00:32:44.213 [2024-07-24 23:18:16.546311] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.213 [2024-07-24 23:18:16.546396] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.213 [2024-07-24 23:18:16.546415] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.213 [2024-07-24 23:18:16.546425] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.213 [2024-07-24 23:18:16.546434] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90 00:32:44.213 [2024-07-24 23:18:16.546452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:44.213 qpair failed and we were unable to recover it. 
00:32:44.213 [2024-07-24 23:18:16.556277] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.213 [2024-07-24 23:18:16.556445] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.213 [2024-07-24 23:18:16.556464] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.213 [2024-07-24 23:18:16.556473] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.213 [2024-07-24 23:18:16.556482] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90 00:32:44.213 [2024-07-24 23:18:16.556501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:44.213 qpair failed and we were unable to recover it. 
00:32:44.213 [2024-07-24 23:18:16.566307] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.213 [2024-07-24 23:18:16.566383] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.213 [2024-07-24 23:18:16.566401] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.213 [2024-07-24 23:18:16.566411] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.213 [2024-07-24 23:18:16.566419] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90 00:32:44.213 [2024-07-24 23:18:16.566437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:44.213 qpair failed and we were unable to recover it. 
00:32:44.213 [2024-07-24 23:18:16.576342] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.213 [2024-07-24 23:18:16.576423] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.213 [2024-07-24 23:18:16.576441] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.213 [2024-07-24 23:18:16.576451] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.213 [2024-07-24 23:18:16.576465] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90 00:32:44.213 [2024-07-24 23:18:16.576483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:44.213 qpair failed and we were unable to recover it. 
00:32:44.213 [2024-07-24 23:18:16.586347] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.213 [2024-07-24 23:18:16.586421] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.213 [2024-07-24 23:18:16.586439] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.213 [2024-07-24 23:18:16.586449] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.213 [2024-07-24 23:18:16.586458] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90 00:32:44.213 [2024-07-24 23:18:16.586476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:44.213 qpair failed and we were unable to recover it. 
00:32:44.213 [2024-07-24 23:18:16.596386] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.213 [2024-07-24 23:18:16.596462] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.213 [2024-07-24 23:18:16.596480] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.213 [2024-07-24 23:18:16.596490] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.213 [2024-07-24 23:18:16.596499] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90 00:32:44.213 [2024-07-24 23:18:16.596517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:44.213 qpair failed and we were unable to recover it. 
00:32:44.213 [2024-07-24 23:18:16.606495] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.213 [2024-07-24 23:18:16.606577] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.213 [2024-07-24 23:18:16.606595] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.213 [2024-07-24 23:18:16.606605] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.213 [2024-07-24 23:18:16.606613] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90 00:32:44.213 [2024-07-24 23:18:16.606631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:44.213 qpair failed and we were unable to recover it. 
00:32:44.213 [2024-07-24 23:18:16.616478] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.213 [2024-07-24 23:18:16.616559] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.213 [2024-07-24 23:18:16.616576] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.213 [2024-07-24 23:18:16.616586] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.213 [2024-07-24 23:18:16.616594] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90 00:32:44.213 [2024-07-24 23:18:16.616613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:44.213 qpair failed and we were unable to recover it. 
00:32:44.213 [2024-07-24 23:18:16.626542] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.213 [2024-07-24 23:18:16.626719] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.213 [2024-07-24 23:18:16.626737] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.213 [2024-07-24 23:18:16.626747] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.213 [2024-07-24 23:18:16.626755] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90 00:32:44.213 [2024-07-24 23:18:16.626774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:44.213 qpair failed and we were unable to recover it. 
00:32:44.213 [2024-07-24 23:18:16.636556] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.213 [2024-07-24 23:18:16.636643] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.213 [2024-07-24 23:18:16.636661] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.213 [2024-07-24 23:18:16.636671] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.213 [2024-07-24 23:18:16.636679] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90 00:32:44.214 [2024-07-24 23:18:16.636697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:44.214 qpair failed and we were unable to recover it. 
00:32:44.472 [2024-07-24 23:18:16.646707] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.473 [2024-07-24 23:18:16.646797] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.473 [2024-07-24 23:18:16.646816] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.473 [2024-07-24 23:18:16.646825] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.473 [2024-07-24 23:18:16.646833] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90 00:32:44.473 [2024-07-24 23:18:16.646852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:44.473 qpair failed and we were unable to recover it. 
00:32:44.473 [2024-07-24 23:18:16.656612] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.473 [2024-07-24 23:18:16.656694] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.473 [2024-07-24 23:18:16.656713] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.473 [2024-07-24 23:18:16.656726] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.473 [2024-07-24 23:18:16.656737] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90 00:32:44.473 [2024-07-24 23:18:16.656755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:44.473 qpair failed and we were unable to recover it. 
00:32:44.473 [2024-07-24 23:18:16.666679] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.473 [2024-07-24 23:18:16.666766] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.473 [2024-07-24 23:18:16.666784] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.473 [2024-07-24 23:18:16.666794] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.473 [2024-07-24 23:18:16.666806] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90 00:32:44.473 [2024-07-24 23:18:16.666824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:44.473 qpair failed and we were unable to recover it. 
00:32:44.473 [2024-07-24 23:18:16.676691] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.473 [2024-07-24 23:18:16.676774] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.473 [2024-07-24 23:18:16.676792] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.473 [2024-07-24 23:18:16.676802] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.473 [2024-07-24 23:18:16.676810] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90 00:32:44.473 [2024-07-24 23:18:16.676828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:44.473 qpair failed and we were unable to recover it. 
00:32:44.473 [2024-07-24 23:18:16.686712] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.473 [2024-07-24 23:18:16.686794] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.473 [2024-07-24 23:18:16.686812] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.473 [2024-07-24 23:18:16.686821] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.473 [2024-07-24 23:18:16.686830] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90 00:32:44.473 [2024-07-24 23:18:16.686848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:44.473 qpair failed and we were unable to recover it. 
00:32:44.473 [2024-07-24 23:18:16.696764] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.473 [2024-07-24 23:18:16.696848] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.473 [2024-07-24 23:18:16.696866] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.473 [2024-07-24 23:18:16.696875] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.473 [2024-07-24 23:18:16.696884] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90 00:32:44.473 [2024-07-24 23:18:16.696901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:44.473 qpair failed and we were unable to recover it. 
00:32:44.473 [2024-07-24 23:18:16.706774] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.473 [2024-07-24 23:18:16.706856] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.473 [2024-07-24 23:18:16.706874] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.473 [2024-07-24 23:18:16.706884] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.473 [2024-07-24 23:18:16.706893] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90 00:32:44.473 [2024-07-24 23:18:16.706911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:44.473 qpair failed and we were unable to recover it. 
00:32:44.473 [2024-07-24 23:18:16.716795] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.473 [2024-07-24 23:18:16.716877] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.473 [2024-07-24 23:18:16.716895] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.473 [2024-07-24 23:18:16.716904] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.473 [2024-07-24 23:18:16.716913] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90 00:32:44.473 [2024-07-24 23:18:16.716933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:44.473 qpair failed and we were unable to recover it. 
00:32:44.473 [2024-07-24 23:18:16.726765] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.473 [2024-07-24 23:18:16.726845] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.473 [2024-07-24 23:18:16.726863] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.473 [2024-07-24 23:18:16.726872] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.473 [2024-07-24 23:18:16.726881] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90 00:32:44.473 [2024-07-24 23:18:16.726898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:44.473 qpair failed and we were unable to recover it. 
00:32:44.473 [2024-07-24 23:18:16.736871] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.473 [2024-07-24 23:18:16.736952] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.473 [2024-07-24 23:18:16.736970] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.473 [2024-07-24 23:18:16.736979] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.473 [2024-07-24 23:18:16.736988] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90 00:32:44.473 [2024-07-24 23:18:16.737006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:44.473 qpair failed and we were unable to recover it. 
00:32:44.473 [2024-07-24 23:18:16.746819] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.473 [2024-07-24 23:18:16.746925] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.473 [2024-07-24 23:18:16.746943] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.473 [2024-07-24 23:18:16.746953] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.473 [2024-07-24 23:18:16.746962] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90 00:32:44.473 [2024-07-24 23:18:16.746980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:44.473 qpair failed and we were unable to recover it. 
00:32:44.473 [2024-07-24 23:18:16.756933] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.473 [2024-07-24 23:18:16.757008] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.473 [2024-07-24 23:18:16.757026] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.473 [2024-07-24 23:18:16.757039] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.473 [2024-07-24 23:18:16.757048] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8e8000b90 00:32:44.473 [2024-07-24 23:18:16.757067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:44.473 qpair failed and we were unable to recover it. 
00:32:44.473 [2024-07-24 23:18:16.766957] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.473 [2024-07-24 23:18:16.767132] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.473 [2024-07-24 23:18:16.767165] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.473 [2024-07-24 23:18:16.767181] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.473 [2024-07-24 23:18:16.767195] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:44.473 [2024-07-24 23:18:16.767222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:44.473 qpair failed and we were unable to recover it. 
00:32:44.473 [2024-07-24 23:18:16.776949] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.473 [2024-07-24 23:18:16.777031] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.474 [2024-07-24 23:18:16.777051] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.474 [2024-07-24 23:18:16.777061] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.474 [2024-07-24 23:18:16.777071] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:44.474 [2024-07-24 23:18:16.777089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:44.474 qpair failed and we were unable to recover it. 
00:32:44.474 [2024-07-24 23:18:16.786964] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.474 [2024-07-24 23:18:16.787049] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.474 [2024-07-24 23:18:16.787068] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.474 [2024-07-24 23:18:16.787078] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.474 [2024-07-24 23:18:16.787087] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:44.474 [2024-07-24 23:18:16.787105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:44.474 qpair failed and we were unable to recover it. 
00:32:44.474 [2024-07-24 23:18:16.797007] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.474 [2024-07-24 23:18:16.797089] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.474 [2024-07-24 23:18:16.797109] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.474 [2024-07-24 23:18:16.797118] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.474 [2024-07-24 23:18:16.797127] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:44.474 [2024-07-24 23:18:16.797145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:44.474 qpair failed and we were unable to recover it. 
00:32:44.474 [2024-07-24 23:18:16.807059] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.474 [2024-07-24 23:18:16.807149] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.474 [2024-07-24 23:18:16.807169] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.474 [2024-07-24 23:18:16.807178] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.474 [2024-07-24 23:18:16.807188] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:44.474 [2024-07-24 23:18:16.807205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:44.474 qpair failed and we were unable to recover it. 
00:32:44.474 [2024-07-24 23:18:16.817095] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.474 [2024-07-24 23:18:16.817180] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.474 [2024-07-24 23:18:16.817200] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.474 [2024-07-24 23:18:16.817210] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.474 [2024-07-24 23:18:16.817220] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:44.474 [2024-07-24 23:18:16.817237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:44.474 qpair failed and we were unable to recover it. 
00:32:44.474 [2024-07-24 23:18:16.827134] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.474 [2024-07-24 23:18:16.827216] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.474 [2024-07-24 23:18:16.827235] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.474 [2024-07-24 23:18:16.827245] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.474 [2024-07-24 23:18:16.827254] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:44.474 [2024-07-24 23:18:16.827272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:44.474 qpair failed and we were unable to recover it. 
00:32:44.474 [2024-07-24 23:18:16.837174] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.474 [2024-07-24 23:18:16.837253] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.474 [2024-07-24 23:18:16.837272] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.474 [2024-07-24 23:18:16.837282] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.474 [2024-07-24 23:18:16.837291] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:44.474 [2024-07-24 23:18:16.837308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:44.474 qpair failed and we were unable to recover it. 
00:32:44.474 [2024-07-24 23:18:16.847141] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.474 [2024-07-24 23:18:16.847221] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.474 [2024-07-24 23:18:16.847241] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.474 [2024-07-24 23:18:16.847254] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.474 [2024-07-24 23:18:16.847265] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:44.474 [2024-07-24 23:18:16.847283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:44.474 qpair failed and we were unable to recover it. 
00:32:44.474 [2024-07-24 23:18:16.857219] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.474 [2024-07-24 23:18:16.857339] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.474 [2024-07-24 23:18:16.857359] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.474 [2024-07-24 23:18:16.857369] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.474 [2024-07-24 23:18:16.857378] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:44.474 [2024-07-24 23:18:16.857396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:44.474 qpair failed and we were unable to recover it. 
00:32:44.474 [2024-07-24 23:18:16.867253] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.474 [2024-07-24 23:18:16.867337] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.474 [2024-07-24 23:18:16.867357] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.474 [2024-07-24 23:18:16.867367] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.474 [2024-07-24 23:18:16.867376] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:44.474 [2024-07-24 23:18:16.867392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:44.474 qpair failed and we were unable to recover it. 
00:32:44.474 [2024-07-24 23:18:16.877289] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.474 [2024-07-24 23:18:16.877371] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.474 [2024-07-24 23:18:16.877390] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.474 [2024-07-24 23:18:16.877399] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.474 [2024-07-24 23:18:16.877408] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:44.474 [2024-07-24 23:18:16.877425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:44.474 qpair failed and we were unable to recover it. 
00:32:44.474 [2024-07-24 23:18:16.887298] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.474 [2024-07-24 23:18:16.887385] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.474 [2024-07-24 23:18:16.887404] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.474 [2024-07-24 23:18:16.887414] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.474 [2024-07-24 23:18:16.887423] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:44.474 [2024-07-24 23:18:16.887440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:44.474 qpair failed and we were unable to recover it. 
00:32:44.474 [2024-07-24 23:18:16.897327] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.474 [2024-07-24 23:18:16.897487] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.474 [2024-07-24 23:18:16.897506] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.474 [2024-07-24 23:18:16.897516] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.474 [2024-07-24 23:18:16.897525] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:44.474 [2024-07-24 23:18:16.897542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:44.474 qpair failed and we were unable to recover it. 
00:32:44.734 [2024-07-24 23:18:16.907372] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.734 [2024-07-24 23:18:16.907456] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.734 [2024-07-24 23:18:16.907475] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.734 [2024-07-24 23:18:16.907484] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.734 [2024-07-24 23:18:16.907494] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:44.734 [2024-07-24 23:18:16.907511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:44.734 qpair failed and we were unable to recover it. 
00:32:44.734 [2024-07-24 23:18:16.917407] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.734 [2024-07-24 23:18:16.917489] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.734 [2024-07-24 23:18:16.917508] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.734 [2024-07-24 23:18:16.917518] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.734 [2024-07-24 23:18:16.917527] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:44.734 [2024-07-24 23:18:16.917544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:44.734 qpair failed and we were unable to recover it. 
00:32:44.734 [2024-07-24 23:18:16.927417] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.734 [2024-07-24 23:18:16.927497] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.734 [2024-07-24 23:18:16.927516] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.734 [2024-07-24 23:18:16.927526] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.734 [2024-07-24 23:18:16.927536] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:44.734 [2024-07-24 23:18:16.927552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:44.734 qpair failed and we were unable to recover it. 
00:32:44.734 [2024-07-24 23:18:16.937468] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.734 [2024-07-24 23:18:16.937576] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.734 [2024-07-24 23:18:16.937595] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.734 [2024-07-24 23:18:16.937608] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.734 [2024-07-24 23:18:16.937618] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:44.734 [2024-07-24 23:18:16.937635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:44.734 qpair failed and we were unable to recover it. 
00:32:44.734 [2024-07-24 23:18:16.947483] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.734 [2024-07-24 23:18:16.947566] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.734 [2024-07-24 23:18:16.947585] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.734 [2024-07-24 23:18:16.947595] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.734 [2024-07-24 23:18:16.947604] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:44.734 [2024-07-24 23:18:16.947622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:44.734 qpair failed and we were unable to recover it. 
00:32:44.734 [2024-07-24 23:18:16.957450] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.734 [2024-07-24 23:18:16.957543] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.734 [2024-07-24 23:18:16.957562] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.734 [2024-07-24 23:18:16.957571] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.734 [2024-07-24 23:18:16.957581] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:44.734 [2024-07-24 23:18:16.957598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:44.734 qpair failed and we were unable to recover it. 
00:32:44.734 [2024-07-24 23:18:16.967550] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.734 [2024-07-24 23:18:16.967629] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.734 [2024-07-24 23:18:16.967647] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.734 [2024-07-24 23:18:16.967657] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.734 [2024-07-24 23:18:16.967666] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:44.734 [2024-07-24 23:18:16.967683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:44.734 qpair failed and we were unable to recover it. 
00:32:44.734 [2024-07-24 23:18:16.977579] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.734 [2024-07-24 23:18:16.977655] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.734 [2024-07-24 23:18:16.977674] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.734 [2024-07-24 23:18:16.977683] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.734 [2024-07-24 23:18:16.977693] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:44.734 [2024-07-24 23:18:16.977709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:44.734 qpair failed and we were unable to recover it. 
00:32:44.734 [2024-07-24 23:18:16.987602] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.734 [2024-07-24 23:18:16.987685] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.734 [2024-07-24 23:18:16.987704] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.734 [2024-07-24 23:18:16.987718] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.734 [2024-07-24 23:18:16.987727] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:44.734 [2024-07-24 23:18:16.987744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:44.734 qpair failed and we were unable to recover it. 
00:32:44.734 [2024-07-24 23:18:16.997634] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.734 [2024-07-24 23:18:16.997712] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.734 [2024-07-24 23:18:16.997734] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.734 [2024-07-24 23:18:16.997744] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.734 [2024-07-24 23:18:16.997752] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:44.734 [2024-07-24 23:18:16.997769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:44.734 qpair failed and we were unable to recover it. 
00:32:44.734 [2024-07-24 23:18:17.007656] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.734 [2024-07-24 23:18:17.007741] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.734 [2024-07-24 23:18:17.007760] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.734 [2024-07-24 23:18:17.007770] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.734 [2024-07-24 23:18:17.007779] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:44.734 [2024-07-24 23:18:17.007796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:44.734 qpair failed and we were unable to recover it. 
00:32:44.734 [2024-07-24 23:18:17.017695] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.734 [2024-07-24 23:18:17.017778] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.734 [2024-07-24 23:18:17.017797] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.734 [2024-07-24 23:18:17.017807] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.734 [2024-07-24 23:18:17.017815] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:44.734 [2024-07-24 23:18:17.017832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:44.734 qpair failed and we were unable to recover it. 
00:32:44.734 [2024-07-24 23:18:17.027748] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.734 [2024-07-24 23:18:17.027833] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.734 [2024-07-24 23:18:17.027851] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.734 [2024-07-24 23:18:17.027864] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.734 [2024-07-24 23:18:17.027873] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:44.734 [2024-07-24 23:18:17.027891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:44.734 qpair failed and we were unable to recover it. 
00:32:44.734 [2024-07-24 23:18:17.037765] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.734 [2024-07-24 23:18:17.037843] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.734 [2024-07-24 23:18:17.037862] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.734 [2024-07-24 23:18:17.037871] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.734 [2024-07-24 23:18:17.037881] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:44.734 [2024-07-24 23:18:17.037898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:44.734 qpair failed and we were unable to recover it. 
00:32:44.734 [2024-07-24 23:18:17.047796] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.734 [2024-07-24 23:18:17.047874] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.734 [2024-07-24 23:18:17.047893] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.734 [2024-07-24 23:18:17.047903] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.734 [2024-07-24 23:18:17.047913] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:44.734 [2024-07-24 23:18:17.047930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:44.734 qpair failed and we were unable to recover it. 
00:32:44.734 [2024-07-24 23:18:17.057846] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.734 [2024-07-24 23:18:17.057954] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.734 [2024-07-24 23:18:17.057972] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.734 [2024-07-24 23:18:17.057981] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.734 [2024-07-24 23:18:17.057990] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:44.734 [2024-07-24 23:18:17.058007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:44.734 qpair failed and we were unable to recover it. 
00:32:44.734 [2024-07-24 23:18:17.067826] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.734 [2024-07-24 23:18:17.067907] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.734 [2024-07-24 23:18:17.067926] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.734 [2024-07-24 23:18:17.067935] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.734 [2024-07-24 23:18:17.067944] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:44.734 [2024-07-24 23:18:17.067961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:44.734 qpair failed and we were unable to recover it. 
00:32:44.734 [2024-07-24 23:18:17.077846] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.734 [2024-07-24 23:18:17.077931] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.734 [2024-07-24 23:18:17.077949] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.734 [2024-07-24 23:18:17.077959] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.734 [2024-07-24 23:18:17.077968] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:44.734 [2024-07-24 23:18:17.077985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:44.734 qpair failed and we were unable to recover it. 
00:32:44.734 [2024-07-24 23:18:17.087878] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.734 [2024-07-24 23:18:17.088036] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.734 [2024-07-24 23:18:17.088055] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.734 [2024-07-24 23:18:17.088065] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.734 [2024-07-24 23:18:17.088075] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:44.734 [2024-07-24 23:18:17.088093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:44.734 qpair failed and we were unable to recover it. 
00:32:44.734 [2024-07-24 23:18:17.097887] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.734 [2024-07-24 23:18:17.098011] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.734 [2024-07-24 23:18:17.098030] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.734 [2024-07-24 23:18:17.098040] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.734 [2024-07-24 23:18:17.098049] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:44.734 [2024-07-24 23:18:17.098066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:44.734 qpair failed and we were unable to recover it. 
00:32:44.734 [2024-07-24 23:18:17.107947] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.734 [2024-07-24 23:18:17.108030] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.734 [2024-07-24 23:18:17.108049] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.734 [2024-07-24 23:18:17.108059] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.734 [2024-07-24 23:18:17.108069] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:44.734 [2024-07-24 23:18:17.108086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:44.734 qpair failed and we were unable to recover it. 
00:32:44.734 [2024-07-24 23:18:17.117933] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.734 [2024-07-24 23:18:17.118014] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.734 [2024-07-24 23:18:17.118037] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.734 [2024-07-24 23:18:17.118047] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.734 [2024-07-24 23:18:17.118056] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:44.734 [2024-07-24 23:18:17.118073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:44.734 qpair failed and we were unable to recover it. 
00:32:44.734 [2024-07-24 23:18:17.128009] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.734 [2024-07-24 23:18:17.128089] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.734 [2024-07-24 23:18:17.128108] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.735 [2024-07-24 23:18:17.128118] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.735 [2024-07-24 23:18:17.128128] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:44.735 [2024-07-24 23:18:17.128145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:44.735 qpair failed and we were unable to recover it. 
00:32:44.735 [2024-07-24 23:18:17.138043] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.735 [2024-07-24 23:18:17.138154] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.735 [2024-07-24 23:18:17.138172] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.735 [2024-07-24 23:18:17.138182] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.735 [2024-07-24 23:18:17.138192] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:44.735 [2024-07-24 23:18:17.138208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:44.735 qpair failed and we were unable to recover it. 
00:32:44.735 [2024-07-24 23:18:17.148048] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.735 [2024-07-24 23:18:17.148211] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.735 [2024-07-24 23:18:17.148229] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.735 [2024-07-24 23:18:17.148239] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.735 [2024-07-24 23:18:17.148249] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:44.735 [2024-07-24 23:18:17.148266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:44.735 qpair failed and we were unable to recover it. 
00:32:44.735 [2024-07-24 23:18:17.158092] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.735 [2024-07-24 23:18:17.158212] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.735 [2024-07-24 23:18:17.158230] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.735 [2024-07-24 23:18:17.158239] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.735 [2024-07-24 23:18:17.158248] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:44.735 [2024-07-24 23:18:17.158265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:44.735 qpair failed and we were unable to recover it. 
00:32:44.994 [2024-07-24 23:18:17.168116] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.994 [2024-07-24 23:18:17.168195] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.994 [2024-07-24 23:18:17.168214] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.994 [2024-07-24 23:18:17.168224] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.994 [2024-07-24 23:18:17.168233] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:44.994 [2024-07-24 23:18:17.168250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:44.994 qpair failed and we were unable to recover it. 
00:32:44.994 [2024-07-24 23:18:17.178159] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.994 [2024-07-24 23:18:17.178239] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.994 [2024-07-24 23:18:17.178257] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.994 [2024-07-24 23:18:17.178266] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.994 [2024-07-24 23:18:17.178275] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:44.994 [2024-07-24 23:18:17.178292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:44.994 qpair failed and we were unable to recover it. 
00:32:44.994 [2024-07-24 23:18:17.188177] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.994 [2024-07-24 23:18:17.188262] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.994 [2024-07-24 23:18:17.188281] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.994 [2024-07-24 23:18:17.188290] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.994 [2024-07-24 23:18:17.188300] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:44.994 [2024-07-24 23:18:17.188317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:44.994 qpair failed and we were unable to recover it. 
00:32:44.994 [2024-07-24 23:18:17.198246] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.994 [2024-07-24 23:18:17.198358] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.994 [2024-07-24 23:18:17.198377] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.994 [2024-07-24 23:18:17.198387] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.994 [2024-07-24 23:18:17.198396] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:44.994 [2024-07-24 23:18:17.198413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:44.994 qpair failed and we were unable to recover it. 
00:32:44.994 [2024-07-24 23:18:17.208209] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.994 [2024-07-24 23:18:17.208285] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.994 [2024-07-24 23:18:17.208306] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.994 [2024-07-24 23:18:17.208315] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.994 [2024-07-24 23:18:17.208325] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:44.994 [2024-07-24 23:18:17.208341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:44.994 qpair failed and we were unable to recover it. 
00:32:44.994 [2024-07-24 23:18:17.218247] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.994 [2024-07-24 23:18:17.218413] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.994 [2024-07-24 23:18:17.218431] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.994 [2024-07-24 23:18:17.218441] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.994 [2024-07-24 23:18:17.218450] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:44.994 [2024-07-24 23:18:17.218467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:44.994 qpair failed and we were unable to recover it. 
00:32:44.994 [2024-07-24 23:18:17.228276] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.994 [2024-07-24 23:18:17.228363] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.994 [2024-07-24 23:18:17.228381] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.994 [2024-07-24 23:18:17.228391] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.994 [2024-07-24 23:18:17.228401] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:44.994 [2024-07-24 23:18:17.228418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:44.994 qpair failed and we were unable to recover it. 
00:32:44.994 [2024-07-24 23:18:17.238310] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.994 [2024-07-24 23:18:17.238474] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.994 [2024-07-24 23:18:17.238492] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.994 [2024-07-24 23:18:17.238502] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.994 [2024-07-24 23:18:17.238510] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:44.994 [2024-07-24 23:18:17.238527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:44.994 qpair failed and we were unable to recover it. 
00:32:44.994 [2024-07-24 23:18:17.248351] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.994 [2024-07-24 23:18:17.248426] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.994 [2024-07-24 23:18:17.248445] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.994 [2024-07-24 23:18:17.248455] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.994 [2024-07-24 23:18:17.248464] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:44.994 [2024-07-24 23:18:17.248484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:44.994 qpair failed and we were unable to recover it. 
00:32:44.994 [2024-07-24 23:18:17.258376] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.994 [2024-07-24 23:18:17.258457] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.994 [2024-07-24 23:18:17.258476] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.994 [2024-07-24 23:18:17.258486] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.994 [2024-07-24 23:18:17.258495] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:44.994 [2024-07-24 23:18:17.258512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:44.994 qpair failed and we were unable to recover it. 
00:32:44.994 [2024-07-24 23:18:17.268493] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.994 [2024-07-24 23:18:17.268577] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.994 [2024-07-24 23:18:17.268596] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.995 [2024-07-24 23:18:17.268605] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.995 [2024-07-24 23:18:17.268615] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:44.995 [2024-07-24 23:18:17.268632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:44.995 qpair failed and we were unable to recover it. 
00:32:44.995 [2024-07-24 23:18:17.278408] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.995 [2024-07-24 23:18:17.278572] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.995 [2024-07-24 23:18:17.278591] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.995 [2024-07-24 23:18:17.278600] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.995 [2024-07-24 23:18:17.278610] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:44.995 [2024-07-24 23:18:17.278628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:44.995 qpair failed and we were unable to recover it. 
00:32:44.995 [2024-07-24 23:18:17.288458] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.995 [2024-07-24 23:18:17.288572] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.995 [2024-07-24 23:18:17.288591] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.995 [2024-07-24 23:18:17.288600] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.995 [2024-07-24 23:18:17.288609] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:44.995 [2024-07-24 23:18:17.288627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:44.995 qpair failed and we were unable to recover it. 
00:32:44.995 [2024-07-24 23:18:17.298525] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.995 [2024-07-24 23:18:17.298604] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.995 [2024-07-24 23:18:17.298626] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.995 [2024-07-24 23:18:17.298636] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.995 [2024-07-24 23:18:17.298645] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:44.995 [2024-07-24 23:18:17.298661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:44.995 qpair failed and we were unable to recover it. 
00:32:44.995 [2024-07-24 23:18:17.308526] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.995 [2024-07-24 23:18:17.308659] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.995 [2024-07-24 23:18:17.308678] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.995 [2024-07-24 23:18:17.308688] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.995 [2024-07-24 23:18:17.308697] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:44.995 [2024-07-24 23:18:17.308718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:44.995 qpair failed and we were unable to recover it. 
00:32:44.995 [2024-07-24 23:18:17.318552] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.995 [2024-07-24 23:18:17.318638] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.995 [2024-07-24 23:18:17.318657] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.995 [2024-07-24 23:18:17.318667] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.995 [2024-07-24 23:18:17.318676] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:44.995 [2024-07-24 23:18:17.318694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:44.995 qpair failed and we were unable to recover it. 
00:32:44.995 [2024-07-24 23:18:17.328489] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.995 [2024-07-24 23:18:17.328571] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.995 [2024-07-24 23:18:17.328590] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.995 [2024-07-24 23:18:17.328600] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.995 [2024-07-24 23:18:17.328609] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:44.995 [2024-07-24 23:18:17.328626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:44.995 qpair failed and we were unable to recover it. 
00:32:44.995 [2024-07-24 23:18:17.338604] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.995 [2024-07-24 23:18:17.338684] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.995 [2024-07-24 23:18:17.338703] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.995 [2024-07-24 23:18:17.338712] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.995 [2024-07-24 23:18:17.338725] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:44.995 [2024-07-24 23:18:17.338744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:44.995 qpair failed and we were unable to recover it. 
00:32:44.995 [2024-07-24 23:18:17.348630] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.995 [2024-07-24 23:18:17.348713] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.995 [2024-07-24 23:18:17.348736] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.995 [2024-07-24 23:18:17.348746] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.995 [2024-07-24 23:18:17.348755] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:44.995 [2024-07-24 23:18:17.348772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:44.995 qpair failed and we were unable to recover it. 
00:32:44.995 [2024-07-24 23:18:17.358660] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.995 [2024-07-24 23:18:17.358742] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.995 [2024-07-24 23:18:17.358762] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.995 [2024-07-24 23:18:17.358771] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.995 [2024-07-24 23:18:17.358780] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:44.995 [2024-07-24 23:18:17.358797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:44.995 qpair failed and we were unable to recover it. 
00:32:44.995 [2024-07-24 23:18:17.368623] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.995 [2024-07-24 23:18:17.368711] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.995 [2024-07-24 23:18:17.368733] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.995 [2024-07-24 23:18:17.368743] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.995 [2024-07-24 23:18:17.368753] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:44.995 [2024-07-24 23:18:17.368770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:44.995 qpair failed and we were unable to recover it. 
00:32:44.995 [2024-07-24 23:18:17.378697] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.995 [2024-07-24 23:18:17.378786] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.995 [2024-07-24 23:18:17.378805] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.995 [2024-07-24 23:18:17.378815] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.995 [2024-07-24 23:18:17.378824] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:44.995 [2024-07-24 23:18:17.378842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:44.995 qpair failed and we were unable to recover it. 
00:32:44.995 [2024-07-24 23:18:17.388710] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:44.995 [2024-07-24 23:18:17.388795] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:44.995 [2024-07-24 23:18:17.388817] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:44.995 [2024-07-24 23:18:17.388827] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:44.995 [2024-07-24 23:18:17.388836] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:44.995 [2024-07-24 23:18:17.388854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:44.995 qpair failed and we were unable to recover it.
00:32:44.995 [2024-07-24 23:18:17.398775] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:44.995 [2024-07-24 23:18:17.398854] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:44.995 [2024-07-24 23:18:17.398873] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:44.995 [2024-07-24 23:18:17.398882] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:44.995 [2024-07-24 23:18:17.398892] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:44.995 [2024-07-24 23:18:17.398909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:44.996 qpair failed and we were unable to recover it.
00:32:44.996 [2024-07-24 23:18:17.408802] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:44.996 [2024-07-24 23:18:17.408885] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:44.996 [2024-07-24 23:18:17.408904] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:44.996 [2024-07-24 23:18:17.408914] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:44.996 [2024-07-24 23:18:17.408924] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:44.996 [2024-07-24 23:18:17.408941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:44.996 qpair failed and we were unable to recover it.
00:32:44.996 [2024-07-24 23:18:17.418830] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:44.996 [2024-07-24 23:18:17.418908] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:44.996 [2024-07-24 23:18:17.418926] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:44.996 [2024-07-24 23:18:17.418936] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:44.996 [2024-07-24 23:18:17.418946] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:44.996 [2024-07-24 23:18:17.418963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:44.996 qpair failed and we were unable to recover it.
00:32:45.255 [2024-07-24 23:18:17.428854] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:45.255 [2024-07-24 23:18:17.428966] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:45.255 [2024-07-24 23:18:17.428985] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:45.255 [2024-07-24 23:18:17.428994] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:45.255 [2024-07-24 23:18:17.429004] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:45.255 [2024-07-24 23:18:17.429024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:45.255 qpair failed and we were unable to recover it.
00:32:45.255 [2024-07-24 23:18:17.438897] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:45.255 [2024-07-24 23:18:17.439022] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:45.255 [2024-07-24 23:18:17.439041] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:45.255 [2024-07-24 23:18:17.439050] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:45.255 [2024-07-24 23:18:17.439060] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:45.255 [2024-07-24 23:18:17.439076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:45.255 qpair failed and we were unable to recover it.
00:32:45.255 [2024-07-24 23:18:17.448902] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:45.255 [2024-07-24 23:18:17.448978] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:45.255 [2024-07-24 23:18:17.448996] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:45.255 [2024-07-24 23:18:17.449006] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:45.255 [2024-07-24 23:18:17.449015] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:45.255 [2024-07-24 23:18:17.449032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:45.255 qpair failed and we were unable to recover it.
00:32:45.255 [2024-07-24 23:18:17.458955] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:45.255 [2024-07-24 23:18:17.459033] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:45.255 [2024-07-24 23:18:17.459052] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:45.255 [2024-07-24 23:18:17.459062] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:45.255 [2024-07-24 23:18:17.459071] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:45.255 [2024-07-24 23:18:17.459089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:45.255 qpair failed and we were unable to recover it.
00:32:45.255 [2024-07-24 23:18:17.468990] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:45.255 [2024-07-24 23:18:17.469068] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:45.255 [2024-07-24 23:18:17.469087] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:45.255 [2024-07-24 23:18:17.469096] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:45.255 [2024-07-24 23:18:17.469105] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:45.255 [2024-07-24 23:18:17.469122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:45.255 qpair failed and we were unable to recover it.
00:32:45.255 [2024-07-24 23:18:17.478995] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:45.255 [2024-07-24 23:18:17.479081] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:45.255 [2024-07-24 23:18:17.479104] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:45.255 [2024-07-24 23:18:17.479114] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:45.255 [2024-07-24 23:18:17.479122] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:45.255 [2024-07-24 23:18:17.479139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:45.255 qpair failed and we were unable to recover it.
00:32:45.255 [2024-07-24 23:18:17.489012] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:45.255 [2024-07-24 23:18:17.489094] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:45.255 [2024-07-24 23:18:17.489112] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:45.255 [2024-07-24 23:18:17.489122] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:45.255 [2024-07-24 23:18:17.489131] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:45.255 [2024-07-24 23:18:17.489148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:45.255 qpair failed and we were unable to recover it.
00:32:45.255 [2024-07-24 23:18:17.499043] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:45.255 [2024-07-24 23:18:17.499121] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:45.255 [2024-07-24 23:18:17.499140] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:45.255 [2024-07-24 23:18:17.499150] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:45.255 [2024-07-24 23:18:17.499159] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:45.255 [2024-07-24 23:18:17.499175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:45.255 qpair failed and we were unable to recover it.
00:32:45.255 [2024-07-24 23:18:17.509091] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:45.255 [2024-07-24 23:18:17.509170] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:45.255 [2024-07-24 23:18:17.509188] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:45.255 [2024-07-24 23:18:17.509198] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:45.255 [2024-07-24 23:18:17.509207] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:45.255 [2024-07-24 23:18:17.509224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:45.255 qpair failed and we were unable to recover it.
00:32:45.255 [2024-07-24 23:18:17.519121] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:45.255 [2024-07-24 23:18:17.519197] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:45.255 [2024-07-24 23:18:17.519216] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:45.255 [2024-07-24 23:18:17.519225] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:45.255 [2024-07-24 23:18:17.519237] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:45.255 [2024-07-24 23:18:17.519254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:45.255 qpair failed and we were unable to recover it.
00:32:45.255 [2024-07-24 23:18:17.529130] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:45.255 [2024-07-24 23:18:17.529211] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:45.255 [2024-07-24 23:18:17.529230] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:45.255 [2024-07-24 23:18:17.529240] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:45.255 [2024-07-24 23:18:17.529249] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:45.255 [2024-07-24 23:18:17.529266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:45.255 qpair failed and we were unable to recover it.
00:32:45.255 [2024-07-24 23:18:17.539176] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:45.255 [2024-07-24 23:18:17.539256] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:45.255 [2024-07-24 23:18:17.539274] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:45.255 [2024-07-24 23:18:17.539284] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:45.255 [2024-07-24 23:18:17.539292] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:45.255 [2024-07-24 23:18:17.539308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:45.255 qpair failed and we were unable to recover it.
00:32:45.255 [2024-07-24 23:18:17.549183] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:45.255 [2024-07-24 23:18:17.549263] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:45.255 [2024-07-24 23:18:17.549283] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:45.255 [2024-07-24 23:18:17.549292] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:45.255 [2024-07-24 23:18:17.549301] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:45.256 [2024-07-24 23:18:17.549318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:45.256 qpair failed and we were unable to recover it.
00:32:45.256 [2024-07-24 23:18:17.559215] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:45.256 [2024-07-24 23:18:17.559297] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:45.256 [2024-07-24 23:18:17.559315] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:45.256 [2024-07-24 23:18:17.559325] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:45.256 [2024-07-24 23:18:17.559334] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:45.256 [2024-07-24 23:18:17.559351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:45.256 qpair failed and we were unable to recover it.
00:32:45.256 [2024-07-24 23:18:17.569244] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:45.256 [2024-07-24 23:18:17.569320] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:45.256 [2024-07-24 23:18:17.569342] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:45.256 [2024-07-24 23:18:17.569352] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:45.256 [2024-07-24 23:18:17.569361] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:45.256 [2024-07-24 23:18:17.569377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:45.256 qpair failed and we were unable to recover it.
00:32:45.256 [2024-07-24 23:18:17.579278] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:45.256 [2024-07-24 23:18:17.579357] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:45.256 [2024-07-24 23:18:17.579375] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:45.256 [2024-07-24 23:18:17.579385] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:45.256 [2024-07-24 23:18:17.579394] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:45.256 [2024-07-24 23:18:17.579410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:45.256 qpair failed and we were unable to recover it.
00:32:45.256 [2024-07-24 23:18:17.589330] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:45.256 [2024-07-24 23:18:17.589414] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:45.256 [2024-07-24 23:18:17.589432] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:45.256 [2024-07-24 23:18:17.589442] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:45.256 [2024-07-24 23:18:17.589450] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:45.256 [2024-07-24 23:18:17.589467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:45.256 qpair failed and we were unable to recover it.
00:32:45.256 [2024-07-24 23:18:17.599362] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:45.256 [2024-07-24 23:18:17.599444] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:45.256 [2024-07-24 23:18:17.599462] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:45.256 [2024-07-24 23:18:17.599472] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:45.256 [2024-07-24 23:18:17.599481] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:45.256 [2024-07-24 23:18:17.599498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:45.256 qpair failed and we were unable to recover it.
00:32:45.256 [2024-07-24 23:18:17.609380] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:45.256 [2024-07-24 23:18:17.609461] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:45.256 [2024-07-24 23:18:17.609479] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:45.256 [2024-07-24 23:18:17.609489] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:45.256 [2024-07-24 23:18:17.609501] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:45.256 [2024-07-24 23:18:17.609518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:45.256 qpair failed and we were unable to recover it.
00:32:45.256 [2024-07-24 23:18:17.619400] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:45.256 [2024-07-24 23:18:17.619481] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:45.256 [2024-07-24 23:18:17.619499] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:45.256 [2024-07-24 23:18:17.619509] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:45.256 [2024-07-24 23:18:17.619518] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:45.256 [2024-07-24 23:18:17.619534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:45.256 qpair failed and we were unable to recover it.
00:32:45.256 [2024-07-24 23:18:17.629442] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:45.256 [2024-07-24 23:18:17.629519] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:45.256 [2024-07-24 23:18:17.629537] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:45.256 [2024-07-24 23:18:17.629547] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:45.256 [2024-07-24 23:18:17.629556] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:45.256 [2024-07-24 23:18:17.629572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:45.256 qpair failed and we were unable to recover it.
00:32:45.256 [2024-07-24 23:18:17.639499] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:45.256 [2024-07-24 23:18:17.639576] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:45.256 [2024-07-24 23:18:17.639594] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:45.256 [2024-07-24 23:18:17.639603] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:45.256 [2024-07-24 23:18:17.639612] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:45.256 [2024-07-24 23:18:17.639629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:45.256 qpair failed and we were unable to recover it.
00:32:45.256 [2024-07-24 23:18:17.649489] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:45.256 [2024-07-24 23:18:17.649570] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:45.256 [2024-07-24 23:18:17.649588] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:45.256 [2024-07-24 23:18:17.649598] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:45.256 [2024-07-24 23:18:17.649606] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:45.256 [2024-07-24 23:18:17.649623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:45.256 qpair failed and we were unable to recover it.
00:32:45.256 [2024-07-24 23:18:17.659543] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:45.256 [2024-07-24 23:18:17.659624] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:45.256 [2024-07-24 23:18:17.659643] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:45.256 [2024-07-24 23:18:17.659653] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:45.256 [2024-07-24 23:18:17.659662] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:45.256 [2024-07-24 23:18:17.659679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:45.256 qpair failed and we were unable to recover it.
00:32:45.256 [2024-07-24 23:18:17.669597] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:45.256 [2024-07-24 23:18:17.669676] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:45.256 [2024-07-24 23:18:17.669694] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:45.256 [2024-07-24 23:18:17.669704] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:45.256 [2024-07-24 23:18:17.669718] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:45.256 [2024-07-24 23:18:17.669738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:45.256 qpair failed and we were unable to recover it.
00:32:45.256 [2024-07-24 23:18:17.679576] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:45.256 [2024-07-24 23:18:17.679659] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:45.256 [2024-07-24 23:18:17.679677] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:45.256 [2024-07-24 23:18:17.679687] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:45.256 [2024-07-24 23:18:17.679696] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:45.256 [2024-07-24 23:18:17.679712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:45.256 qpair failed and we were unable to recover it.
00:32:45.515 [2024-07-24 23:18:17.689626] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:45.515 [2024-07-24 23:18:17.689709] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:45.516 [2024-07-24 23:18:17.689731] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:45.516 [2024-07-24 23:18:17.689741] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:45.516 [2024-07-24 23:18:17.689750] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:45.516 [2024-07-24 23:18:17.689767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:45.516 qpair failed and we were unable to recover it. 
00:32:45.516 [2024-07-24 23:18:17.699645] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:45.516 [2024-07-24 23:18:17.699761] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:45.516 [2024-07-24 23:18:17.699778] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:45.516 [2024-07-24 23:18:17.699788] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:45.516 [2024-07-24 23:18:17.699799] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:45.516 [2024-07-24 23:18:17.699816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:45.516 qpair failed and we were unable to recover it. 
00:32:45.516 [2024-07-24 23:18:17.709650] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:45.516 [2024-07-24 23:18:17.709737] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:45.516 [2024-07-24 23:18:17.709757] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:45.516 [2024-07-24 23:18:17.709767] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:45.516 [2024-07-24 23:18:17.709777] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:45.516 [2024-07-24 23:18:17.709794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:45.516 qpair failed and we were unable to recover it. 
00:32:45.516 [2024-07-24 23:18:17.719699] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:45.516 [2024-07-24 23:18:17.719776] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:45.516 [2024-07-24 23:18:17.719795] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:45.516 [2024-07-24 23:18:17.719805] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:45.516 [2024-07-24 23:18:17.719815] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:45.516 [2024-07-24 23:18:17.719832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:45.516 qpair failed and we were unable to recover it. 
00:32:45.516 [2024-07-24 23:18:17.729741] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:45.516 [2024-07-24 23:18:17.729824] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:45.516 [2024-07-24 23:18:17.729842] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:45.516 [2024-07-24 23:18:17.729852] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:45.516 [2024-07-24 23:18:17.729861] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:45.516 [2024-07-24 23:18:17.729878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:45.516 qpair failed and we were unable to recover it.
00:32:45.516 [2024-07-24 23:18:17.739750] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:45.516 [2024-07-24 23:18:17.739829] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:45.516 [2024-07-24 23:18:17.739848] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:45.516 [2024-07-24 23:18:17.739858] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:45.516 [2024-07-24 23:18:17.739867] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:45.516 [2024-07-24 23:18:17.739884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:45.516 qpair failed and we were unable to recover it.
00:32:45.516 [2024-07-24 23:18:17.749717] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:45.516 [2024-07-24 23:18:17.749800] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:45.516 [2024-07-24 23:18:17.749819] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:45.516 [2024-07-24 23:18:17.749829] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:45.516 [2024-07-24 23:18:17.749838] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:45.516 [2024-07-24 23:18:17.749855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:45.516 qpair failed and we were unable to recover it.
00:32:45.516 [2024-07-24 23:18:17.759817] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:45.516 [2024-07-24 23:18:17.759923] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:45.516 [2024-07-24 23:18:17.759943] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:45.516 [2024-07-24 23:18:17.759952] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:45.516 [2024-07-24 23:18:17.759962] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:45.516 [2024-07-24 23:18:17.759979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:45.516 qpair failed and we were unable to recover it.
00:32:45.516 [2024-07-24 23:18:17.769869] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:45.516 [2024-07-24 23:18:17.769944] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:45.516 [2024-07-24 23:18:17.769963] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:45.516 [2024-07-24 23:18:17.769973] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:45.516 [2024-07-24 23:18:17.769982] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:45.516 [2024-07-24 23:18:17.769999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:45.516 qpair failed and we were unable to recover it.
00:32:45.516 [2024-07-24 23:18:17.779911] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:45.516 [2024-07-24 23:18:17.780019] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:45.516 [2024-07-24 23:18:17.780037] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:45.516 [2024-07-24 23:18:17.780047] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:45.516 [2024-07-24 23:18:17.780055] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:45.516 [2024-07-24 23:18:17.780072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:45.516 qpair failed and we were unable to recover it.
00:32:45.516 [2024-07-24 23:18:17.789916] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:45.516 [2024-07-24 23:18:17.789993] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:45.516 [2024-07-24 23:18:17.790012] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:45.516 [2024-07-24 23:18:17.790022] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:45.516 [2024-07-24 23:18:17.790034] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:45.516 [2024-07-24 23:18:17.790051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:45.516 qpair failed and we were unable to recover it.
00:32:45.516 [2024-07-24 23:18:17.799953] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:45.516 [2024-07-24 23:18:17.800043] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:45.516 [2024-07-24 23:18:17.800062] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:45.516 [2024-07-24 23:18:17.800071] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:45.516 [2024-07-24 23:18:17.800081] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:45.516 [2024-07-24 23:18:17.800098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:45.516 qpair failed and we were unable to recover it.
00:32:45.516 [2024-07-24 23:18:17.809999] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:45.516 [2024-07-24 23:18:17.810073] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:45.516 [2024-07-24 23:18:17.810092] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:45.516 [2024-07-24 23:18:17.810101] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:45.516 [2024-07-24 23:18:17.810110] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:45.516 [2024-07-24 23:18:17.810127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:45.516 qpair failed and we were unable to recover it.
00:32:45.516 [2024-07-24 23:18:17.820009] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:45.516 [2024-07-24 23:18:17.820086] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:45.516 [2024-07-24 23:18:17.820105] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:45.516 [2024-07-24 23:18:17.820114] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:45.516 [2024-07-24 23:18:17.820124] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:45.517 [2024-07-24 23:18:17.820141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:45.517 qpair failed and we were unable to recover it.
00:32:45.517 [2024-07-24 23:18:17.830027] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:45.517 [2024-07-24 23:18:17.830106] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:45.517 [2024-07-24 23:18:17.830124] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:45.517 [2024-07-24 23:18:17.830134] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:45.517 [2024-07-24 23:18:17.830143] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:45.517 [2024-07-24 23:18:17.830160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:45.517 qpair failed and we were unable to recover it.
00:32:45.517 [2024-07-24 23:18:17.840188] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:45.517 [2024-07-24 23:18:17.840283] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:45.517 [2024-07-24 23:18:17.840301] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:45.517 [2024-07-24 23:18:17.840311] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:45.517 [2024-07-24 23:18:17.840320] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:45.517 [2024-07-24 23:18:17.840337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:45.517 qpair failed and we were unable to recover it.
00:32:45.517 [2024-07-24 23:18:17.850123] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:45.517 [2024-07-24 23:18:17.850200] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:45.517 [2024-07-24 23:18:17.850219] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:45.517 [2024-07-24 23:18:17.850229] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:45.517 [2024-07-24 23:18:17.850238] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:45.517 [2024-07-24 23:18:17.850255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:45.517 qpair failed and we were unable to recover it.
00:32:45.517 [2024-07-24 23:18:17.860162] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:45.517 [2024-07-24 23:18:17.860246] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:45.517 [2024-07-24 23:18:17.860264] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:45.517 [2024-07-24 23:18:17.860274] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:45.517 [2024-07-24 23:18:17.860282] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:45.517 [2024-07-24 23:18:17.860299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:45.517 qpair failed and we were unable to recover it.
00:32:45.517 [2024-07-24 23:18:17.870219] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:45.517 [2024-07-24 23:18:17.870294] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:45.517 [2024-07-24 23:18:17.870312] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:45.517 [2024-07-24 23:18:17.870322] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:45.517 [2024-07-24 23:18:17.870331] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:45.517 [2024-07-24 23:18:17.870348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:45.517 qpair failed and we were unable to recover it.
00:32:45.517 [2024-07-24 23:18:17.880121] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:45.517 [2024-07-24 23:18:17.880208] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:45.517 [2024-07-24 23:18:17.880227] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:45.517 [2024-07-24 23:18:17.880241] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:45.517 [2024-07-24 23:18:17.880249] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:45.517 [2024-07-24 23:18:17.880266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:45.517 qpair failed and we were unable to recover it.
00:32:45.517 [2024-07-24 23:18:17.890199] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:45.517 [2024-07-24 23:18:17.890292] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:45.517 [2024-07-24 23:18:17.890310] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:45.517 [2024-07-24 23:18:17.890320] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:45.517 [2024-07-24 23:18:17.890329] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:45.517 [2024-07-24 23:18:17.890347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:45.517 qpair failed and we were unable to recover it.
00:32:45.517 [2024-07-24 23:18:17.900297] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:45.517 [2024-07-24 23:18:17.900402] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:45.517 [2024-07-24 23:18:17.900420] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:45.517 [2024-07-24 23:18:17.900430] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:45.517 [2024-07-24 23:18:17.900439] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:45.517 [2024-07-24 23:18:17.900456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:45.517 qpair failed and we were unable to recover it.
00:32:45.517 [2024-07-24 23:18:17.910263] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:45.517 [2024-07-24 23:18:17.910346] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:45.517 [2024-07-24 23:18:17.910365] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:45.517 [2024-07-24 23:18:17.910374] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:45.517 [2024-07-24 23:18:17.910384] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:45.517 [2024-07-24 23:18:17.910401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:45.517 qpair failed and we were unable to recover it.
00:32:45.517 [2024-07-24 23:18:17.920281] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:45.517 [2024-07-24 23:18:17.920363] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:45.517 [2024-07-24 23:18:17.920382] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:45.517 [2024-07-24 23:18:17.920391] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:45.517 [2024-07-24 23:18:17.920400] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:45.517 [2024-07-24 23:18:17.920417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:45.517 qpair failed and we were unable to recover it.
00:32:45.517 [2024-07-24 23:18:17.930307] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:45.517 [2024-07-24 23:18:17.930388] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:45.517 [2024-07-24 23:18:17.930407] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:45.517 [2024-07-24 23:18:17.930417] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:45.517 [2024-07-24 23:18:17.930426] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:45.517 [2024-07-24 23:18:17.930442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:45.517 qpair failed and we were unable to recover it.
00:32:45.517 [2024-07-24 23:18:17.940359] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:45.517 [2024-07-24 23:18:17.940440] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:45.517 [2024-07-24 23:18:17.940459] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:45.517 [2024-07-24 23:18:17.940468] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:45.517 [2024-07-24 23:18:17.940478] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:45.517 [2024-07-24 23:18:17.940495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:45.517 qpair failed and we were unable to recover it.
00:32:45.777 [2024-07-24 23:18:17.950376] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:45.777 [2024-07-24 23:18:17.950464] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:45.777 [2024-07-24 23:18:17.950482] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:45.777 [2024-07-24 23:18:17.950492] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:45.777 [2024-07-24 23:18:17.950502] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:45.777 [2024-07-24 23:18:17.950518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:45.777 qpair failed and we were unable to recover it.
00:32:45.777 [2024-07-24 23:18:17.960445] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:45.777 [2024-07-24 23:18:17.960530] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:45.777 [2024-07-24 23:18:17.960548] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:45.777 [2024-07-24 23:18:17.960558] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:45.777 [2024-07-24 23:18:17.960567] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:45.777 [2024-07-24 23:18:17.960584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:45.777 qpair failed and we were unable to recover it.
00:32:45.777 [2024-07-24 23:18:17.970420] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:45.777 [2024-07-24 23:18:17.970507] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:45.777 [2024-07-24 23:18:17.970526] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:45.777 [2024-07-24 23:18:17.970540] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:45.777 [2024-07-24 23:18:17.970549] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:45.777 [2024-07-24 23:18:17.970566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:45.777 qpair failed and we were unable to recover it.
00:32:45.777 [2024-07-24 23:18:17.980398] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:45.777 [2024-07-24 23:18:17.980475] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:45.777 [2024-07-24 23:18:17.980494] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:45.777 [2024-07-24 23:18:17.980503] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:45.777 [2024-07-24 23:18:17.980513] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:45.777 [2024-07-24 23:18:17.980530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:45.777 qpair failed and we were unable to recover it.
00:32:45.777 [2024-07-24 23:18:17.990496] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:45.777 [2024-07-24 23:18:17.990577] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:45.777 [2024-07-24 23:18:17.990596] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:45.777 [2024-07-24 23:18:17.990606] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:45.777 [2024-07-24 23:18:17.990614] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:45.777 [2024-07-24 23:18:17.990632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:45.777 qpair failed and we were unable to recover it.
00:32:45.777 [2024-07-24 23:18:18.000524] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:45.777 [2024-07-24 23:18:18.000607] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:45.777 [2024-07-24 23:18:18.000625] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:45.777 [2024-07-24 23:18:18.000635] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:45.777 [2024-07-24 23:18:18.000645] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:45.777 [2024-07-24 23:18:18.000662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:45.777 qpair failed and we were unable to recover it.
00:32:45.777 [2024-07-24 23:18:18.010528] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:45.777 [2024-07-24 23:18:18.010611] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:45.777 [2024-07-24 23:18:18.010630] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:45.777 [2024-07-24 23:18:18.010640] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:45.777 [2024-07-24 23:18:18.010650] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:45.777 [2024-07-24 23:18:18.010667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:45.777 qpair failed and we were unable to recover it.
00:32:45.777 [2024-07-24 23:18:18.020574] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:45.777 [2024-07-24 23:18:18.020684] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:45.777 [2024-07-24 23:18:18.020703] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:45.777 [2024-07-24 23:18:18.020712] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:45.777 [2024-07-24 23:18:18.020725] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:45.777 [2024-07-24 23:18:18.020742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:45.777 qpair failed and we were unable to recover it.
00:32:45.777 [2024-07-24 23:18:18.030555] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:45.777 [2024-07-24 23:18:18.030642] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:45.777 [2024-07-24 23:18:18.030660] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:45.777 [2024-07-24 23:18:18.030670] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:45.777 [2024-07-24 23:18:18.030680] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:45.777 [2024-07-24 23:18:18.030697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:45.777 qpair failed and we were unable to recover it.
00:32:45.777 [2024-07-24 23:18:18.040616] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:45.777 [2024-07-24 23:18:18.040705] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:45.777 [2024-07-24 23:18:18.040728] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:45.777 [2024-07-24 23:18:18.040738] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:45.777 [2024-07-24 23:18:18.040747] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:45.777 [2024-07-24 23:18:18.040764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:45.777 qpair failed and we were unable to recover it.
00:32:45.777 [2024-07-24 23:18:18.050677] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:45.777 [2024-07-24 23:18:18.050762] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:45.777 [2024-07-24 23:18:18.050781] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:45.777 [2024-07-24 23:18:18.050791] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:45.777 [2024-07-24 23:18:18.050800] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:45.777 [2024-07-24 23:18:18.050818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:45.777 qpair failed and we were unable to recover it. 
00:32:45.777 [2024-07-24 23:18:18.060754] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:45.777 [2024-07-24 23:18:18.060839] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:45.777 [2024-07-24 23:18:18.060858] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:45.777 [2024-07-24 23:18:18.060872] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:45.777 [2024-07-24 23:18:18.060881] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:45.777 [2024-07-24 23:18:18.060899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:45.777 qpair failed and we were unable to recover it. 
00:32:45.777 [2024-07-24 23:18:18.070689] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:45.777 [2024-07-24 23:18:18.070777] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:45.777 [2024-07-24 23:18:18.070796] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:45.777 [2024-07-24 23:18:18.070806] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:45.777 [2024-07-24 23:18:18.070814] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:45.777 [2024-07-24 23:18:18.070831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:45.777 qpair failed and we were unable to recover it. 
00:32:45.777 [2024-07-24 23:18:18.080768] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:45.778 [2024-07-24 23:18:18.080853] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:45.778 [2024-07-24 23:18:18.080872] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:45.778 [2024-07-24 23:18:18.080882] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:45.778 [2024-07-24 23:18:18.080891] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:45.778 [2024-07-24 23:18:18.080908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:45.778 qpair failed and we were unable to recover it. 
00:32:45.778 [2024-07-24 23:18:18.090812] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:45.778 [2024-07-24 23:18:18.090896] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:45.778 [2024-07-24 23:18:18.090915] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:45.778 [2024-07-24 23:18:18.090924] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:45.778 [2024-07-24 23:18:18.090934] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:45.778 [2024-07-24 23:18:18.090951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:45.778 qpair failed and we were unable to recover it. 
00:32:45.778 [2024-07-24 23:18:18.100829] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:45.778 [2024-07-24 23:18:18.100912] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:45.778 [2024-07-24 23:18:18.100931] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:45.778 [2024-07-24 23:18:18.100941] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:45.778 [2024-07-24 23:18:18.100951] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:45.778 [2024-07-24 23:18:18.100968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:45.778 qpair failed and we were unable to recover it. 
00:32:45.778 [2024-07-24 23:18:18.110883] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:45.778 [2024-07-24 23:18:18.110966] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:45.778 [2024-07-24 23:18:18.110984] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:45.778 [2024-07-24 23:18:18.110994] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:45.778 [2024-07-24 23:18:18.111004] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:45.778 [2024-07-24 23:18:18.111021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:45.778 qpair failed and we were unable to recover it. 
00:32:45.778 [2024-07-24 23:18:18.120866] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:45.778 [2024-07-24 23:18:18.120948] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:45.778 [2024-07-24 23:18:18.120967] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:45.778 [2024-07-24 23:18:18.120977] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:45.778 [2024-07-24 23:18:18.120986] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:45.778 [2024-07-24 23:18:18.121003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:45.778 qpair failed and we were unable to recover it. 
00:32:45.778 [2024-07-24 23:18:18.130838] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:45.778 [2024-07-24 23:18:18.130914] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:45.778 [2024-07-24 23:18:18.130933] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:45.778 [2024-07-24 23:18:18.130943] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:45.778 [2024-07-24 23:18:18.130952] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:45.778 [2024-07-24 23:18:18.130969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:45.778 qpair failed and we were unable to recover it. 
00:32:45.778 [2024-07-24 23:18:18.140878] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:45.778 [2024-07-24 23:18:18.140960] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:45.778 [2024-07-24 23:18:18.140979] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:45.778 [2024-07-24 23:18:18.140989] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:45.778 [2024-07-24 23:18:18.140999] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:45.778 [2024-07-24 23:18:18.141016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:45.778 qpair failed and we were unable to recover it. 
00:32:45.778 [2024-07-24 23:18:18.150906] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:45.778 [2024-07-24 23:18:18.150981] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:45.778 [2024-07-24 23:18:18.151000] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:45.778 [2024-07-24 23:18:18.151014] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:45.778 [2024-07-24 23:18:18.151023] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:45.778 [2024-07-24 23:18:18.151040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:45.778 qpair failed and we were unable to recover it. 
00:32:45.778 [2024-07-24 23:18:18.160964] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:45.778 [2024-07-24 23:18:18.161042] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:45.778 [2024-07-24 23:18:18.161060] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:45.778 [2024-07-24 23:18:18.161070] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:45.778 [2024-07-24 23:18:18.161079] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:45.778 [2024-07-24 23:18:18.161096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:45.778 qpair failed and we were unable to recover it. 
00:32:45.778 [2024-07-24 23:18:18.171075] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:45.778 [2024-07-24 23:18:18.171157] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:45.778 [2024-07-24 23:18:18.171176] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:45.778 [2024-07-24 23:18:18.171186] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:45.778 [2024-07-24 23:18:18.171195] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:45.778 [2024-07-24 23:18:18.171213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:45.778 qpair failed and we were unable to recover it. 
00:32:45.778 [2024-07-24 23:18:18.181054] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:45.778 [2024-07-24 23:18:18.181132] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:45.778 [2024-07-24 23:18:18.181151] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:45.778 [2024-07-24 23:18:18.181160] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:45.778 [2024-07-24 23:18:18.181170] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:45.778 [2024-07-24 23:18:18.181187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:45.778 qpair failed and we were unable to recover it. 
00:32:45.778 [2024-07-24 23:18:18.191026] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:45.778 [2024-07-24 23:18:18.191101] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:45.778 [2024-07-24 23:18:18.191120] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:45.778 [2024-07-24 23:18:18.191129] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:45.778 [2024-07-24 23:18:18.191139] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:45.778 [2024-07-24 23:18:18.191156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:45.778 qpair failed and we were unable to recover it. 
00:32:45.778 [2024-07-24 23:18:18.201080] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:45.778 [2024-07-24 23:18:18.201163] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:45.778 [2024-07-24 23:18:18.201182] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:45.778 [2024-07-24 23:18:18.201191] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:45.778 [2024-07-24 23:18:18.201201] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:45.778 [2024-07-24 23:18:18.201217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:45.778 qpair failed and we were unable to recover it. 
00:32:46.040 [2024-07-24 23:18:18.211112] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.040 [2024-07-24 23:18:18.211195] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.040 [2024-07-24 23:18:18.211214] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.040 [2024-07-24 23:18:18.211223] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.040 [2024-07-24 23:18:18.211233] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:46.040 [2024-07-24 23:18:18.211250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:46.040 qpair failed and we were unable to recover it. 
00:32:46.040 [2024-07-24 23:18:18.221169] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.040 [2024-07-24 23:18:18.221249] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.040 [2024-07-24 23:18:18.221267] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.040 [2024-07-24 23:18:18.221277] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.040 [2024-07-24 23:18:18.221286] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:46.040 [2024-07-24 23:18:18.221304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:46.040 qpair failed and we were unable to recover it. 
00:32:46.040 [2024-07-24 23:18:18.231183] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.040 [2024-07-24 23:18:18.231267] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.040 [2024-07-24 23:18:18.231286] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.040 [2024-07-24 23:18:18.231295] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.040 [2024-07-24 23:18:18.231305] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:46.040 [2024-07-24 23:18:18.231322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:46.040 qpair failed and we were unable to recover it. 
00:32:46.040 [2024-07-24 23:18:18.241182] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.040 [2024-07-24 23:18:18.241267] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.040 [2024-07-24 23:18:18.241290] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.040 [2024-07-24 23:18:18.241299] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.040 [2024-07-24 23:18:18.241309] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:46.040 [2024-07-24 23:18:18.241326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:46.040 qpair failed and we were unable to recover it. 
00:32:46.040 [2024-07-24 23:18:18.251310] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.040 [2024-07-24 23:18:18.251394] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.040 [2024-07-24 23:18:18.251413] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.040 [2024-07-24 23:18:18.251422] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.040 [2024-07-24 23:18:18.251432] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:46.040 [2024-07-24 23:18:18.251449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:46.040 qpair failed and we were unable to recover it. 
00:32:46.040 [2024-07-24 23:18:18.261187] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.041 [2024-07-24 23:18:18.261268] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.041 [2024-07-24 23:18:18.261286] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.041 [2024-07-24 23:18:18.261296] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.041 [2024-07-24 23:18:18.261305] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:46.041 [2024-07-24 23:18:18.261322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:46.041 qpair failed and we were unable to recover it. 
00:32:46.041 [2024-07-24 23:18:18.271236] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.041 [2024-07-24 23:18:18.271313] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.041 [2024-07-24 23:18:18.271331] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.041 [2024-07-24 23:18:18.271341] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.041 [2024-07-24 23:18:18.271349] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:46.041 [2024-07-24 23:18:18.271366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:46.041 qpair failed and we were unable to recover it. 
00:32:46.041 [2024-07-24 23:18:18.281304] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.041 [2024-07-24 23:18:18.281384] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.041 [2024-07-24 23:18:18.281403] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.041 [2024-07-24 23:18:18.281412] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.041 [2024-07-24 23:18:18.281422] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:46.041 [2024-07-24 23:18:18.281439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:46.041 qpair failed and we were unable to recover it. 
00:32:46.041 [2024-07-24 23:18:18.291328] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.041 [2024-07-24 23:18:18.291406] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.041 [2024-07-24 23:18:18.291425] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.041 [2024-07-24 23:18:18.291434] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.041 [2024-07-24 23:18:18.291444] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:46.041 [2024-07-24 23:18:18.291461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:46.041 qpair failed and we were unable to recover it. 
00:32:46.041 [2024-07-24 23:18:18.301361] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.041 [2024-07-24 23:18:18.301484] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.041 [2024-07-24 23:18:18.301503] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.041 [2024-07-24 23:18:18.301512] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.041 [2024-07-24 23:18:18.301522] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:46.041 [2024-07-24 23:18:18.301540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:46.041 qpair failed and we were unable to recover it. 
00:32:46.041 [2024-07-24 23:18:18.311338] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.041 [2024-07-24 23:18:18.311422] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.041 [2024-07-24 23:18:18.311441] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.041 [2024-07-24 23:18:18.311450] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.041 [2024-07-24 23:18:18.311459] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:46.041 [2024-07-24 23:18:18.311476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:46.041 qpair failed and we were unable to recover it. 
00:32:46.041 [2024-07-24 23:18:18.321449] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.041 [2024-07-24 23:18:18.321527] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.041 [2024-07-24 23:18:18.321546] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.041 [2024-07-24 23:18:18.321556] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.041 [2024-07-24 23:18:18.321565] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:46.041 [2024-07-24 23:18:18.321581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:46.041 qpair failed and we were unable to recover it. 
00:32:46.041 [2024-07-24 23:18:18.331486] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.041 [2024-07-24 23:18:18.331573] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.041 [2024-07-24 23:18:18.331595] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.041 [2024-07-24 23:18:18.331605] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.041 [2024-07-24 23:18:18.331614] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:46.041 [2024-07-24 23:18:18.331632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:46.041 qpair failed and we were unable to recover it. 
00:32:46.041 [2024-07-24 23:18:18.341421] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.041 [2024-07-24 23:18:18.341504] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.041 [2024-07-24 23:18:18.341523] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.041 [2024-07-24 23:18:18.341533] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.041 [2024-07-24 23:18:18.341543] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:46.041 [2024-07-24 23:18:18.341559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:46.041 qpair failed and we were unable to recover it. 
00:32:46.041 [2024-07-24 23:18:18.351538] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:46.041 [2024-07-24 23:18:18.351619] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:46.041 [2024-07-24 23:18:18.351638] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:46.041 [2024-07-24 23:18:18.351648] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:46.041 [2024-07-24 23:18:18.351658] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:46.041 [2024-07-24 23:18:18.351675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:46.041 qpair failed and we were unable to recover it.
00:32:46.041 [2024-07-24 23:18:18.361547] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:46.041 [2024-07-24 23:18:18.361629] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:46.041 [2024-07-24 23:18:18.361649] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:46.041 [2024-07-24 23:18:18.361658] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:46.041 [2024-07-24 23:18:18.361668] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:46.041 [2024-07-24 23:18:18.361684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:46.041 qpair failed and we were unable to recover it.
00:32:46.041 [2024-07-24 23:18:18.371558] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:46.041 [2024-07-24 23:18:18.371634] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:46.041 [2024-07-24 23:18:18.371652] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:46.041 [2024-07-24 23:18:18.371662] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:46.041 [2024-07-24 23:18:18.371672] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:46.041 [2024-07-24 23:18:18.371689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:46.041 qpair failed and we were unable to recover it.
00:32:46.041 [2024-07-24 23:18:18.381550] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:46.041 [2024-07-24 23:18:18.381633] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:46.041 [2024-07-24 23:18:18.381652] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:46.041 [2024-07-24 23:18:18.381661] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:46.041 [2024-07-24 23:18:18.381671] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:46.041 [2024-07-24 23:18:18.381688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:46.041 qpair failed and we were unable to recover it.
00:32:46.041 [2024-07-24 23:18:18.391602] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:46.042 [2024-07-24 23:18:18.391784] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:46.042 [2024-07-24 23:18:18.391803] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:46.042 [2024-07-24 23:18:18.391813] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:46.042 [2024-07-24 23:18:18.391822] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:46.042 [2024-07-24 23:18:18.391840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:46.042 qpair failed and we were unable to recover it.
00:32:46.042 [2024-07-24 23:18:18.401673] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:46.042 [2024-07-24 23:18:18.401754] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:46.042 [2024-07-24 23:18:18.401774] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:46.042 [2024-07-24 23:18:18.401784] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:46.042 [2024-07-24 23:18:18.401793] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:46.042 [2024-07-24 23:18:18.401809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:46.042 qpair failed and we were unable to recover it.
00:32:46.042 [2024-07-24 23:18:18.411669] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:46.042 [2024-07-24 23:18:18.411862] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:46.042 [2024-07-24 23:18:18.411881] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:46.042 [2024-07-24 23:18:18.411890] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:46.042 [2024-07-24 23:18:18.411899] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:46.042 [2024-07-24 23:18:18.411917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:46.042 qpair failed and we were unable to recover it.
00:32:46.042 [2024-07-24 23:18:18.421758] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:46.042 [2024-07-24 23:18:18.421839] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:46.042 [2024-07-24 23:18:18.421867] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:46.042 [2024-07-24 23:18:18.421877] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:46.042 [2024-07-24 23:18:18.421886] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:46.042 [2024-07-24 23:18:18.421903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:46.042 qpair failed and we were unable to recover it.
00:32:46.042 [2024-07-24 23:18:18.431728] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:46.042 [2024-07-24 23:18:18.431858] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:46.042 [2024-07-24 23:18:18.431877] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:46.042 [2024-07-24 23:18:18.431887] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:46.042 [2024-07-24 23:18:18.431896] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:46.042 [2024-07-24 23:18:18.431913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:46.042 qpair failed and we were unable to recover it.
00:32:46.042 [2024-07-24 23:18:18.441783] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:46.042 [2024-07-24 23:18:18.441868] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:46.042 [2024-07-24 23:18:18.441887] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:46.042 [2024-07-24 23:18:18.441897] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:46.042 [2024-07-24 23:18:18.441907] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:46.042 [2024-07-24 23:18:18.441924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:46.042 qpair failed and we were unable to recover it.
00:32:46.042 [2024-07-24 23:18:18.451761] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:46.042 [2024-07-24 23:18:18.451843] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:46.042 [2024-07-24 23:18:18.451863] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:46.042 [2024-07-24 23:18:18.451872] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:46.042 [2024-07-24 23:18:18.451882] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:46.042 [2024-07-24 23:18:18.451900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:46.042 qpair failed and we were unable to recover it.
00:32:46.042 [2024-07-24 23:18:18.461816] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:46.042 [2024-07-24 23:18:18.461895] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:46.042 [2024-07-24 23:18:18.461914] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:46.042 [2024-07-24 23:18:18.461924] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:46.042 [2024-07-24 23:18:18.461934] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:46.042 [2024-07-24 23:18:18.461954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:46.042 qpair failed and we were unable to recover it.
00:32:46.302 [2024-07-24 23:18:18.471808] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:46.302 [2024-07-24 23:18:18.471890] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:46.302 [2024-07-24 23:18:18.471909] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:46.302 [2024-07-24 23:18:18.471918] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:46.302 [2024-07-24 23:18:18.471928] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:46.302 [2024-07-24 23:18:18.471944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:46.302 qpair failed and we were unable to recover it.
00:32:46.302 [2024-07-24 23:18:18.482008] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:46.302 [2024-07-24 23:18:18.482091] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:46.302 [2024-07-24 23:18:18.482110] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:46.302 [2024-07-24 23:18:18.482119] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:46.302 [2024-07-24 23:18:18.482129] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:46.302 [2024-07-24 23:18:18.482146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:46.302 qpair failed and we were unable to recover it.
00:32:46.302 [2024-07-24 23:18:18.491961] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:46.302 [2024-07-24 23:18:18.492046] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:46.302 [2024-07-24 23:18:18.492064] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:46.302 [2024-07-24 23:18:18.492074] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:46.302 [2024-07-24 23:18:18.492083] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:46.302 [2024-07-24 23:18:18.492100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:46.302 qpair failed and we were unable to recover it.
00:32:46.302 [2024-07-24 23:18:18.502072] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:46.302 [2024-07-24 23:18:18.502154] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:46.302 [2024-07-24 23:18:18.502173] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:46.302 [2024-07-24 23:18:18.502183] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:46.302 [2024-07-24 23:18:18.502191] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:46.302 [2024-07-24 23:18:18.502208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:46.302 qpair failed and we were unable to recover it.
00:32:46.302 [2024-07-24 23:18:18.511943] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:46.302 [2024-07-24 23:18:18.512103] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:46.302 [2024-07-24 23:18:18.512124] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:46.302 [2024-07-24 23:18:18.512134] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:46.302 [2024-07-24 23:18:18.512143] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:46.302 [2024-07-24 23:18:18.512160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:46.302 qpair failed and we were unable to recover it.
00:32:46.302 [2024-07-24 23:18:18.521972] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:46.302 [2024-07-24 23:18:18.522061] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:46.302 [2024-07-24 23:18:18.522080] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:46.302 [2024-07-24 23:18:18.522090] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:46.302 [2024-07-24 23:18:18.522099] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:46.302 [2024-07-24 23:18:18.522116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:46.302 qpair failed and we were unable to recover it.
00:32:46.302 [2024-07-24 23:18:18.532036] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:46.302 [2024-07-24 23:18:18.532116] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:46.302 [2024-07-24 23:18:18.532135] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:46.302 [2024-07-24 23:18:18.532144] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:46.302 [2024-07-24 23:18:18.532153] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:46.302 [2024-07-24 23:18:18.532171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:46.302 qpair failed and we were unable to recover it.
00:32:46.302 [2024-07-24 23:18:18.542050] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:46.302 [2024-07-24 23:18:18.542135] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:46.302 [2024-07-24 23:18:18.542154] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:46.302 [2024-07-24 23:18:18.542163] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:46.302 [2024-07-24 23:18:18.542173] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:46.302 [2024-07-24 23:18:18.542190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:46.302 qpair failed and we were unable to recover it.
00:32:46.302 [2024-07-24 23:18:18.552060] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:46.302 [2024-07-24 23:18:18.552142] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:46.302 [2024-07-24 23:18:18.552161] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:46.302 [2024-07-24 23:18:18.552171] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:46.302 [2024-07-24 23:18:18.552179] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:46.302 [2024-07-24 23:18:18.552200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:46.302 qpair failed and we were unable to recover it.
00:32:46.302 [2024-07-24 23:18:18.562092] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:46.302 [2024-07-24 23:18:18.562188] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:46.302 [2024-07-24 23:18:18.562206] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:46.303 [2024-07-24 23:18:18.562215] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:46.303 [2024-07-24 23:18:18.562224] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:46.303 [2024-07-24 23:18:18.562241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:46.303 qpair failed and we were unable to recover it.
00:32:46.303 [2024-07-24 23:18:18.572194] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:46.303 [2024-07-24 23:18:18.572268] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:46.303 [2024-07-24 23:18:18.572286] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:46.303 [2024-07-24 23:18:18.572296] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:46.303 [2024-07-24 23:18:18.572305] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:46.303 [2024-07-24 23:18:18.572322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:46.303 qpair failed and we were unable to recover it.
00:32:46.303 [2024-07-24 23:18:18.582224] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:46.303 [2024-07-24 23:18:18.582302] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:46.303 [2024-07-24 23:18:18.582320] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:46.303 [2024-07-24 23:18:18.582330] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:46.303 [2024-07-24 23:18:18.582339] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:46.303 [2024-07-24 23:18:18.582356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:46.303 qpair failed and we were unable to recover it.
00:32:46.303 [2024-07-24 23:18:18.592260] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:46.303 [2024-07-24 23:18:18.592342] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:46.303 [2024-07-24 23:18:18.592361] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:46.303 [2024-07-24 23:18:18.592370] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:46.303 [2024-07-24 23:18:18.592379] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:46.303 [2024-07-24 23:18:18.592396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:46.303 qpair failed and we were unable to recover it.
00:32:46.303 [2024-07-24 23:18:18.602248] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:46.303 [2024-07-24 23:18:18.602327] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:46.303 [2024-07-24 23:18:18.602349] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:46.303 [2024-07-24 23:18:18.602359] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:46.303 [2024-07-24 23:18:18.602368] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:46.303 [2024-07-24 23:18:18.602384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:46.303 qpair failed and we were unable to recover it.
00:32:46.303 [2024-07-24 23:18:18.612296] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:46.303 [2024-07-24 23:18:18.612407] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:46.303 [2024-07-24 23:18:18.612426] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:46.303 [2024-07-24 23:18:18.612435] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:46.303 [2024-07-24 23:18:18.612445] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:46.303 [2024-07-24 23:18:18.612462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:46.303 qpair failed and we were unable to recover it.
00:32:46.303 [2024-07-24 23:18:18.622336] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:46.303 [2024-07-24 23:18:18.622416] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:46.303 [2024-07-24 23:18:18.622434] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:46.303 [2024-07-24 23:18:18.622444] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:46.303 [2024-07-24 23:18:18.622454] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:46.303 [2024-07-24 23:18:18.622470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:46.303 qpair failed and we were unable to recover it.
00:32:46.303 [2024-07-24 23:18:18.632361] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:46.303 [2024-07-24 23:18:18.632439] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:46.303 [2024-07-24 23:18:18.632458] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:46.303 [2024-07-24 23:18:18.632468] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:46.303 [2024-07-24 23:18:18.632477] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:46.303 [2024-07-24 23:18:18.632494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:46.303 qpair failed and we were unable to recover it.
00:32:46.303 [2024-07-24 23:18:18.642291] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:46.303 [2024-07-24 23:18:18.642373] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:46.303 [2024-07-24 23:18:18.642392] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:46.303 [2024-07-24 23:18:18.642401] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:46.303 [2024-07-24 23:18:18.642410] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:46.303 [2024-07-24 23:18:18.642430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:46.303 qpair failed and we were unable to recover it.
00:32:46.303 [2024-07-24 23:18:18.652416] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:46.303 [2024-07-24 23:18:18.652496] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:46.303 [2024-07-24 23:18:18.652514] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:46.303 [2024-07-24 23:18:18.652524] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:46.303 [2024-07-24 23:18:18.652532] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:46.303 [2024-07-24 23:18:18.652550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:46.303 qpair failed and we were unable to recover it.
00:32:46.303 [2024-07-24 23:18:18.662445] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:46.303 [2024-07-24 23:18:18.662521] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:46.303 [2024-07-24 23:18:18.662539] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:46.303 [2024-07-24 23:18:18.662549] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:46.303 [2024-07-24 23:18:18.662559] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:46.303 [2024-07-24 23:18:18.662576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:46.303 qpair failed and we were unable to recover it.
00:32:46.303 [2024-07-24 23:18:18.672457] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:46.303 [2024-07-24 23:18:18.672536] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:46.303 [2024-07-24 23:18:18.672555] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:46.303 [2024-07-24 23:18:18.672564] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:46.303 [2024-07-24 23:18:18.672574] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:46.303 [2024-07-24 23:18:18.672591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:46.303 qpair failed and we were unable to recover it.
00:32:46.303 [2024-07-24 23:18:18.682468] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:46.303 [2024-07-24 23:18:18.682548] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:46.303 [2024-07-24 23:18:18.682566] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:46.303 [2024-07-24 23:18:18.682576] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:46.303 [2024-07-24 23:18:18.682585] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:46.303 [2024-07-24 23:18:18.682602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:46.303 qpair failed and we were unable to recover it.
00:32:46.303 [2024-07-24 23:18:18.692519] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:46.303 [2024-07-24 23:18:18.692600] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:46.303 [2024-07-24 23:18:18.692621] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:46.303 [2024-07-24 23:18:18.692631] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:46.303 [2024-07-24 23:18:18.692640] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:46.303 [2024-07-24 23:18:18.692657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:46.304 qpair failed and we were unable to recover it.
00:32:46.304 [2024-07-24 23:18:18.702494] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:46.304 [2024-07-24 23:18:18.702576] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:46.304 [2024-07-24 23:18:18.702593] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:46.304 [2024-07-24 23:18:18.702603] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:46.304 [2024-07-24 23:18:18.702611] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:46.304 [2024-07-24 23:18:18.702628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:46.304 qpair failed and we were unable to recover it.
00:32:46.304 [2024-07-24 23:18:18.712575] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.304 [2024-07-24 23:18:18.712656] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.304 [2024-07-24 23:18:18.712677] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.304 [2024-07-24 23:18:18.712687] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.304 [2024-07-24 23:18:18.712697] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:46.304 [2024-07-24 23:18:18.712719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:46.304 qpair failed and we were unable to recover it. 
00:32:46.304 [2024-07-24 23:18:18.722600] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.304 [2024-07-24 23:18:18.722683] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.304 [2024-07-24 23:18:18.722702] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.304 [2024-07-24 23:18:18.722712] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.304 [2024-07-24 23:18:18.722726] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:46.304 [2024-07-24 23:18:18.722743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:46.304 qpair failed and we were unable to recover it. 
00:32:46.563 [2024-07-24 23:18:18.732632] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.563 [2024-07-24 23:18:18.732712] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.563 [2024-07-24 23:18:18.732734] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.563 [2024-07-24 23:18:18.732743] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.563 [2024-07-24 23:18:18.732755] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:46.563 [2024-07-24 23:18:18.732773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:46.563 qpair failed and we were unable to recover it. 
00:32:46.563 [2024-07-24 23:18:18.742654] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.563 [2024-07-24 23:18:18.742736] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.563 [2024-07-24 23:18:18.742755] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.563 [2024-07-24 23:18:18.742764] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.563 [2024-07-24 23:18:18.742774] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:46.563 [2024-07-24 23:18:18.742792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:46.563 qpair failed and we were unable to recover it. 
00:32:46.563 [2024-07-24 23:18:18.752699] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.563 [2024-07-24 23:18:18.752788] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.563 [2024-07-24 23:18:18.752806] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.563 [2024-07-24 23:18:18.752816] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.563 [2024-07-24 23:18:18.752824] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:46.563 [2024-07-24 23:18:18.752841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:46.563 qpair failed and we were unable to recover it. 
00:32:46.563 [2024-07-24 23:18:18.762645] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.563 [2024-07-24 23:18:18.762733] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.563 [2024-07-24 23:18:18.762752] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.563 [2024-07-24 23:18:18.762761] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.563 [2024-07-24 23:18:18.762770] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:46.563 [2024-07-24 23:18:18.762787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:46.563 qpair failed and we were unable to recover it. 
00:32:46.563 [2024-07-24 23:18:18.772753] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.563 [2024-07-24 23:18:18.772837] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.563 [2024-07-24 23:18:18.772856] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.563 [2024-07-24 23:18:18.772865] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.563 [2024-07-24 23:18:18.772874] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:46.563 [2024-07-24 23:18:18.772891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:46.563 qpair failed and we were unable to recover it. 
00:32:46.563 [2024-07-24 23:18:18.782786] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.563 [2024-07-24 23:18:18.782862] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.563 [2024-07-24 23:18:18.782883] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.563 [2024-07-24 23:18:18.782893] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.563 [2024-07-24 23:18:18.782901] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:46.563 [2024-07-24 23:18:18.782918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:46.563 qpair failed and we were unable to recover it. 
00:32:46.563 [2024-07-24 23:18:18.792818] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.563 [2024-07-24 23:18:18.792904] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.563 [2024-07-24 23:18:18.792923] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.563 [2024-07-24 23:18:18.792932] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.563 [2024-07-24 23:18:18.792941] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:46.563 [2024-07-24 23:18:18.792957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:46.563 qpair failed and we were unable to recover it. 
00:32:46.563 [2024-07-24 23:18:18.802812] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.563 [2024-07-24 23:18:18.802903] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.563 [2024-07-24 23:18:18.802922] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.563 [2024-07-24 23:18:18.802932] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.563 [2024-07-24 23:18:18.802940] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:46.563 [2024-07-24 23:18:18.802957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:46.563 qpair failed and we were unable to recover it. 
00:32:46.563 [2024-07-24 23:18:18.812859] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.563 [2024-07-24 23:18:18.812940] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.563 [2024-07-24 23:18:18.812959] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.563 [2024-07-24 23:18:18.812969] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.563 [2024-07-24 23:18:18.812977] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:46.563 [2024-07-24 23:18:18.812995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:46.563 qpair failed and we were unable to recover it. 
00:32:46.563 [2024-07-24 23:18:18.822903] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.563 [2024-07-24 23:18:18.823012] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.563 [2024-07-24 23:18:18.823031] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.563 [2024-07-24 23:18:18.823040] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.563 [2024-07-24 23:18:18.823052] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:46.563 [2024-07-24 23:18:18.823069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:46.563 qpair failed and we were unable to recover it. 
00:32:46.563 [2024-07-24 23:18:18.832926] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.563 [2024-07-24 23:18:18.833008] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.564 [2024-07-24 23:18:18.833027] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.564 [2024-07-24 23:18:18.833037] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.564 [2024-07-24 23:18:18.833045] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:46.564 [2024-07-24 23:18:18.833061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:46.564 qpair failed and we were unable to recover it. 
00:32:46.564 [2024-07-24 23:18:18.842954] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.564 [2024-07-24 23:18:18.843033] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.564 [2024-07-24 23:18:18.843051] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.564 [2024-07-24 23:18:18.843061] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.564 [2024-07-24 23:18:18.843069] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:46.564 [2024-07-24 23:18:18.843086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:46.564 qpair failed and we were unable to recover it. 
00:32:46.564 [2024-07-24 23:18:18.852979] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.564 [2024-07-24 23:18:18.853060] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.564 [2024-07-24 23:18:18.853079] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.564 [2024-07-24 23:18:18.853089] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.564 [2024-07-24 23:18:18.853097] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:46.564 [2024-07-24 23:18:18.853114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:46.564 qpair failed and we were unable to recover it. 
00:32:46.564 [2024-07-24 23:18:18.863012] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.564 [2024-07-24 23:18:18.863094] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.564 [2024-07-24 23:18:18.863113] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.564 [2024-07-24 23:18:18.863122] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.564 [2024-07-24 23:18:18.863131] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:46.564 [2024-07-24 23:18:18.863147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:46.564 qpair failed and we were unable to recover it. 
00:32:46.564 [2024-07-24 23:18:18.873026] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.564 [2024-07-24 23:18:18.873112] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.564 [2024-07-24 23:18:18.873131] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.564 [2024-07-24 23:18:18.873140] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.564 [2024-07-24 23:18:18.873149] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:46.564 [2024-07-24 23:18:18.873166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:46.564 qpair failed and we were unable to recover it. 
00:32:46.564 [2024-07-24 23:18:18.883073] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.564 [2024-07-24 23:18:18.883154] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.564 [2024-07-24 23:18:18.883173] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.564 [2024-07-24 23:18:18.883182] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.564 [2024-07-24 23:18:18.883191] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:46.564 [2024-07-24 23:18:18.883207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:46.564 qpair failed and we were unable to recover it. 
00:32:46.564 [2024-07-24 23:18:18.893091] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.564 [2024-07-24 23:18:18.893172] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.564 [2024-07-24 23:18:18.893190] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.564 [2024-07-24 23:18:18.893199] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.564 [2024-07-24 23:18:18.893208] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:46.564 [2024-07-24 23:18:18.893224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:46.564 qpair failed and we were unable to recover it. 
00:32:46.564 [2024-07-24 23:18:18.903203] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.564 [2024-07-24 23:18:18.903288] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.564 [2024-07-24 23:18:18.903307] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.564 [2024-07-24 23:18:18.903317] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.564 [2024-07-24 23:18:18.903325] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:46.564 [2024-07-24 23:18:18.903342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:46.564 qpair failed and we were unable to recover it. 
00:32:46.564 [2024-07-24 23:18:18.913076] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.564 [2024-07-24 23:18:18.913165] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.564 [2024-07-24 23:18:18.913183] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.564 [2024-07-24 23:18:18.913193] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.564 [2024-07-24 23:18:18.913205] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:46.564 [2024-07-24 23:18:18.913221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:46.564 qpair failed and we were unable to recover it. 
00:32:46.564 [2024-07-24 23:18:18.923133] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.564 [2024-07-24 23:18:18.923227] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.564 [2024-07-24 23:18:18.923246] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.564 [2024-07-24 23:18:18.923255] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.564 [2024-07-24 23:18:18.923264] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:46.564 [2024-07-24 23:18:18.923281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:46.564 qpair failed and we were unable to recover it. 
00:32:46.564 [2024-07-24 23:18:18.933239] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.564 [2024-07-24 23:18:18.933351] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.564 [2024-07-24 23:18:18.933369] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.564 [2024-07-24 23:18:18.933379] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.564 [2024-07-24 23:18:18.933387] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:46.564 [2024-07-24 23:18:18.933404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:46.564 qpair failed and we were unable to recover it. 
00:32:46.564 [2024-07-24 23:18:18.943244] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.564 [2024-07-24 23:18:18.943324] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.564 [2024-07-24 23:18:18.943343] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.564 [2024-07-24 23:18:18.943353] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.564 [2024-07-24 23:18:18.943362] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:46.564 [2024-07-24 23:18:18.943378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:46.564 qpair failed and we were unable to recover it. 
00:32:46.564 [2024-07-24 23:18:18.953268] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.564 [2024-07-24 23:18:18.953353] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.564 [2024-07-24 23:18:18.953371] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.564 [2024-07-24 23:18:18.953381] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.564 [2024-07-24 23:18:18.953389] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:46.564 [2024-07-24 23:18:18.953406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:46.564 qpair failed and we were unable to recover it. 
00:32:46.564 [2024-07-24 23:18:18.963300] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.564 [2024-07-24 23:18:18.963375] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.564 [2024-07-24 23:18:18.963395] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.564 [2024-07-24 23:18:18.963404] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.565 [2024-07-24 23:18:18.963412] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:46.565 [2024-07-24 23:18:18.963429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:46.565 qpair failed and we were unable to recover it. 
00:32:46.565 [2024-07-24 23:18:18.973330] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.565 [2024-07-24 23:18:18.973412] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.565 [2024-07-24 23:18:18.973430] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.565 [2024-07-24 23:18:18.973440] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.565 [2024-07-24 23:18:18.973448] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:46.565 [2024-07-24 23:18:18.973464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:46.565 qpair failed and we were unable to recover it. 
00:32:46.565 [2024-07-24 23:18:18.983394] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.565 [2024-07-24 23:18:18.983502] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.565 [2024-07-24 23:18:18.983520] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.565 [2024-07-24 23:18:18.983530] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.565 [2024-07-24 23:18:18.983538] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:46.565 [2024-07-24 23:18:18.983554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:46.565 qpair failed and we were unable to recover it. 
00:32:46.824 [2024-07-24 23:18:18.993388] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:46.824 [2024-07-24 23:18:18.993468] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:46.824 [2024-07-24 23:18:18.993486] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:46.824 [2024-07-24 23:18:18.993496] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:46.824 [2024-07-24 23:18:18.993504] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:46.824 [2024-07-24 23:18:18.993521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:46.824 qpair failed and we were unable to recover it.
00:32:46.824 [2024-07-24 23:18:19.003417] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:46.824 [2024-07-24 23:18:19.003496] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:46.824 [2024-07-24 23:18:19.003515] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:46.824 [2024-07-24 23:18:19.003524] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:46.824 [2024-07-24 23:18:19.003535] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:46.824 [2024-07-24 23:18:19.003552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:46.824 qpair failed and we were unable to recover it.
00:32:46.824 [2024-07-24 23:18:19.013370] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:46.824 [2024-07-24 23:18:19.013450] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:46.824 [2024-07-24 23:18:19.013468] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:46.824 [2024-07-24 23:18:19.013478] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:46.824 [2024-07-24 23:18:19.013486] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:46.824 [2024-07-24 23:18:19.013503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:46.824 qpair failed and we were unable to recover it.
00:32:46.824 [2024-07-24 23:18:19.023469] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:46.824 [2024-07-24 23:18:19.023549] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:46.824 [2024-07-24 23:18:19.023568] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:46.824 [2024-07-24 23:18:19.023577] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:46.824 [2024-07-24 23:18:19.023586] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:46.824 [2024-07-24 23:18:19.023602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:46.824 qpair failed and we were unable to recover it.
00:32:46.824 [2024-07-24 23:18:19.033494] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:46.824 [2024-07-24 23:18:19.033571] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:46.824 [2024-07-24 23:18:19.033590] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:46.824 [2024-07-24 23:18:19.033600] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:46.824 [2024-07-24 23:18:19.033608] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:46.824 [2024-07-24 23:18:19.033625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:46.824 qpair failed and we were unable to recover it.
00:32:46.824 [2024-07-24 23:18:19.043513] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:46.824 [2024-07-24 23:18:19.043598] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:46.825 [2024-07-24 23:18:19.043616] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:46.825 [2024-07-24 23:18:19.043626] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:46.825 [2024-07-24 23:18:19.043634] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:46.825 [2024-07-24 23:18:19.043651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:46.825 qpair failed and we were unable to recover it.
00:32:46.825 [2024-07-24 23:18:19.053563] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:46.825 [2024-07-24 23:18:19.053643] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:46.825 [2024-07-24 23:18:19.053662] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:46.825 [2024-07-24 23:18:19.053672] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:46.825 [2024-07-24 23:18:19.053680] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:46.825 [2024-07-24 23:18:19.053696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:46.825 qpair failed and we were unable to recover it.
00:32:46.825 [2024-07-24 23:18:19.063585] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:46.825 [2024-07-24 23:18:19.063666] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:46.825 [2024-07-24 23:18:19.063684] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:46.825 [2024-07-24 23:18:19.063694] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:46.825 [2024-07-24 23:18:19.063702] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:46.825 [2024-07-24 23:18:19.063721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:46.825 qpair failed and we were unable to recover it.
00:32:46.825 [2024-07-24 23:18:19.073593] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:46.825 [2024-07-24 23:18:19.073676] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:46.825 [2024-07-24 23:18:19.073694] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:46.825 [2024-07-24 23:18:19.073703] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:46.825 [2024-07-24 23:18:19.073712] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:46.825 [2024-07-24 23:18:19.073732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:46.825 qpair failed and we were unable to recover it.
00:32:46.825 [2024-07-24 23:18:19.083668] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:46.825 [2024-07-24 23:18:19.083744] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:46.825 [2024-07-24 23:18:19.083763] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:46.825 [2024-07-24 23:18:19.083772] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:46.825 [2024-07-24 23:18:19.083781] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:46.825 [2024-07-24 23:18:19.083797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:46.825 qpair failed and we were unable to recover it.
00:32:46.825 [2024-07-24 23:18:19.093691] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:46.825 [2024-07-24 23:18:19.093773] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:46.825 [2024-07-24 23:18:19.093791] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:46.825 [2024-07-24 23:18:19.093807] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:46.825 [2024-07-24 23:18:19.093816] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:46.825 [2024-07-24 23:18:19.093832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:46.825 qpair failed and we were unable to recover it.
00:32:46.825 [2024-07-24 23:18:19.103688] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:46.825 [2024-07-24 23:18:19.103780] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:46.825 [2024-07-24 23:18:19.103799] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:46.825 [2024-07-24 23:18:19.103809] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:46.825 [2024-07-24 23:18:19.103817] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:46.825 [2024-07-24 23:18:19.103833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:46.825 qpair failed and we were unable to recover it.
00:32:46.825 [2024-07-24 23:18:19.113697] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:46.825 [2024-07-24 23:18:19.113785] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:46.825 [2024-07-24 23:18:19.113804] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:46.825 [2024-07-24 23:18:19.113813] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:46.825 [2024-07-24 23:18:19.113822] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:46.825 [2024-07-24 23:18:19.113838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:46.825 qpair failed and we were unable to recover it.
00:32:46.825 [2024-07-24 23:18:19.123757] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:46.825 [2024-07-24 23:18:19.123841] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:46.825 [2024-07-24 23:18:19.123860] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:46.825 [2024-07-24 23:18:19.123870] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:46.825 [2024-07-24 23:18:19.123878] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:46.825 [2024-07-24 23:18:19.123895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:46.825 qpair failed and we were unable to recover it.
00:32:46.825 [2024-07-24 23:18:19.133784] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:46.825 [2024-07-24 23:18:19.133862] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:46.825 [2024-07-24 23:18:19.133880] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:46.825 [2024-07-24 23:18:19.133890] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:46.825 [2024-07-24 23:18:19.133898] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:46.825 [2024-07-24 23:18:19.133915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:46.825 qpair failed and we were unable to recover it.
00:32:46.825 [2024-07-24 23:18:19.143827] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:46.825 [2024-07-24 23:18:19.143903] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:46.825 [2024-07-24 23:18:19.143921] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:46.825 [2024-07-24 23:18:19.143931] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:46.825 [2024-07-24 23:18:19.143939] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:46.825 [2024-07-24 23:18:19.143955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:46.825 qpair failed and we were unable to recover it.
00:32:46.825 [2024-07-24 23:18:19.153849] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:46.825 [2024-07-24 23:18:19.153930] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:46.825 [2024-07-24 23:18:19.153949] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:46.825 [2024-07-24 23:18:19.153958] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:46.825 [2024-07-24 23:18:19.153967] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:46.825 [2024-07-24 23:18:19.153983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:46.825 qpair failed and we were unable to recover it.
00:32:46.825 [2024-07-24 23:18:19.163864] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:46.825 [2024-07-24 23:18:19.163946] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:46.825 [2024-07-24 23:18:19.163965] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:46.825 [2024-07-24 23:18:19.163975] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:46.825 [2024-07-24 23:18:19.163983] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:46.825 [2024-07-24 23:18:19.163999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:46.825 qpair failed and we were unable to recover it.
00:32:46.825 [2024-07-24 23:18:19.173897] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:46.825 [2024-07-24 23:18:19.173978] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:46.825 [2024-07-24 23:18:19.173996] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:46.825 [2024-07-24 23:18:19.174006] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:46.826 [2024-07-24 23:18:19.174015] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:46.826 [2024-07-24 23:18:19.174031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:46.826 qpair failed and we were unable to recover it.
00:32:46.826 [2024-07-24 23:18:19.183932] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:46.826 [2024-07-24 23:18:19.184010] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:46.826 [2024-07-24 23:18:19.184028] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:46.826 [2024-07-24 23:18:19.184041] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:46.826 [2024-07-24 23:18:19.184049] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:46.826 [2024-07-24 23:18:19.184066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:46.826 qpair failed and we were unable to recover it.
00:32:46.826 [2024-07-24 23:18:19.193952] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:46.826 [2024-07-24 23:18:19.194034] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:46.826 [2024-07-24 23:18:19.194052] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:46.826 [2024-07-24 23:18:19.194062] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:46.826 [2024-07-24 23:18:19.194071] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:46.826 [2024-07-24 23:18:19.194087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:46.826 qpair failed and we were unable to recover it.
00:32:46.826 [2024-07-24 23:18:19.203976] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:46.826 [2024-07-24 23:18:19.204052] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:46.826 [2024-07-24 23:18:19.204070] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:46.826 [2024-07-24 23:18:19.204080] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:46.826 [2024-07-24 23:18:19.204088] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:46.826 [2024-07-24 23:18:19.204104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:46.826 qpair failed and we were unable to recover it.
00:32:46.826 [2024-07-24 23:18:19.214009] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:46.826 [2024-07-24 23:18:19.214093] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:46.826 [2024-07-24 23:18:19.214111] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:46.826 [2024-07-24 23:18:19.214120] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:46.826 [2024-07-24 23:18:19.214129] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:46.826 [2024-07-24 23:18:19.214145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:46.826 qpair failed and we were unable to recover it.
00:32:46.826 [2024-07-24 23:18:19.224043] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:46.826 [2024-07-24 23:18:19.224126] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:46.826 [2024-07-24 23:18:19.224145] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:46.826 [2024-07-24 23:18:19.224155] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:46.826 [2024-07-24 23:18:19.224163] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:46.826 [2024-07-24 23:18:19.224179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:46.826 qpair failed and we were unable to recover it.
00:32:46.826 [2024-07-24 23:18:19.234061] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:46.826 [2024-07-24 23:18:19.234151] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:46.826 [2024-07-24 23:18:19.234170] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:46.826 [2024-07-24 23:18:19.234180] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:46.826 [2024-07-24 23:18:19.234189] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:46.826 [2024-07-24 23:18:19.234206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:46.826 qpair failed and we were unable to recover it.
00:32:46.826 [2024-07-24 23:18:19.244098] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:46.826 [2024-07-24 23:18:19.244180] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:46.826 [2024-07-24 23:18:19.244199] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:46.826 [2024-07-24 23:18:19.244209] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:46.826 [2024-07-24 23:18:19.244218] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:46.826 [2024-07-24 23:18:19.244234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:46.826 qpair failed and we were unable to recover it.
00:32:47.086 [2024-07-24 23:18:19.254103] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:47.086 [2024-07-24 23:18:19.254242] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:47.086 [2024-07-24 23:18:19.254261] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:47.086 [2024-07-24 23:18:19.254271] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:47.086 [2024-07-24 23:18:19.254280] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:47.086 [2024-07-24 23:18:19.254297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:47.086 qpair failed and we were unable to recover it.
00:32:47.086 [2024-07-24 23:18:19.264166] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:47.086 [2024-07-24 23:18:19.264249] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:47.086 [2024-07-24 23:18:19.264269] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:47.086 [2024-07-24 23:18:19.264279] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:47.086 [2024-07-24 23:18:19.264288] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:47.086 [2024-07-24 23:18:19.264305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:47.086 qpair failed and we were unable to recover it.
00:32:47.086 [2024-07-24 23:18:19.274133] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:47.086 [2024-07-24 23:18:19.274214] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:47.086 [2024-07-24 23:18:19.274233] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:47.086 [2024-07-24 23:18:19.274245] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:47.086 [2024-07-24 23:18:19.274254] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:47.086 [2024-07-24 23:18:19.274270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:47.086 qpair failed and we were unable to recover it.
00:32:47.086 [2024-07-24 23:18:19.284201] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:47.086 [2024-07-24 23:18:19.284279] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:47.086 [2024-07-24 23:18:19.284297] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:47.086 [2024-07-24 23:18:19.284307] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:47.086 [2024-07-24 23:18:19.284315] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:47.086 [2024-07-24 23:18:19.284332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:47.086 qpair failed and we were unable to recover it.
00:32:47.086 [2024-07-24 23:18:19.294229] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:47.086 [2024-07-24 23:18:19.294303] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:47.086 [2024-07-24 23:18:19.294322] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:47.086 [2024-07-24 23:18:19.294332] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:47.086 [2024-07-24 23:18:19.294340] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:47.086 [2024-07-24 23:18:19.294356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:47.086 qpair failed and we were unable to recover it.
00:32:47.086 [2024-07-24 23:18:19.304261] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:47.086 [2024-07-24 23:18:19.304343] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:47.086 [2024-07-24 23:18:19.304362] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:47.086 [2024-07-24 23:18:19.304371] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:47.086 [2024-07-24 23:18:19.304380] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:47.086 [2024-07-24 23:18:19.304397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:47.086 qpair failed and we were unable to recover it.
00:32:47.086 [2024-07-24 23:18:19.314279] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:47.086 [2024-07-24 23:18:19.314355] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:47.086 [2024-07-24 23:18:19.314374] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:47.086 [2024-07-24 23:18:19.314383] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:47.086 [2024-07-24 23:18:19.314392] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:47.086 [2024-07-24 23:18:19.314408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:47.086 qpair failed and we were unable to recover it.
00:32:47.086 [2024-07-24 23:18:19.324343] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:47.086 [2024-07-24 23:18:19.324427] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:47.086 [2024-07-24 23:18:19.324445] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:47.086 [2024-07-24 23:18:19.324455] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:47.086 [2024-07-24 23:18:19.324463] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:47.087 [2024-07-24 23:18:19.324479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:47.087 qpair failed and we were unable to recover it.
00:32:47.087 [2024-07-24 23:18:19.334337] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:47.087 [2024-07-24 23:18:19.334419] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:47.087 [2024-07-24 23:18:19.334438] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:47.087 [2024-07-24 23:18:19.334447] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:47.087 [2024-07-24 23:18:19.334456] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:47.087 [2024-07-24 23:18:19.334472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:47.087 qpair failed and we were unable to recover it.
00:32:47.087 [2024-07-24 23:18:19.344392] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:47.087 [2024-07-24 23:18:19.344473] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:47.087 [2024-07-24 23:18:19.344492] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:47.087 [2024-07-24 23:18:19.344502] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:47.087 [2024-07-24 23:18:19.344511] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90
00:32:47.087 [2024-07-24 23:18:19.344528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:47.087 qpair failed and we were unable to recover it.
00:32:47.087 [2024-07-24 23:18:19.354417] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.087 [2024-07-24 23:18:19.354497] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.087 [2024-07-24 23:18:19.354516] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.087 [2024-07-24 23:18:19.354526] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.087 [2024-07-24 23:18:19.354534] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:47.087 [2024-07-24 23:18:19.354551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:47.087 qpair failed and we were unable to recover it. 
00:32:47.087 [2024-07-24 23:18:19.364396] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.087 [2024-07-24 23:18:19.364482] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.087 [2024-07-24 23:18:19.364501] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.087 [2024-07-24 23:18:19.364513] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.087 [2024-07-24 23:18:19.364522] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:47.087 [2024-07-24 23:18:19.364539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:47.087 qpair failed and we were unable to recover it. 
00:32:47.087 [2024-07-24 23:18:19.374480] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.087 [2024-07-24 23:18:19.374554] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.087 [2024-07-24 23:18:19.374572] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.087 [2024-07-24 23:18:19.374582] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.087 [2024-07-24 23:18:19.374590] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:47.087 [2024-07-24 23:18:19.374606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:47.087 qpair failed and we were unable to recover it. 
00:32:47.087 [2024-07-24 23:18:19.384516] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.087 [2024-07-24 23:18:19.384595] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.087 [2024-07-24 23:18:19.384614] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.087 [2024-07-24 23:18:19.384624] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.087 [2024-07-24 23:18:19.384632] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:47.087 [2024-07-24 23:18:19.384649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:47.087 qpair failed and we were unable to recover it. 
00:32:47.087 [2024-07-24 23:18:19.394535] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.087 [2024-07-24 23:18:19.394620] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.087 [2024-07-24 23:18:19.394639] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.087 [2024-07-24 23:18:19.394648] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.087 [2024-07-24 23:18:19.394657] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:47.087 [2024-07-24 23:18:19.394674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:47.087 qpair failed and we were unable to recover it. 
00:32:47.087 [2024-07-24 23:18:19.404544] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.087 [2024-07-24 23:18:19.404617] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.087 [2024-07-24 23:18:19.404636] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.087 [2024-07-24 23:18:19.404645] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.087 [2024-07-24 23:18:19.404654] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:47.087 [2024-07-24 23:18:19.404670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:47.087 qpair failed and we were unable to recover it. 
00:32:47.087 [2024-07-24 23:18:19.414579] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.087 [2024-07-24 23:18:19.414656] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.087 [2024-07-24 23:18:19.414675] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.087 [2024-07-24 23:18:19.414684] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.087 [2024-07-24 23:18:19.414693] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:47.087 [2024-07-24 23:18:19.414709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:47.087 qpair failed and we were unable to recover it. 
00:32:47.087 [2024-07-24 23:18:19.424628] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.087 [2024-07-24 23:18:19.424707] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.087 [2024-07-24 23:18:19.424730] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.087 [2024-07-24 23:18:19.424739] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.087 [2024-07-24 23:18:19.424748] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:47.087 [2024-07-24 23:18:19.424764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:47.087 qpair failed and we were unable to recover it. 
00:32:47.087 [2024-07-24 23:18:19.434662] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.087 [2024-07-24 23:18:19.434744] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.087 [2024-07-24 23:18:19.434763] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.087 [2024-07-24 23:18:19.434772] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.087 [2024-07-24 23:18:19.434781] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:47.087 [2024-07-24 23:18:19.434798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:47.087 qpair failed and we were unable to recover it. 
00:32:47.087 [2024-07-24 23:18:19.444631] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.087 [2024-07-24 23:18:19.444718] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.087 [2024-07-24 23:18:19.444737] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.087 [2024-07-24 23:18:19.444747] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.087 [2024-07-24 23:18:19.444755] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:47.087 [2024-07-24 23:18:19.444772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:47.087 qpair failed and we were unable to recover it. 
00:32:47.087 [2024-07-24 23:18:19.454697] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.087 [2024-07-24 23:18:19.454784] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.087 [2024-07-24 23:18:19.454806] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.087 [2024-07-24 23:18:19.454818] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.087 [2024-07-24 23:18:19.454826] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:47.087 [2024-07-24 23:18:19.454843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:47.087 qpair failed and we were unable to recover it. 
00:32:47.087 [2024-07-24 23:18:19.464729] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.088 [2024-07-24 23:18:19.464825] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.088 [2024-07-24 23:18:19.464844] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.088 [2024-07-24 23:18:19.464853] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.088 [2024-07-24 23:18:19.464862] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:47.088 [2024-07-24 23:18:19.464878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:47.088 qpair failed and we were unable to recover it. 
00:32:47.088 [2024-07-24 23:18:19.474766] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.088 [2024-07-24 23:18:19.474847] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.088 [2024-07-24 23:18:19.474865] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.088 [2024-07-24 23:18:19.474875] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.088 [2024-07-24 23:18:19.474884] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:47.088 [2024-07-24 23:18:19.474901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:47.088 qpair failed and we were unable to recover it. 
00:32:47.088 [2024-07-24 23:18:19.484797] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.088 [2024-07-24 23:18:19.484880] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.088 [2024-07-24 23:18:19.484898] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.088 [2024-07-24 23:18:19.484908] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.088 [2024-07-24 23:18:19.484916] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:47.088 [2024-07-24 23:18:19.484933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:47.088 qpair failed and we were unable to recover it. 
00:32:47.088 [2024-07-24 23:18:19.494809] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.088 [2024-07-24 23:18:19.494931] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.088 [2024-07-24 23:18:19.494949] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.088 [2024-07-24 23:18:19.494959] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.088 [2024-07-24 23:18:19.494967] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:47.088 [2024-07-24 23:18:19.494984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:47.088 qpair failed and we were unable to recover it. 
00:32:47.088 [2024-07-24 23:18:19.504831] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.088 [2024-07-24 23:18:19.504909] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.088 [2024-07-24 23:18:19.504926] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.088 [2024-07-24 23:18:19.504935] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.088 [2024-07-24 23:18:19.504943] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:47.088 [2024-07-24 23:18:19.504960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:47.088 qpair failed and we were unable to recover it. 
00:32:47.347 [2024-07-24 23:18:19.514871] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.348 [2024-07-24 23:18:19.514948] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.348 [2024-07-24 23:18:19.514965] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.348 [2024-07-24 23:18:19.514975] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.348 [2024-07-24 23:18:19.514983] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:47.348 [2024-07-24 23:18:19.515000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:47.348 qpair failed and we were unable to recover it. 
00:32:47.348 [2024-07-24 23:18:19.524892] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.348 [2024-07-24 23:18:19.524974] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.348 [2024-07-24 23:18:19.524992] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.348 [2024-07-24 23:18:19.525002] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.348 [2024-07-24 23:18:19.525010] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:47.348 [2024-07-24 23:18:19.525026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:47.348 qpair failed and we were unable to recover it. 
00:32:47.348 [2024-07-24 23:18:19.534936] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.348 [2024-07-24 23:18:19.535017] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.348 [2024-07-24 23:18:19.535035] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.348 [2024-07-24 23:18:19.535045] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.348 [2024-07-24 23:18:19.535053] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:47.348 [2024-07-24 23:18:19.535070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:47.348 qpair failed and we were unable to recover it. 
00:32:47.348 [2024-07-24 23:18:19.544955] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.348 [2024-07-24 23:18:19.545044] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.348 [2024-07-24 23:18:19.545065] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.348 [2024-07-24 23:18:19.545074] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.348 [2024-07-24 23:18:19.545083] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:47.348 [2024-07-24 23:18:19.545099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:47.348 qpair failed and we were unable to recover it. 
00:32:47.348 [2024-07-24 23:18:19.554967] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.348 [2024-07-24 23:18:19.555046] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.348 [2024-07-24 23:18:19.555065] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.348 [2024-07-24 23:18:19.555075] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.348 [2024-07-24 23:18:19.555083] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:47.348 [2024-07-24 23:18:19.555099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:47.348 qpair failed and we were unable to recover it. 
00:32:47.348 [2024-07-24 23:18:19.565067] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.348 [2024-07-24 23:18:19.565173] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.348 [2024-07-24 23:18:19.565192] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.348 [2024-07-24 23:18:19.565202] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.348 [2024-07-24 23:18:19.565211] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:47.348 [2024-07-24 23:18:19.565227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:47.348 qpair failed and we were unable to recover it. 
00:32:47.348 [2024-07-24 23:18:19.575067] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.348 [2024-07-24 23:18:19.575197] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.348 [2024-07-24 23:18:19.575216] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.348 [2024-07-24 23:18:19.575225] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.348 [2024-07-24 23:18:19.575233] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:47.348 [2024-07-24 23:18:19.575250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:47.348 qpair failed and we were unable to recover it. 
00:32:47.348 [2024-07-24 23:18:19.585066] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.348 [2024-07-24 23:18:19.585226] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.348 [2024-07-24 23:18:19.585245] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.348 [2024-07-24 23:18:19.585254] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.348 [2024-07-24 23:18:19.585263] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:47.348 [2024-07-24 23:18:19.585280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:47.348 qpair failed and we were unable to recover it. 
00:32:47.348 [2024-07-24 23:18:19.595096] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.348 [2024-07-24 23:18:19.595203] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.348 [2024-07-24 23:18:19.595221] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.348 [2024-07-24 23:18:19.595231] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.348 [2024-07-24 23:18:19.595239] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:47.348 [2024-07-24 23:18:19.595256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:47.348 qpair failed and we were unable to recover it. 
00:32:47.348 [2024-07-24 23:18:19.605135] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.348 [2024-07-24 23:18:19.605219] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.348 [2024-07-24 23:18:19.605237] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.348 [2024-07-24 23:18:19.605247] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.348 [2024-07-24 23:18:19.605255] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:47.348 [2024-07-24 23:18:19.605272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:47.348 qpair failed and we were unable to recover it. 
00:32:47.348 [2024-07-24 23:18:19.615092] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.348 [2024-07-24 23:18:19.615169] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.348 [2024-07-24 23:18:19.615188] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.348 [2024-07-24 23:18:19.615198] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.348 [2024-07-24 23:18:19.615206] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:47.348 [2024-07-24 23:18:19.615223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:47.348 qpair failed and we were unable to recover it. 
00:32:47.348 [2024-07-24 23:18:19.625153] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.348 [2024-07-24 23:18:19.625232] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.348 [2024-07-24 23:18:19.625250] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.348 [2024-07-24 23:18:19.625260] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.348 [2024-07-24 23:18:19.625269] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:47.348 [2024-07-24 23:18:19.625286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:47.348 qpair failed and we were unable to recover it. 
00:32:47.348 [2024-07-24 23:18:19.635231] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.348 [2024-07-24 23:18:19.635312] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.348 [2024-07-24 23:18:19.635334] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.348 [2024-07-24 23:18:19.635343] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.348 [2024-07-24 23:18:19.635352] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:47.348 [2024-07-24 23:18:19.635369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:47.348 qpair failed and we were unable to recover it. 
00:32:47.348 [2024-07-24 23:18:19.645234] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.349 [2024-07-24 23:18:19.645313] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.349 [2024-07-24 23:18:19.645331] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.349 [2024-07-24 23:18:19.645341] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.349 [2024-07-24 23:18:19.645350] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:47.349 [2024-07-24 23:18:19.645366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:47.349 qpair failed and we were unable to recover it. 
00:32:47.349 [2024-07-24 23:18:19.655274] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.349 [2024-07-24 23:18:19.655352] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.349 [2024-07-24 23:18:19.655371] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.349 [2024-07-24 23:18:19.655381] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.349 [2024-07-24 23:18:19.655389] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:47.349 [2024-07-24 23:18:19.655406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:47.349 qpair failed and we were unable to recover it. 
00:32:47.349 [2024-07-24 23:18:19.665254] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.349 [2024-07-24 23:18:19.665379] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.349 [2024-07-24 23:18:19.665397] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.349 [2024-07-24 23:18:19.665407] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.349 [2024-07-24 23:18:19.665415] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:47.349 [2024-07-24 23:18:19.665431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:47.349 qpair failed and we were unable to recover it. 
00:32:47.349 [2024-07-24 23:18:19.675320] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.349 [2024-07-24 23:18:19.675402] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.349 [2024-07-24 23:18:19.675421] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.349 [2024-07-24 23:18:19.675431] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.349 [2024-07-24 23:18:19.675439] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:47.349 [2024-07-24 23:18:19.675459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:47.349 qpair failed and we were unable to recover it. 
00:32:47.349 [2024-07-24 23:18:19.685330] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.349 [2024-07-24 23:18:19.685408] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.349 [2024-07-24 23:18:19.685426] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.349 [2024-07-24 23:18:19.685437] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.349 [2024-07-24 23:18:19.685445] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:47.349 [2024-07-24 23:18:19.685461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:47.349 qpair failed and we were unable to recover it. 
00:32:47.349 [2024-07-24 23:18:19.695385] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.349 [2024-07-24 23:18:19.695463] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.349 [2024-07-24 23:18:19.695482] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.349 [2024-07-24 23:18:19.695492] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.349 [2024-07-24 23:18:19.695500] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:47.349 [2024-07-24 23:18:19.695516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:47.349 qpair failed and we were unable to recover it. 
00:32:47.349 [2024-07-24 23:18:19.705410] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.349 [2024-07-24 23:18:19.705490] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.349 [2024-07-24 23:18:19.705508] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.349 [2024-07-24 23:18:19.705517] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.349 [2024-07-24 23:18:19.705526] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:47.349 [2024-07-24 23:18:19.705541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:47.349 qpair failed and we were unable to recover it. 
00:32:47.349 [2024-07-24 23:18:19.715398] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.349 [2024-07-24 23:18:19.715481] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.349 [2024-07-24 23:18:19.715501] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.349 [2024-07-24 23:18:19.715511] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.349 [2024-07-24 23:18:19.715520] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:47.349 [2024-07-24 23:18:19.715537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:47.349 qpair failed and we were unable to recover it. 
00:32:47.349 [2024-07-24 23:18:19.725464] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.349 [2024-07-24 23:18:19.725593] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.349 [2024-07-24 23:18:19.725616] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.349 [2024-07-24 23:18:19.725625] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.349 [2024-07-24 23:18:19.725634] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:47.349 [2024-07-24 23:18:19.725651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:47.349 qpair failed and we were unable to recover it. 
00:32:47.349 [2024-07-24 23:18:19.735488] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.349 [2024-07-24 23:18:19.735564] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.349 [2024-07-24 23:18:19.735583] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.349 [2024-07-24 23:18:19.735593] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.349 [2024-07-24 23:18:19.735602] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:47.349 [2024-07-24 23:18:19.735619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:47.349 qpair failed and we were unable to recover it. 
00:32:47.349 [2024-07-24 23:18:19.745528] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.349 [2024-07-24 23:18:19.745608] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.349 [2024-07-24 23:18:19.745625] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.349 [2024-07-24 23:18:19.745634] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.349 [2024-07-24 23:18:19.745643] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:47.349 [2024-07-24 23:18:19.745660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:47.349 qpair failed and we were unable to recover it. 
00:32:47.349 [2024-07-24 23:18:19.755557] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.349 [2024-07-24 23:18:19.755635] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.349 [2024-07-24 23:18:19.755654] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.349 [2024-07-24 23:18:19.755664] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.349 [2024-07-24 23:18:19.755672] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:47.349 [2024-07-24 23:18:19.755689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:47.349 qpair failed and we were unable to recover it. 
00:32:47.349 [2024-07-24 23:18:19.765583] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.349 [2024-07-24 23:18:19.765660] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.349 [2024-07-24 23:18:19.765679] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.349 [2024-07-24 23:18:19.765688] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.349 [2024-07-24 23:18:19.765697] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:47.349 [2024-07-24 23:18:19.765724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:47.349 qpair failed and we were unable to recover it. 
00:32:47.349 [2024-07-24 23:18:19.775623] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.349 [2024-07-24 23:18:19.775707] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.349 [2024-07-24 23:18:19.775731] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.350 [2024-07-24 23:18:19.775741] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.350 [2024-07-24 23:18:19.775749] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:47.350 [2024-07-24 23:18:19.775766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:47.350 qpair failed and we were unable to recover it. 
00:32:47.610 [2024-07-24 23:18:19.785622] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.610 [2024-07-24 23:18:19.785703] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.610 [2024-07-24 23:18:19.785727] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.610 [2024-07-24 23:18:19.785736] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.610 [2024-07-24 23:18:19.785745] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:47.610 [2024-07-24 23:18:19.785762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:47.610 qpair failed and we were unable to recover it. 
00:32:47.610 [2024-07-24 23:18:19.795648] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.610 [2024-07-24 23:18:19.795733] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.610 [2024-07-24 23:18:19.795752] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.610 [2024-07-24 23:18:19.795761] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.610 [2024-07-24 23:18:19.795770] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:47.610 [2024-07-24 23:18:19.795786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:47.610 qpair failed and we were unable to recover it. 
00:32:47.610 [2024-07-24 23:18:19.805699] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.610 [2024-07-24 23:18:19.805790] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.610 [2024-07-24 23:18:19.805809] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.610 [2024-07-24 23:18:19.805819] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.610 [2024-07-24 23:18:19.805828] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:47.610 [2024-07-24 23:18:19.805844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:47.610 qpair failed and we were unable to recover it. 
00:32:47.610 [2024-07-24 23:18:19.815713] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.610 [2024-07-24 23:18:19.815800] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.610 [2024-07-24 23:18:19.815822] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.610 [2024-07-24 23:18:19.815832] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.610 [2024-07-24 23:18:19.815841] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:47.610 [2024-07-24 23:18:19.815857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:47.610 qpair failed and we were unable to recover it. 
00:32:47.610 [2024-07-24 23:18:19.825772] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.610 [2024-07-24 23:18:19.825856] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.610 [2024-07-24 23:18:19.825875] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.610 [2024-07-24 23:18:19.825884] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.610 [2024-07-24 23:18:19.825893] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:47.610 [2024-07-24 23:18:19.825910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:47.610 qpair failed and we were unable to recover it. 
00:32:47.610 [2024-07-24 23:18:19.835738] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.610 [2024-07-24 23:18:19.835814] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.610 [2024-07-24 23:18:19.835832] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.610 [2024-07-24 23:18:19.835842] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.610 [2024-07-24 23:18:19.835850] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:47.610 [2024-07-24 23:18:19.835866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:47.610 qpair failed and we were unable to recover it. 
00:32:47.610 [2024-07-24 23:18:19.845813] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.610 [2024-07-24 23:18:19.845935] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.610 [2024-07-24 23:18:19.845955] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.610 [2024-07-24 23:18:19.845964] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.610 [2024-07-24 23:18:19.845973] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:47.610 [2024-07-24 23:18:19.845990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:47.610 qpair failed and we were unable to recover it. 
00:32:47.610 [2024-07-24 23:18:19.855845] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.610 [2024-07-24 23:18:19.855927] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.610 [2024-07-24 23:18:19.855945] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.610 [2024-07-24 23:18:19.855955] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.611 [2024-07-24 23:18:19.855964] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:47.611 [2024-07-24 23:18:19.855984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:47.611 qpair failed and we were unable to recover it. 
00:32:47.611 [2024-07-24 23:18:19.865883] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.611 [2024-07-24 23:18:19.866041] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.611 [2024-07-24 23:18:19.866061] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.611 [2024-07-24 23:18:19.866071] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.611 [2024-07-24 23:18:19.866080] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:47.611 [2024-07-24 23:18:19.866097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:47.611 qpair failed and we were unable to recover it. 
00:32:47.611 [2024-07-24 23:18:19.875915] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.611 [2024-07-24 23:18:19.875995] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.611 [2024-07-24 23:18:19.876014] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.611 [2024-07-24 23:18:19.876024] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.611 [2024-07-24 23:18:19.876032] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:47.611 [2024-07-24 23:18:19.876049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:47.611 qpair failed and we were unable to recover it. 
00:32:47.611 [2024-07-24 23:18:19.885868] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.611 [2024-07-24 23:18:19.885960] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.611 [2024-07-24 23:18:19.885978] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.611 [2024-07-24 23:18:19.885988] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.611 [2024-07-24 23:18:19.885996] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:47.611 [2024-07-24 23:18:19.886012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:47.611 qpair failed and we were unable to recover it. 
00:32:47.611 [2024-07-24 23:18:19.895942] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.611 [2024-07-24 23:18:19.896020] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.611 [2024-07-24 23:18:19.896040] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.611 [2024-07-24 23:18:19.896049] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.611 [2024-07-24 23:18:19.896058] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:47.611 [2024-07-24 23:18:19.896075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:47.611 qpair failed and we were unable to recover it. 
00:32:47.611 [2024-07-24 23:18:19.905970] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.611 [2024-07-24 23:18:19.906056] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.611 [2024-07-24 23:18:19.906078] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.611 [2024-07-24 23:18:19.906088] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.611 [2024-07-24 23:18:19.906096] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:47.611 [2024-07-24 23:18:19.906114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:47.611 qpair failed and we were unable to recover it. 
00:32:47.611 [2024-07-24 23:18:19.915931] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.611 [2024-07-24 23:18:19.916010] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.611 [2024-07-24 23:18:19.916028] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.611 [2024-07-24 23:18:19.916037] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.611 [2024-07-24 23:18:19.916046] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:47.611 [2024-07-24 23:18:19.916063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:47.611 qpair failed and we were unable to recover it. 
00:32:47.611 [2024-07-24 23:18:19.926121] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.611 [2024-07-24 23:18:19.926276] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.611 [2024-07-24 23:18:19.926295] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.611 [2024-07-24 23:18:19.926305] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.611 [2024-07-24 23:18:19.926314] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:47.611 [2024-07-24 23:18:19.926331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:47.611 qpair failed and we were unable to recover it. 
00:32:47.611 [2024-07-24 23:18:19.935996] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.611 [2024-07-24 23:18:19.936075] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.611 [2024-07-24 23:18:19.936093] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.611 [2024-07-24 23:18:19.936103] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.611 [2024-07-24 23:18:19.936111] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:47.611 [2024-07-24 23:18:19.936128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:47.611 qpair failed and we were unable to recover it. 
00:32:47.611 [2024-07-24 23:18:19.946123] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.611 [2024-07-24 23:18:19.946224] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.611 [2024-07-24 23:18:19.946242] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.611 [2024-07-24 23:18:19.946252] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.611 [2024-07-24 23:18:19.946264] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:47.611 [2024-07-24 23:18:19.946281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:47.611 qpair failed and we were unable to recover it. 
00:32:47.611 [2024-07-24 23:18:19.956057] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.611 [2024-07-24 23:18:19.956144] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.611 [2024-07-24 23:18:19.956163] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.611 [2024-07-24 23:18:19.956173] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.611 [2024-07-24 23:18:19.956181] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:47.611 [2024-07-24 23:18:19.956198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:47.611 qpair failed and we were unable to recover it. 
00:32:47.611 [2024-07-24 23:18:19.966231] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.611 [2024-07-24 23:18:19.966311] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.611 [2024-07-24 23:18:19.966330] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.611 [2024-07-24 23:18:19.966339] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.611 [2024-07-24 23:18:19.966348] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:47.611 [2024-07-24 23:18:19.966365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:47.611 qpair failed and we were unable to recover it. 
00:32:47.611 [2024-07-24 23:18:19.976172] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.611 [2024-07-24 23:18:19.976255] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.611 [2024-07-24 23:18:19.976274] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.611 [2024-07-24 23:18:19.976283] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.611 [2024-07-24 23:18:19.976292] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:47.611 [2024-07-24 23:18:19.976308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:47.611 qpair failed and we were unable to recover it. 
00:32:47.611 [2024-07-24 23:18:19.986212] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.611 [2024-07-24 23:18:19.986291] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.611 [2024-07-24 23:18:19.986310] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.611 [2024-07-24 23:18:19.986319] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.611 [2024-07-24 23:18:19.986328] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:47.611 [2024-07-24 23:18:19.986344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:47.611 qpair failed and we were unable to recover it. 
00:32:47.612 [2024-07-24 23:18:19.996170] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.612 [2024-07-24 23:18:19.996257] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.612 [2024-07-24 23:18:19.996275] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.612 [2024-07-24 23:18:19.996285] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.612 [2024-07-24 23:18:19.996294] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:47.612 [2024-07-24 23:18:19.996310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:47.612 qpair failed and we were unable to recover it. 
00:32:47.612 [2024-07-24 23:18:20.006243] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.612 [2024-07-24 23:18:20.006410] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.612 [2024-07-24 23:18:20.006429] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.612 [2024-07-24 23:18:20.006439] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.612 [2024-07-24 23:18:20.006447] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:47.612 [2024-07-24 23:18:20.006465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:47.612 qpair failed and we were unable to recover it. 
00:32:47.612 [2024-07-24 23:18:20.016218] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.612 [2024-07-24 23:18:20.016299] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.612 [2024-07-24 23:18:20.016317] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.612 [2024-07-24 23:18:20.016327] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.612 [2024-07-24 23:18:20.016336] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:47.612 [2024-07-24 23:18:20.016352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:47.612 qpair failed and we were unable to recover it. 
00:32:47.612 [2024-07-24 23:18:20.026347] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.612 [2024-07-24 23:18:20.026465] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.612 [2024-07-24 23:18:20.026488] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.612 [2024-07-24 23:18:20.026499] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.612 [2024-07-24 23:18:20.026509] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:47.612 [2024-07-24 23:18:20.026528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:47.612 qpair failed and we were unable to recover it. 
00:32:47.612 [2024-07-24 23:18:20.036371] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.612 [2024-07-24 23:18:20.036454] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.612 [2024-07-24 23:18:20.036474] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.612 [2024-07-24 23:18:20.036484] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.612 [2024-07-24 23:18:20.036495] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:47.612 [2024-07-24 23:18:20.036513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:47.612 qpair failed and we were unable to recover it. 
00:32:47.872 [2024-07-24 23:18:20.046358] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.872 [2024-07-24 23:18:20.046474] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.872 [2024-07-24 23:18:20.046493] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.872 [2024-07-24 23:18:20.046503] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.872 [2024-07-24 23:18:20.046512] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:47.872 [2024-07-24 23:18:20.046530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:47.872 qpair failed and we were unable to recover it. 
00:32:47.872 [2024-07-24 23:18:20.056455] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.873 [2024-07-24 23:18:20.056537] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.873 [2024-07-24 23:18:20.056556] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.873 [2024-07-24 23:18:20.056566] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.873 [2024-07-24 23:18:20.056575] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:47.873 [2024-07-24 23:18:20.056592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:47.873 qpair failed and we were unable to recover it. 
00:32:47.873 [2024-07-24 23:18:20.066469] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.873 [2024-07-24 23:18:20.066545] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.873 [2024-07-24 23:18:20.066564] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.873 [2024-07-24 23:18:20.066574] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.873 [2024-07-24 23:18:20.066582] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:47.873 [2024-07-24 23:18:20.066599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:47.873 qpair failed and we were unable to recover it. 
00:32:47.873 [2024-07-24 23:18:20.076478] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.873 [2024-07-24 23:18:20.076583] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.873 [2024-07-24 23:18:20.076604] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.873 [2024-07-24 23:18:20.076614] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.873 [2024-07-24 23:18:20.076624] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:47.873 [2024-07-24 23:18:20.076643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:47.873 qpair failed and we were unable to recover it. 
00:32:47.873 [2024-07-24 23:18:20.086510] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.873 [2024-07-24 23:18:20.086601] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.873 [2024-07-24 23:18:20.086622] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.873 [2024-07-24 23:18:20.086632] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.873 [2024-07-24 23:18:20.086641] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:47.873 [2024-07-24 23:18:20.086659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:47.873 qpair failed and we were unable to recover it. 
00:32:47.873 [2024-07-24 23:18:20.096569] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.873 [2024-07-24 23:18:20.096651] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.873 [2024-07-24 23:18:20.096670] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.873 [2024-07-24 23:18:20.096680] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.873 [2024-07-24 23:18:20.096689] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:47.873 [2024-07-24 23:18:20.096705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:47.873 qpair failed and we were unable to recover it. 
00:32:47.873 [2024-07-24 23:18:20.106568] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.873 [2024-07-24 23:18:20.106648] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.873 [2024-07-24 23:18:20.106667] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.873 [2024-07-24 23:18:20.106677] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.873 [2024-07-24 23:18:20.106685] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:47.873 [2024-07-24 23:18:20.106702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:47.873 qpair failed and we were unable to recover it. 
00:32:47.873 [2024-07-24 23:18:20.116587] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.873 [2024-07-24 23:18:20.116670] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.873 [2024-07-24 23:18:20.116689] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.873 [2024-07-24 23:18:20.116699] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.873 [2024-07-24 23:18:20.116707] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:47.873 [2024-07-24 23:18:20.116730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:47.873 qpair failed and we were unable to recover it. 
00:32:47.873 [2024-07-24 23:18:20.126563] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.873 [2024-07-24 23:18:20.126657] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.873 [2024-07-24 23:18:20.126676] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.873 [2024-07-24 23:18:20.126686] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.873 [2024-07-24 23:18:20.126698] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:47.873 [2024-07-24 23:18:20.126719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:47.873 qpair failed and we were unable to recover it. 
00:32:47.873 [2024-07-24 23:18:20.136652] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.873 [2024-07-24 23:18:20.136738] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.873 [2024-07-24 23:18:20.136758] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.873 [2024-07-24 23:18:20.136767] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.873 [2024-07-24 23:18:20.136776] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:47.873 [2024-07-24 23:18:20.136793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:47.873 qpair failed and we were unable to recover it. 
00:32:47.873 [2024-07-24 23:18:20.146691] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.873 [2024-07-24 23:18:20.146773] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.873 [2024-07-24 23:18:20.146792] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.873 [2024-07-24 23:18:20.146802] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.873 [2024-07-24 23:18:20.146811] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:47.873 [2024-07-24 23:18:20.146827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:47.873 qpair failed and we were unable to recover it. 
00:32:47.873 [2024-07-24 23:18:20.156628] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.873 [2024-07-24 23:18:20.156717] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.873 [2024-07-24 23:18:20.156737] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.873 [2024-07-24 23:18:20.156747] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.873 [2024-07-24 23:18:20.156755] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:47.873 [2024-07-24 23:18:20.156772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:47.873 qpair failed and we were unable to recover it. 
00:32:47.873 [2024-07-24 23:18:20.166712] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.873 [2024-07-24 23:18:20.166798] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.873 [2024-07-24 23:18:20.166817] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.873 [2024-07-24 23:18:20.166827] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.873 [2024-07-24 23:18:20.166835] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:47.873 [2024-07-24 23:18:20.166852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:47.873 qpair failed and we were unable to recover it. 
00:32:47.873 [2024-07-24 23:18:20.176775] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.873 [2024-07-24 23:18:20.176862] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.873 [2024-07-24 23:18:20.176881] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.873 [2024-07-24 23:18:20.176891] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.873 [2024-07-24 23:18:20.176899] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:47.873 [2024-07-24 23:18:20.176916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:47.873 qpair failed and we were unable to recover it. 
00:32:47.873 [2024-07-24 23:18:20.186756] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.873 [2024-07-24 23:18:20.186839] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.873 [2024-07-24 23:18:20.186858] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.873 [2024-07-24 23:18:20.186867] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.874 [2024-07-24 23:18:20.186876] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:47.874 [2024-07-24 23:18:20.186892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:47.874 qpair failed and we were unable to recover it. 
00:32:47.874 [2024-07-24 23:18:20.196810] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.874 [2024-07-24 23:18:20.196898] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.874 [2024-07-24 23:18:20.196917] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.874 [2024-07-24 23:18:20.196927] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.874 [2024-07-24 23:18:20.196936] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:47.874 [2024-07-24 23:18:20.196952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:47.874 qpair failed and we were unable to recover it. 
00:32:47.874 [2024-07-24 23:18:20.206883] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.874 [2024-07-24 23:18:20.207004] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.874 [2024-07-24 23:18:20.207022] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.874 [2024-07-24 23:18:20.207032] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.874 [2024-07-24 23:18:20.207040] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:47.874 [2024-07-24 23:18:20.207057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:47.874 qpair failed and we were unable to recover it. 
00:32:47.874 [2024-07-24 23:18:20.216875] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.874 [2024-07-24 23:18:20.216966] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.874 [2024-07-24 23:18:20.216984] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.874 [2024-07-24 23:18:20.216994] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.874 [2024-07-24 23:18:20.217005] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:47.874 [2024-07-24 23:18:20.217022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:47.874 qpair failed and we were unable to recover it. 
00:32:47.874 [2024-07-24 23:18:20.226905] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.874 [2024-07-24 23:18:20.226998] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.874 [2024-07-24 23:18:20.227016] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.874 [2024-07-24 23:18:20.227026] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.874 [2024-07-24 23:18:20.227034] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:47.874 [2024-07-24 23:18:20.227051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:47.874 qpair failed and we were unable to recover it. 
00:32:47.874 [2024-07-24 23:18:20.236944] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.874 [2024-07-24 23:18:20.237029] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.874 [2024-07-24 23:18:20.237047] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.874 [2024-07-24 23:18:20.237057] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.874 [2024-07-24 23:18:20.237066] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:47.874 [2024-07-24 23:18:20.237082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:47.874 qpair failed and we were unable to recover it. 
00:32:47.874 [2024-07-24 23:18:20.246918] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.874 [2024-07-24 23:18:20.247000] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.874 [2024-07-24 23:18:20.247018] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.874 [2024-07-24 23:18:20.247028] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.874 [2024-07-24 23:18:20.247037] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:47.874 [2024-07-24 23:18:20.247053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:47.874 qpair failed and we were unable to recover it. 
00:32:47.874 [2024-07-24 23:18:20.257039] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.874 [2024-07-24 23:18:20.257147] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.874 [2024-07-24 23:18:20.257167] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.874 [2024-07-24 23:18:20.257178] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.874 [2024-07-24 23:18:20.257187] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:47.874 [2024-07-24 23:18:20.257204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:47.874 qpair failed and we were unable to recover it. 
00:32:47.874 [2024-07-24 23:18:20.267031] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.874 [2024-07-24 23:18:20.267122] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.874 [2024-07-24 23:18:20.267141] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.874 [2024-07-24 23:18:20.267150] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.874 [2024-07-24 23:18:20.267159] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:47.874 [2024-07-24 23:18:20.267175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:47.874 qpair failed and we were unable to recover it. 
00:32:47.874 [2024-07-24 23:18:20.277052] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.874 [2024-07-24 23:18:20.277130] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.874 [2024-07-24 23:18:20.277149] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.874 [2024-07-24 23:18:20.277158] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.874 [2024-07-24 23:18:20.277167] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:47.874 [2024-07-24 23:18:20.277183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:47.874 qpair failed and we were unable to recover it. 
00:32:47.874 [2024-07-24 23:18:20.287051] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.874 [2024-07-24 23:18:20.287130] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.874 [2024-07-24 23:18:20.287148] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.874 [2024-07-24 23:18:20.287158] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.874 [2024-07-24 23:18:20.287166] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:47.874 [2024-07-24 23:18:20.287183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:47.874 qpair failed and we were unable to recover it. 
00:32:47.874 [2024-07-24 23:18:20.297087] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:47.874 [2024-07-24 23:18:20.297168] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:47.874 [2024-07-24 23:18:20.297186] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:47.874 [2024-07-24 23:18:20.297196] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:47.874 [2024-07-24 23:18:20.297204] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:47.874 [2024-07-24 23:18:20.297221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:47.874 qpair failed and we were unable to recover it. 
00:32:48.135 [2024-07-24 23:18:20.307091] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.135 [2024-07-24 23:18:20.307170] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.135 [2024-07-24 23:18:20.307190] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.135 [2024-07-24 23:18:20.307202] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.135 [2024-07-24 23:18:20.307211] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:48.135 [2024-07-24 23:18:20.307229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.135 qpair failed and we were unable to recover it. 
00:32:48.135 [2024-07-24 23:18:20.317158] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.135 [2024-07-24 23:18:20.317244] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.135 [2024-07-24 23:18:20.317263] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.135 [2024-07-24 23:18:20.317273] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.135 [2024-07-24 23:18:20.317282] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:48.135 [2024-07-24 23:18:20.317298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.135 qpair failed and we were unable to recover it. 
00:32:48.135 [2024-07-24 23:18:20.327137] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.135 [2024-07-24 23:18:20.327212] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.135 [2024-07-24 23:18:20.327231] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.135 [2024-07-24 23:18:20.327241] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.135 [2024-07-24 23:18:20.327249] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:48.135 [2024-07-24 23:18:20.327266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.135 qpair failed and we were unable to recover it. 
00:32:48.135 [2024-07-24 23:18:20.337225] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.135 [2024-07-24 23:18:20.337309] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.135 [2024-07-24 23:18:20.337328] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.135 [2024-07-24 23:18:20.337337] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.135 [2024-07-24 23:18:20.337346] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:48.135 [2024-07-24 23:18:20.337362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.135 qpair failed and we were unable to recover it. 
00:32:48.135 [2024-07-24 23:18:20.347271] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.135 [2024-07-24 23:18:20.347353] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.135 [2024-07-24 23:18:20.347371] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.135 [2024-07-24 23:18:20.347381] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.135 [2024-07-24 23:18:20.347389] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:48.135 [2024-07-24 23:18:20.347406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.135 qpair failed and we were unable to recover it. 
00:32:48.135 [2024-07-24 23:18:20.357322] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.135 [2024-07-24 23:18:20.357400] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.135 [2024-07-24 23:18:20.357420] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.135 [2024-07-24 23:18:20.357430] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.135 [2024-07-24 23:18:20.357438] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:48.135 [2024-07-24 23:18:20.357456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.135 qpair failed and we were unable to recover it. 
00:32:48.135 [2024-07-24 23:18:20.367323] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.135 [2024-07-24 23:18:20.367402] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.135 [2024-07-24 23:18:20.367422] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.135 [2024-07-24 23:18:20.367431] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.135 [2024-07-24 23:18:20.367440] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:48.135 [2024-07-24 23:18:20.367457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.135 qpair failed and we were unable to recover it. 
00:32:48.135 [2024-07-24 23:18:20.377373] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.135 [2024-07-24 23:18:20.377550] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.135 [2024-07-24 23:18:20.377569] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.135 [2024-07-24 23:18:20.377579] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.135 [2024-07-24 23:18:20.377587] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:48.135 [2024-07-24 23:18:20.377604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.135 qpair failed and we were unable to recover it. 
00:32:48.135 [2024-07-24 23:18:20.387452] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.135 [2024-07-24 23:18:20.387577] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.135 [2024-07-24 23:18:20.387596] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.135 [2024-07-24 23:18:20.387606] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.135 [2024-07-24 23:18:20.387614] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:48.135 [2024-07-24 23:18:20.387632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.135 qpair failed and we were unable to recover it. 
00:32:48.135 [2024-07-24 23:18:20.397358] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.135 [2024-07-24 23:18:20.397436] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.135 [2024-07-24 23:18:20.397454] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.135 [2024-07-24 23:18:20.397466] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.135 [2024-07-24 23:18:20.397474] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:48.135 [2024-07-24 23:18:20.397491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.135 qpair failed and we were unable to recover it. 
00:32:48.135 [2024-07-24 23:18:20.407443] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.135 [2024-07-24 23:18:20.407523] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.135 [2024-07-24 23:18:20.407542] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.135 [2024-07-24 23:18:20.407551] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.135 [2024-07-24 23:18:20.407560] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:48.135 [2024-07-24 23:18:20.407576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.135 qpair failed and we were unable to recover it. 
00:32:48.135 [2024-07-24 23:18:20.417536] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.136 [2024-07-24 23:18:20.417619] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.136 [2024-07-24 23:18:20.417636] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.136 [2024-07-24 23:18:20.417645] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.136 [2024-07-24 23:18:20.417654] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:48.136 [2024-07-24 23:18:20.417671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.136 qpair failed and we were unable to recover it. 
00:32:48.136 [2024-07-24 23:18:20.427527] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.136 [2024-07-24 23:18:20.427606] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.136 [2024-07-24 23:18:20.427625] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.136 [2024-07-24 23:18:20.427634] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.136 [2024-07-24 23:18:20.427643] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:48.136 [2024-07-24 23:18:20.427659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.136 qpair failed and we were unable to recover it. 
00:32:48.136 [2024-07-24 23:18:20.437510] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.136 [2024-07-24 23:18:20.437591] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.136 [2024-07-24 23:18:20.437610] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.136 [2024-07-24 23:18:20.437619] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.136 [2024-07-24 23:18:20.437628] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:48.136 [2024-07-24 23:18:20.437644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.136 qpair failed and we were unable to recover it. 
00:32:48.136 [2024-07-24 23:18:20.447654] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.136 [2024-07-24 23:18:20.447743] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.136 [2024-07-24 23:18:20.447762] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.136 [2024-07-24 23:18:20.447771] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.136 [2024-07-24 23:18:20.447780] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:48.136 [2024-07-24 23:18:20.447797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.136 qpair failed and we were unable to recover it. 
00:32:48.136 [2024-07-24 23:18:20.457622] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.136 [2024-07-24 23:18:20.457701] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.136 [2024-07-24 23:18:20.457725] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.136 [2024-07-24 23:18:20.457735] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.136 [2024-07-24 23:18:20.457744] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:48.136 [2024-07-24 23:18:20.457761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.136 qpair failed and we were unable to recover it. 
00:32:48.136 [2024-07-24 23:18:20.467606] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.136 [2024-07-24 23:18:20.467684] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.136 [2024-07-24 23:18:20.467702] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.136 [2024-07-24 23:18:20.467711] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.136 [2024-07-24 23:18:20.467724] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:48.136 [2024-07-24 23:18:20.467741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.136 qpair failed and we were unable to recover it. 
00:32:48.136 [2024-07-24 23:18:20.477583] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.136 [2024-07-24 23:18:20.477675] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.136 [2024-07-24 23:18:20.477694] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.136 [2024-07-24 23:18:20.477703] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.136 [2024-07-24 23:18:20.477711] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:48.136 [2024-07-24 23:18:20.477731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.136 qpair failed and we were unable to recover it. 
00:32:48.136 [2024-07-24 23:18:20.487646] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.136 [2024-07-24 23:18:20.487730] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.136 [2024-07-24 23:18:20.487748] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.136 [2024-07-24 23:18:20.487761] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.136 [2024-07-24 23:18:20.487769] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:48.136 [2024-07-24 23:18:20.487786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.136 qpair failed and we were unable to recover it. 
00:32:48.136 [2024-07-24 23:18:20.497616] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.136 [2024-07-24 23:18:20.497693] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.136 [2024-07-24 23:18:20.497711] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.136 [2024-07-24 23:18:20.497724] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.136 [2024-07-24 23:18:20.497733] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:48.136 [2024-07-24 23:18:20.497749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.136 qpair failed and we were unable to recover it. 
00:32:48.136 [2024-07-24 23:18:20.507719] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.136 [2024-07-24 23:18:20.507803] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.136 [2024-07-24 23:18:20.507822] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.136 [2024-07-24 23:18:20.507831] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.136 [2024-07-24 23:18:20.507839] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:48.136 [2024-07-24 23:18:20.507856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.136 qpair failed and we were unable to recover it. 
00:32:48.136 [2024-07-24 23:18:20.517764] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.136 [2024-07-24 23:18:20.517849] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.136 [2024-07-24 23:18:20.517868] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.136 [2024-07-24 23:18:20.517877] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.136 [2024-07-24 23:18:20.517886] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:48.136 [2024-07-24 23:18:20.517902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.136 qpair failed and we were unable to recover it. 
00:32:48.136 [2024-07-24 23:18:20.527777] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.136 [2024-07-24 23:18:20.527861] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.136 [2024-07-24 23:18:20.527879] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.136 [2024-07-24 23:18:20.527888] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.136 [2024-07-24 23:18:20.527897] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:48.136 [2024-07-24 23:18:20.527914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.136 qpair failed and we were unable to recover it. 
00:32:48.136 [2024-07-24 23:18:20.537805] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.136 [2024-07-24 23:18:20.537887] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.136 [2024-07-24 23:18:20.537906] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.136 [2024-07-24 23:18:20.537916] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.136 [2024-07-24 23:18:20.537924] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:48.136 [2024-07-24 23:18:20.537941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.136 qpair failed and we were unable to recover it. 
00:32:48.136 [2024-07-24 23:18:20.547824] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.136 [2024-07-24 23:18:20.547988] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.136 [2024-07-24 23:18:20.548007] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.137 [2024-07-24 23:18:20.548016] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.137 [2024-07-24 23:18:20.548025] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:48.137 [2024-07-24 23:18:20.548042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.137 qpair failed and we were unable to recover it. 
00:32:48.137 [2024-07-24 23:18:20.557894] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.137 [2024-07-24 23:18:20.557984] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.137 [2024-07-24 23:18:20.558002] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.137 [2024-07-24 23:18:20.558012] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.137 [2024-07-24 23:18:20.558021] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:48.137 [2024-07-24 23:18:20.558038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.137 qpair failed and we were unable to recover it. 
00:32:48.397 [2024-07-24 23:18:20.567921] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.397 [2024-07-24 23:18:20.567999] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.397 [2024-07-24 23:18:20.568017] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.397 [2024-07-24 23:18:20.568027] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.397 [2024-07-24 23:18:20.568035] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:48.397 [2024-07-24 23:18:20.568051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.397 qpair failed and we were unable to recover it. 
00:32:48.397 [2024-07-24 23:18:20.577944] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.397 [2024-07-24 23:18:20.578028] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.397 [2024-07-24 23:18:20.578047] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.397 [2024-07-24 23:18:20.578059] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.397 [2024-07-24 23:18:20.578068] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:48.397 [2024-07-24 23:18:20.578085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.397 qpair failed and we were unable to recover it. 
00:32:48.397 [2024-07-24 23:18:20.587955] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.397 [2024-07-24 23:18:20.588035] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.397 [2024-07-24 23:18:20.588054] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.397 [2024-07-24 23:18:20.588063] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.397 [2024-07-24 23:18:20.588072] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:48.397 [2024-07-24 23:18:20.588089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.397 qpair failed and we were unable to recover it. 
00:32:48.397 [2024-07-24 23:18:20.597968] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.397 [2024-07-24 23:18:20.598053] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.397 [2024-07-24 23:18:20.598072] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.397 [2024-07-24 23:18:20.598081] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.397 [2024-07-24 23:18:20.598090] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:48.397 [2024-07-24 23:18:20.598107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.397 qpair failed and we were unable to recover it. 
00:32:48.397 [2024-07-24 23:18:20.608038] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.397 [2024-07-24 23:18:20.608145] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.397 [2024-07-24 23:18:20.608164] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.397 [2024-07-24 23:18:20.608173] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.397 [2024-07-24 23:18:20.608182] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:48.397 [2024-07-24 23:18:20.608198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.397 qpair failed and we were unable to recover it. 
00:32:48.397 [2024-07-24 23:18:20.618066] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.397 [2024-07-24 23:18:20.618145] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.397 [2024-07-24 23:18:20.618164] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.397 [2024-07-24 23:18:20.618173] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.397 [2024-07-24 23:18:20.618182] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:48.397 [2024-07-24 23:18:20.618198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.397 qpair failed and we were unable to recover it. 
00:32:48.397 [2024-07-24 23:18:20.628003] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.397 [2024-07-24 23:18:20.628083] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.397 [2024-07-24 23:18:20.628101] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.397 [2024-07-24 23:18:20.628111] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.397 [2024-07-24 23:18:20.628119] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:48.398 [2024-07-24 23:18:20.628136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.398 qpair failed and we were unable to recover it. 
00:32:48.398 [2024-07-24 23:18:20.638005] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.398 [2024-07-24 23:18:20.638090] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.398 [2024-07-24 23:18:20.638108] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.398 [2024-07-24 23:18:20.638118] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.398 [2024-07-24 23:18:20.638126] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:48.398 [2024-07-24 23:18:20.638143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.398 qpair failed and we were unable to recover it. 
00:32:48.398 [2024-07-24 23:18:20.648112] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.398 [2024-07-24 23:18:20.648191] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.398 [2024-07-24 23:18:20.648209] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.398 [2024-07-24 23:18:20.648219] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.398 [2024-07-24 23:18:20.648227] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:48.398 [2024-07-24 23:18:20.648244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.398 qpair failed and we were unable to recover it. 
00:32:48.398 [2024-07-24 23:18:20.658125] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.398 [2024-07-24 23:18:20.658207] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.398 [2024-07-24 23:18:20.658225] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.398 [2024-07-24 23:18:20.658235] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.398 [2024-07-24 23:18:20.658243] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:48.398 [2024-07-24 23:18:20.658260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.398 qpair failed and we were unable to recover it. 
00:32:48.398 [2024-07-24 23:18:20.668159] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.398 [2024-07-24 23:18:20.668239] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.398 [2024-07-24 23:18:20.668259] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.398 [2024-07-24 23:18:20.668269] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.398 [2024-07-24 23:18:20.668278] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:48.398 [2024-07-24 23:18:20.668294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.398 qpair failed and we were unable to recover it. 
00:32:48.398 [2024-07-24 23:18:20.678195] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.398 [2024-07-24 23:18:20.678361] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.398 [2024-07-24 23:18:20.678380] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.398 [2024-07-24 23:18:20.678389] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.398 [2024-07-24 23:18:20.678398] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:48.398 [2024-07-24 23:18:20.678414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.398 qpair failed and we were unable to recover it. 
00:32:48.398 [2024-07-24 23:18:20.688244] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.398 [2024-07-24 23:18:20.688322] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.398 [2024-07-24 23:18:20.688341] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.398 [2024-07-24 23:18:20.688350] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.398 [2024-07-24 23:18:20.688359] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:48.398 [2024-07-24 23:18:20.688375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.398 qpair failed and we were unable to recover it. 
00:32:48.398 [2024-07-24 23:18:20.698158] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.398 [2024-07-24 23:18:20.698240] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.398 [2024-07-24 23:18:20.698258] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.398 [2024-07-24 23:18:20.698268] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.398 [2024-07-24 23:18:20.698276] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:48.398 [2024-07-24 23:18:20.698292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.398 qpair failed and we were unable to recover it. 
00:32:48.398 [2024-07-24 23:18:20.708280] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.398 [2024-07-24 23:18:20.708359] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.398 [2024-07-24 23:18:20.708379] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.398 [2024-07-24 23:18:20.708389] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.398 [2024-07-24 23:18:20.708397] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:48.398 [2024-07-24 23:18:20.708414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.398 qpair failed and we were unable to recover it. 
00:32:48.398 [2024-07-24 23:18:20.718276] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.398 [2024-07-24 23:18:20.718352] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.398 [2024-07-24 23:18:20.718371] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.398 [2024-07-24 23:18:20.718381] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.398 [2024-07-24 23:18:20.718389] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:48.398 [2024-07-24 23:18:20.718407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.398 qpair failed and we were unable to recover it. 
00:32:48.398 [2024-07-24 23:18:20.728319] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.398 [2024-07-24 23:18:20.728392] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.398 [2024-07-24 23:18:20.728411] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.398 [2024-07-24 23:18:20.728420] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.398 [2024-07-24 23:18:20.728429] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:48.398 [2024-07-24 23:18:20.728445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.398 qpair failed and we were unable to recover it. 
00:32:48.398 [2024-07-24 23:18:20.738360] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.398 [2024-07-24 23:18:20.738432] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.398 [2024-07-24 23:18:20.738450] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.398 [2024-07-24 23:18:20.738460] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.398 [2024-07-24 23:18:20.738468] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:48.398 [2024-07-24 23:18:20.738485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.399 qpair failed and we were unable to recover it. 
00:32:48.399 [2024-07-24 23:18:20.748382] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.399 [2024-07-24 23:18:20.748462] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.399 [2024-07-24 23:18:20.748482] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.399 [2024-07-24 23:18:20.748492] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.399 [2024-07-24 23:18:20.748500] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:48.399 [2024-07-24 23:18:20.748517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.399 qpair failed and we were unable to recover it. 
00:32:48.399 [2024-07-24 23:18:20.758415] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.399 [2024-07-24 23:18:20.758499] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.399 [2024-07-24 23:18:20.758521] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.399 [2024-07-24 23:18:20.758531] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.399 [2024-07-24 23:18:20.758539] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:48.399 [2024-07-24 23:18:20.758557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.399 qpair failed and we were unable to recover it. 
00:32:48.399 [2024-07-24 23:18:20.768419] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.399 [2024-07-24 23:18:20.768498] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.399 [2024-07-24 23:18:20.768518] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.399 [2024-07-24 23:18:20.768528] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.399 [2024-07-24 23:18:20.768536] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:48.399 [2024-07-24 23:18:20.768553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.399 qpair failed and we were unable to recover it. 
00:32:48.399 [2024-07-24 23:18:20.778444] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.399 [2024-07-24 23:18:20.778520] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.399 [2024-07-24 23:18:20.778539] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.399 [2024-07-24 23:18:20.778549] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.399 [2024-07-24 23:18:20.778557] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:48.399 [2024-07-24 23:18:20.778573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.399 qpair failed and we were unable to recover it. 
00:32:48.399 [2024-07-24 23:18:20.788513] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.399 [2024-07-24 23:18:20.788595] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.399 [2024-07-24 23:18:20.788613] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.399 [2024-07-24 23:18:20.788623] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.399 [2024-07-24 23:18:20.788631] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:48.399 [2024-07-24 23:18:20.788648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.399 qpair failed and we were unable to recover it. 
00:32:48.399 [2024-07-24 23:18:20.798502] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.399 [2024-07-24 23:18:20.798585] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.399 [2024-07-24 23:18:20.798603] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.399 [2024-07-24 23:18:20.798613] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.399 [2024-07-24 23:18:20.798621] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:48.399 [2024-07-24 23:18:20.798638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.399 qpair failed and we were unable to recover it. 
00:32:48.399 [2024-07-24 23:18:20.808474] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.399 [2024-07-24 23:18:20.808551] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.399 [2024-07-24 23:18:20.808570] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.399 [2024-07-24 23:18:20.808580] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.399 [2024-07-24 23:18:20.808588] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:48.399 [2024-07-24 23:18:20.808604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.399 qpair failed and we were unable to recover it. 
00:32:48.399 [2024-07-24 23:18:20.818622] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.399 [2024-07-24 23:18:20.818699] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.399 [2024-07-24 23:18:20.818721] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.399 [2024-07-24 23:18:20.818732] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.399 [2024-07-24 23:18:20.818740] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:48.399 [2024-07-24 23:18:20.818756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.399 qpair failed and we were unable to recover it. 
00:32:48.658 [2024-07-24 23:18:20.828614] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.658 [2024-07-24 23:18:20.828693] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.658 [2024-07-24 23:18:20.828711] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.658 [2024-07-24 23:18:20.828726] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.658 [2024-07-24 23:18:20.828734] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:48.658 [2024-07-24 23:18:20.828751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.658 qpair failed and we were unable to recover it. 
00:32:48.658 [2024-07-24 23:18:20.838643] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.658 [2024-07-24 23:18:20.838730] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.658 [2024-07-24 23:18:20.838749] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.658 [2024-07-24 23:18:20.838759] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.658 [2024-07-24 23:18:20.838767] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:48.658 [2024-07-24 23:18:20.838784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.658 qpair failed and we were unable to recover it. 
00:32:48.658 [2024-07-24 23:18:20.848590] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.658 [2024-07-24 23:18:20.848672] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.658 [2024-07-24 23:18:20.848694] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.658 [2024-07-24 23:18:20.848703] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.658 [2024-07-24 23:18:20.848712] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:48.658 [2024-07-24 23:18:20.848733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.658 qpair failed and we were unable to recover it. 
00:32:48.658 [2024-07-24 23:18:20.858703] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.658 [2024-07-24 23:18:20.858783] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.658 [2024-07-24 23:18:20.858803] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.658 [2024-07-24 23:18:20.858812] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.658 [2024-07-24 23:18:20.858821] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:48.658 [2024-07-24 23:18:20.858837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.658 qpair failed and we were unable to recover it. 
00:32:48.658 [2024-07-24 23:18:20.868749] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.659 [2024-07-24 23:18:20.868828] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.659 [2024-07-24 23:18:20.868847] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.659 [2024-07-24 23:18:20.868856] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.659 [2024-07-24 23:18:20.868864] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:48.659 [2024-07-24 23:18:20.868881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.659 qpair failed and we were unable to recover it. 
00:32:48.659 [2024-07-24 23:18:20.878800] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.659 [2024-07-24 23:18:20.878902] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.659 [2024-07-24 23:18:20.878921] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.659 [2024-07-24 23:18:20.878931] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.659 [2024-07-24 23:18:20.878939] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:48.659 [2024-07-24 23:18:20.878956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.659 qpair failed and we were unable to recover it. 
00:32:48.659 [2024-07-24 23:18:20.888727] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.659 [2024-07-24 23:18:20.888806] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.659 [2024-07-24 23:18:20.888825] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.659 [2024-07-24 23:18:20.888834] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.659 [2024-07-24 23:18:20.888842] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:48.659 [2024-07-24 23:18:20.888863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.659 qpair failed and we were unable to recover it. 
00:32:48.659 [2024-07-24 23:18:20.898824] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.659 [2024-07-24 23:18:20.898906] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.659 [2024-07-24 23:18:20.898924] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.659 [2024-07-24 23:18:20.898934] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.659 [2024-07-24 23:18:20.898942] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:48.659 [2024-07-24 23:18:20.898959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.659 qpair failed and we were unable to recover it. 
00:32:48.659 [2024-07-24 23:18:20.908841] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.659 [2024-07-24 23:18:20.908927] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.659 [2024-07-24 23:18:20.908945] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.659 [2024-07-24 23:18:20.908955] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.659 [2024-07-24 23:18:20.908963] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:48.659 [2024-07-24 23:18:20.908980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.659 qpair failed and we were unable to recover it. 
00:32:48.659 [2024-07-24 23:18:20.918899] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.659 [2024-07-24 23:18:20.918996] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.659 [2024-07-24 23:18:20.919014] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.659 [2024-07-24 23:18:20.919023] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.659 [2024-07-24 23:18:20.919032] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2315f90 00:32:48.659 [2024-07-24 23:18:20.919049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.659 qpair failed and we were unable to recover it. 00:32:48.918 Controller properly reset. 00:32:48.918 Initializing NVMe Controllers 00:32:48.918 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:48.918 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:48.918 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:32:48.918 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:32:48.918 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:32:48.918 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:32:48.918 Initialization complete. Launching workers. 
00:32:48.918 Starting thread on core 1 00:32:48.918 Starting thread on core 2 00:32:48.918 Starting thread on core 3 00:32:48.918 Starting thread on core 0 00:32:48.918 23:18:21 -- host/target_disconnect.sh@59 -- # sync 00:32:48.918 00:32:48.918 real 0m11.459s 00:32:48.918 user 0m20.494s 00:32:48.918 sys 0m4.965s 00:32:48.918 23:18:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:48.918 23:18:21 -- common/autotest_common.sh@10 -- # set +x 00:32:48.918 ************************************ 00:32:48.918 END TEST nvmf_target_disconnect_tc2 00:32:48.918 ************************************ 00:32:48.918 23:18:21 -- host/target_disconnect.sh@80 -- # '[' -n '' ']' 00:32:48.918 23:18:21 -- host/target_disconnect.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:32:48.918 23:18:21 -- host/target_disconnect.sh@85 -- # nvmftestfini 00:32:48.918 23:18:21 -- nvmf/common.sh@476 -- # nvmfcleanup 00:32:48.918 23:18:21 -- nvmf/common.sh@116 -- # sync 00:32:48.918 23:18:21 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:32:48.918 23:18:21 -- nvmf/common.sh@119 -- # set +e 00:32:48.918 23:18:21 -- nvmf/common.sh@120 -- # for i in {1..20} 00:32:48.918 23:18:21 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:32:48.918 rmmod nvme_tcp 00:32:48.918 rmmod nvme_fabrics 00:32:48.918 rmmod nvme_keyring 00:32:48.918 23:18:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:32:48.918 23:18:21 -- nvmf/common.sh@123 -- # set -e 00:32:48.918 23:18:21 -- nvmf/common.sh@124 -- # return 0 00:32:48.918 23:18:21 -- nvmf/common.sh@477 -- # '[' -n 3413991 ']' 00:32:48.918 23:18:21 -- nvmf/common.sh@478 -- # killprocess 3413991 00:32:48.918 23:18:21 -- common/autotest_common.sh@926 -- # '[' -z 3413991 ']' 00:32:48.918 23:18:21 -- common/autotest_common.sh@930 -- # kill -0 3413991 00:32:48.918 23:18:21 -- common/autotest_common.sh@931 -- # uname 00:32:48.918 23:18:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:48.918 23:18:21 -- common/autotest_common.sh@932 -- # 
ps --no-headers -o comm= 3413991 00:32:48.918 23:18:21 -- common/autotest_common.sh@932 -- # process_name=reactor_4 00:32:48.918 23:18:21 -- common/autotest_common.sh@936 -- # '[' reactor_4 = sudo ']' 00:32:48.918 23:18:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3413991' 00:32:48.918 killing process with pid 3413991 00:32:48.918 23:18:21 -- common/autotest_common.sh@945 -- # kill 3413991 00:32:48.918 23:18:21 -- common/autotest_common.sh@950 -- # wait 3413991 00:32:49.177 23:18:21 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:32:49.177 23:18:21 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:32:49.177 23:18:21 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:32:49.177 23:18:21 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:49.177 23:18:21 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:32:49.177 23:18:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:49.177 23:18:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:49.177 23:18:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:51.711 23:18:23 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:32:51.711 00:32:51.711 real 0m21.060s 00:32:51.711 user 0m48.733s 00:32:51.711 sys 0m10.636s 00:32:51.711 23:18:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:51.711 23:18:23 -- common/autotest_common.sh@10 -- # set +x 00:32:51.711 ************************************ 00:32:51.711 END TEST nvmf_target_disconnect 00:32:51.711 ************************************ 00:32:51.711 23:18:23 -- nvmf/nvmf.sh@127 -- # timing_exit host 00:32:51.711 23:18:23 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:51.711 23:18:23 -- common/autotest_common.sh@10 -- # set +x 00:32:51.711 23:18:23 -- nvmf/nvmf.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:32:51.711 00:32:51.711 real 25m55.801s 00:32:51.711 user 67m44.296s 00:32:51.711 sys 8m31.782s 00:32:51.711 23:18:23 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:32:51.711 23:18:23 -- common/autotest_common.sh@10 -- # set +x 00:32:51.711 ************************************ 00:32:51.711 END TEST nvmf_tcp 00:32:51.711 ************************************ 00:32:51.711 23:18:23 -- spdk/autotest.sh@296 -- # [[ 0 -eq 0 ]] 00:32:51.711 23:18:23 -- spdk/autotest.sh@297 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:32:51.711 23:18:23 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:32:51.711 23:18:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:51.711 23:18:23 -- common/autotest_common.sh@10 -- # set +x 00:32:51.711 ************************************ 00:32:51.711 START TEST spdkcli_nvmf_tcp 00:32:51.711 ************************************ 00:32:51.711 23:18:23 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:32:51.711 * Looking for test storage... 
00:32:51.711 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:32:51.711 23:18:23 -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:32:51.711 23:18:23 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:32:51.711 23:18:23 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:32:51.711 23:18:23 -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:51.711 23:18:23 -- nvmf/common.sh@7 -- # uname -s 00:32:51.711 23:18:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:51.711 23:18:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:51.711 23:18:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:51.711 23:18:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:51.711 23:18:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:51.711 23:18:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:51.711 23:18:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:51.711 23:18:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:51.711 23:18:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:51.711 23:18:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:51.711 23:18:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:32:51.711 23:18:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:32:51.711 23:18:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:51.711 23:18:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:51.711 23:18:23 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:51.711 23:18:23 -- nvmf/common.sh@44 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:51.711 23:18:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:51.711 23:18:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:51.711 23:18:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:51.711 23:18:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:51.712 23:18:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:51.712 23:18:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:51.712 23:18:23 -- paths/export.sh@5 -- # export PATH 00:32:51.712 23:18:23 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:51.712 23:18:23 -- nvmf/common.sh@46 -- # : 0 00:32:51.712 23:18:23 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:32:51.712 23:18:23 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:32:51.712 23:18:23 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:32:51.712 23:18:23 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:51.712 23:18:23 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:51.712 23:18:23 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:32:51.712 23:18:23 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:32:51.712 23:18:23 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:32:51.712 23:18:23 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:32:51.712 23:18:23 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:32:51.712 23:18:23 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:32:51.712 23:18:23 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:32:51.712 23:18:23 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:51.712 23:18:23 -- common/autotest_common.sh@10 -- # set +x 00:32:51.712 23:18:23 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:32:51.712 23:18:23 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3415729 00:32:51.712 23:18:23 -- spdkcli/common.sh@34 -- # waitforlisten 3415729 00:32:51.712 23:18:23 -- common/autotest_common.sh@819 -- # '[' -z 3415729 ']' 00:32:51.712 23:18:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:51.712 23:18:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:51.712 23:18:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:51.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:51.712 23:18:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:51.712 23:18:23 -- common/autotest_common.sh@10 -- # set +x 00:32:51.712 23:18:23 -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:32:51.712 [2024-07-24 23:18:23.850876] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:32:51.712 [2024-07-24 23:18:23.850931] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3415729 ] 00:32:51.712 EAL: No free 2048 kB hugepages reported on node 1 00:32:51.712 [2024-07-24 23:18:23.921840] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:51.712 [2024-07-24 23:18:23.960126] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:32:51.712 [2024-07-24 23:18:23.960270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:51.712 [2024-07-24 23:18:23.960272] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:52.279 23:18:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:52.279 23:18:24 -- common/autotest_common.sh@852 -- # return 0 00:32:52.279 23:18:24 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:32:52.279 23:18:24 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:52.279 23:18:24 -- common/autotest_common.sh@10 -- # set +x 00:32:52.279 23:18:24 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:32:52.279 23:18:24 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:32:52.279 23:18:24 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:32:52.279 23:18:24 -- common/autotest_common.sh@712 
-- # xtrace_disable 00:32:52.279 23:18:24 -- common/autotest_common.sh@10 -- # set +x 00:32:52.279 23:18:24 -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:32:52.279 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:32:52.279 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:32:52.279 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:32:52.279 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:32:52.279 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:32:52.279 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:32:52.279 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:32:52.279 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:32:52.279 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:32:52.279 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:52.279 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:52.279 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:32:52.279 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:52.279 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:52.279 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 
00:32:52.279 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:52.279 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:32:52.279 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:32:52.279 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:52.279 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:32:52.279 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:32:52.279 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:32:52.279 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:32:52.279 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:52.279 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:32:52.279 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:32:52.279 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:32:52.279 ' 00:32:52.846 [2024-07-24 23:18:25.015773] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:32:54.748 [2024-07-24 23:18:27.051809] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:56.123 [2024-07-24 23:18:28.227853] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 
00:32:58.024 [2024-07-24 23:18:30.394749] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:32:59.923 [2024-07-24 23:18:32.252710] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:33:01.297 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:33:01.297 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:33:01.297 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:33:01.297 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:33:01.297 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:33:01.297 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:33:01.297 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:33:01.297 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:33:01.297 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:33:01.297 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:33:01.297 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:01.297 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:01.297 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:33:01.297 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:01.297 Executing command: ['/nvmf/subsystem 
create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:01.297 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:33:01.297 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:01.297 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:33:01.297 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:33:01.297 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:01.297 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:33:01.297 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:33:01.297 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:33:01.298 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:33:01.298 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:01.298 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:33:01.298 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:33:01.298 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:33:01.556 23:18:33 -- spdkcli/nvmf.sh@66 -- # timing_exit 
spdkcli_create_nvmf_config 00:33:01.556 23:18:33 -- common/autotest_common.sh@718 -- # xtrace_disable 00:33:01.556 23:18:33 -- common/autotest_common.sh@10 -- # set +x 00:33:01.556 23:18:33 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:33:01.556 23:18:33 -- common/autotest_common.sh@712 -- # xtrace_disable 00:33:01.556 23:18:33 -- common/autotest_common.sh@10 -- # set +x 00:33:01.556 23:18:33 -- spdkcli/nvmf.sh@69 -- # check_match 00:33:01.556 23:18:33 -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:33:01.814 23:18:34 -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:33:01.814 23:18:34 -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:33:01.814 23:18:34 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:33:01.814 23:18:34 -- common/autotest_common.sh@718 -- # xtrace_disable 00:33:01.814 23:18:34 -- common/autotest_common.sh@10 -- # set +x 00:33:02.072 23:18:34 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:33:02.072 23:18:34 -- common/autotest_common.sh@712 -- # xtrace_disable 00:33:02.072 23:18:34 -- common/autotest_common.sh@10 -- # set +x 00:33:02.072 23:18:34 -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:33:02.072 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:33:02.072 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:33:02.073 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:33:02.073 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:33:02.073 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:33:02.073 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:33:02.073 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:33:02.073 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:33:02.073 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:33:02.073 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:33:02.073 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:33:02.073 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:33:02.073 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:33:02.073 ' 00:33:07.338 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:33:07.338 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:33:07.338 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:33:07.338 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:33:07.338 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:33:07.338 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:33:07.338 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:33:07.338 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:33:07.338 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:33:07.338 Executing command: 
['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:33:07.338 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:33:07.338 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:33:07.338 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:33:07.338 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:33:07.338 23:18:39 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:33:07.338 23:18:39 -- common/autotest_common.sh@718 -- # xtrace_disable 00:33:07.338 23:18:39 -- common/autotest_common.sh@10 -- # set +x 00:33:07.338 23:18:39 -- spdkcli/nvmf.sh@90 -- # killprocess 3415729 00:33:07.338 23:18:39 -- common/autotest_common.sh@926 -- # '[' -z 3415729 ']' 00:33:07.338 23:18:39 -- common/autotest_common.sh@930 -- # kill -0 3415729 00:33:07.338 23:18:39 -- common/autotest_common.sh@931 -- # uname 00:33:07.338 23:18:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:33:07.338 23:18:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3415729 00:33:07.338 23:18:39 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:33:07.338 23:18:39 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:33:07.338 23:18:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3415729' 00:33:07.338 killing process with pid 3415729 00:33:07.339 23:18:39 -- common/autotest_common.sh@945 -- # kill 3415729 00:33:07.339 [2024-07-24 23:18:39.747402] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:33:07.339 23:18:39 -- common/autotest_common.sh@950 -- # wait 3415729 00:33:07.597 23:18:39 -- spdkcli/nvmf.sh@1 -- # cleanup 00:33:07.597 23:18:39 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:33:07.597 23:18:39 -- spdkcli/common.sh@13 -- # '[' -n 3415729 ']' 00:33:07.597 23:18:39 -- spdkcli/common.sh@14 -- # 
killprocess 3415729 00:33:07.597 23:18:39 -- common/autotest_common.sh@926 -- # '[' -z 3415729 ']' 00:33:07.597 23:18:39 -- common/autotest_common.sh@930 -- # kill -0 3415729 00:33:07.597 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (3415729) - No such process 00:33:07.597 23:18:39 -- common/autotest_common.sh@953 -- # echo 'Process with pid 3415729 is not found' 00:33:07.597 Process with pid 3415729 is not found 00:33:07.597 23:18:39 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:33:07.597 23:18:39 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:33:07.597 23:18:39 -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:33:07.597 00:33:07.597 real 0m16.214s 00:33:07.597 user 0m33.991s 00:33:07.597 sys 0m0.924s 00:33:07.597 23:18:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:07.597 23:18:39 -- common/autotest_common.sh@10 -- # set +x 00:33:07.597 ************************************ 00:33:07.597 END TEST spdkcli_nvmf_tcp 00:33:07.597 ************************************ 00:33:07.597 23:18:39 -- spdk/autotest.sh@298 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:33:07.597 23:18:39 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:33:07.597 23:18:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:07.597 23:18:39 -- common/autotest_common.sh@10 -- # set +x 00:33:07.597 ************************************ 00:33:07.597 START TEST nvmf_identify_passthru 00:33:07.597 ************************************ 00:33:07.597 23:18:39 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:33:07.883 * Looking for test storage... 
00:33:07.883 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:07.883 23:18:40 -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:07.883 23:18:40 -- nvmf/common.sh@7 -- # uname -s 00:33:07.883 23:18:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:07.883 23:18:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:07.883 23:18:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:07.883 23:18:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:07.883 23:18:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:07.883 23:18:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:07.883 23:18:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:07.883 23:18:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:07.883 23:18:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:07.883 23:18:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:07.883 23:18:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:33:07.883 23:18:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:33:07.883 23:18:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:07.883 23:18:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:07.883 23:18:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:07.883 23:18:40 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:07.883 23:18:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:07.883 23:18:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:07.883 23:18:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:07.883 23:18:40 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:07.883 23:18:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:07.883 23:18:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:07.883 23:18:40 -- paths/export.sh@5 -- # export PATH 00:33:07.883 23:18:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:07.883 23:18:40 -- nvmf/common.sh@46 -- # : 0 00:33:07.883 23:18:40 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:33:07.883 23:18:40 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:33:07.883 
23:18:40 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:33:07.883 23:18:40 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:07.883 23:18:40 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:07.883 23:18:40 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:33:07.883 23:18:40 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:33:07.883 23:18:40 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:33:07.883 23:18:40 -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:07.883 23:18:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:07.883 23:18:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:07.883 23:18:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:07.883 23:18:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:07.883 23:18:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:07.883 23:18:40 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:07.883 23:18:40 -- paths/export.sh@5 -- # export PATH 00:33:07.883 23:18:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:07.883 23:18:40 -- target/identify_passthru.sh@12 -- # nvmftestinit 00:33:07.883 23:18:40 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:33:07.883 23:18:40 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:07.883 23:18:40 -- nvmf/common.sh@436 -- # prepare_net_devs 00:33:07.883 23:18:40 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:33:07.883 23:18:40 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:33:07.883 23:18:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:07.883 23:18:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:07.883 23:18:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:07.883 23:18:40 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:33:07.883 23:18:40 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:33:07.883 23:18:40 -- nvmf/common.sh@284 -- # xtrace_disable 00:33:07.883 23:18:40 -- 
common/autotest_common.sh@10 -- # set +x 00:33:14.443 23:18:46 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:33:14.444 23:18:46 -- nvmf/common.sh@290 -- # pci_devs=() 00:33:14.444 23:18:46 -- nvmf/common.sh@290 -- # local -a pci_devs 00:33:14.444 23:18:46 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:33:14.444 23:18:46 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:33:14.444 23:18:46 -- nvmf/common.sh@292 -- # pci_drivers=() 00:33:14.444 23:18:46 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:33:14.444 23:18:46 -- nvmf/common.sh@294 -- # net_devs=() 00:33:14.444 23:18:46 -- nvmf/common.sh@294 -- # local -ga net_devs 00:33:14.444 23:18:46 -- nvmf/common.sh@295 -- # e810=() 00:33:14.444 23:18:46 -- nvmf/common.sh@295 -- # local -ga e810 00:33:14.444 23:18:46 -- nvmf/common.sh@296 -- # x722=() 00:33:14.444 23:18:46 -- nvmf/common.sh@296 -- # local -ga x722 00:33:14.444 23:18:46 -- nvmf/common.sh@297 -- # mlx=() 00:33:14.444 23:18:46 -- nvmf/common.sh@297 -- # local -ga mlx 00:33:14.444 23:18:46 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:14.444 23:18:46 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:14.444 23:18:46 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:14.444 23:18:46 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:14.444 23:18:46 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:14.444 23:18:46 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:14.444 23:18:46 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:14.444 23:18:46 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:14.444 23:18:46 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:14.444 23:18:46 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:14.444 23:18:46 -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:14.444 23:18:46 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:33:14.444 23:18:46 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:33:14.444 23:18:46 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:33:14.444 23:18:46 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:33:14.444 23:18:46 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:33:14.444 23:18:46 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:33:14.444 23:18:46 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:33:14.444 23:18:46 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:33:14.444 Found 0000:af:00.0 (0x8086 - 0x159b) 00:33:14.444 23:18:46 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:33:14.444 23:18:46 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:33:14.444 23:18:46 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:14.444 23:18:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:14.444 23:18:46 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:33:14.444 23:18:46 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:33:14.444 23:18:46 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:33:14.444 Found 0000:af:00.1 (0x8086 - 0x159b) 00:33:14.444 23:18:46 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:33:14.444 23:18:46 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:33:14.444 23:18:46 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:14.444 23:18:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:14.444 23:18:46 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:33:14.444 23:18:46 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:33:14.444 23:18:46 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:33:14.444 23:18:46 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:33:14.444 23:18:46 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:33:14.444 23:18:46 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:33:14.444 23:18:46 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:33:14.444 23:18:46 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:14.444 23:18:46 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:33:14.444 Found net devices under 0000:af:00.0: cvl_0_0 00:33:14.444 23:18:46 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:33:14.444 23:18:46 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:33:14.444 23:18:46 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:14.444 23:18:46 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:33:14.444 23:18:46 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:14.444 23:18:46 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:33:14.444 Found net devices under 0000:af:00.1: cvl_0_1 00:33:14.444 23:18:46 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:33:14.444 23:18:46 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:33:14.444 23:18:46 -- nvmf/common.sh@402 -- # is_hw=yes 00:33:14.444 23:18:46 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:33:14.444 23:18:46 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:33:14.444 23:18:46 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:33:14.444 23:18:46 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:14.444 23:18:46 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:14.444 23:18:46 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:14.444 23:18:46 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:33:14.444 23:18:46 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:14.444 23:18:46 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:14.444 23:18:46 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:33:14.444 23:18:46 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:14.444 23:18:46 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:33:14.444 23:18:46 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:33:14.444 23:18:46 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:33:14.444 23:18:46 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:33:14.444 23:18:46 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:14.444 23:18:46 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:14.444 23:18:46 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:14.444 23:18:46 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:33:14.444 23:18:46 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:14.444 23:18:46 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:14.444 23:18:46 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:14.444 23:18:46 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:33:14.444 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:14.444 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.155 ms 00:33:14.444 00:33:14.444 --- 10.0.0.2 ping statistics --- 00:33:14.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:14.444 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:33:14.444 23:18:46 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:14.444 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:14.444 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.092 ms 00:33:14.444 00:33:14.444 --- 10.0.0.1 ping statistics --- 00:33:14.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:14.444 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:33:14.444 23:18:46 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:14.444 23:18:46 -- nvmf/common.sh@410 -- # return 0 00:33:14.444 23:18:46 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:33:14.444 23:18:46 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:14.444 23:18:46 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:33:14.444 23:18:46 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:33:14.444 23:18:46 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:14.444 23:18:46 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:33:14.444 23:18:46 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:33:14.444 23:18:46 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:33:14.444 23:18:46 -- common/autotest_common.sh@712 -- # xtrace_disable 00:33:14.444 23:18:46 -- common/autotest_common.sh@10 -- # set +x 00:33:14.444 23:18:46 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:33:14.444 23:18:46 -- common/autotest_common.sh@1509 -- # bdfs=() 00:33:14.444 23:18:46 -- common/autotest_common.sh@1509 -- # local bdfs 00:33:14.444 23:18:46 -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:33:14.444 23:18:46 -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:33:14.444 23:18:46 -- common/autotest_common.sh@1498 -- # bdfs=() 00:33:14.444 23:18:46 -- common/autotest_common.sh@1498 -- # local bdfs 00:33:14.444 23:18:46 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:33:14.444 23:18:46 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:33:14.444 23:18:46 -- 
common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:33:14.702 23:18:46 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:33:14.702 23:18:46 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:d8:00.0 00:33:14.702 23:18:46 -- common/autotest_common.sh@1512 -- # echo 0000:d8:00.0 00:33:14.702 23:18:46 -- target/identify_passthru.sh@16 -- # bdf=0000:d8:00.0 00:33:14.702 23:18:46 -- target/identify_passthru.sh@17 -- # '[' -z 0000:d8:00.0 ']' 00:33:14.702 23:18:46 -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:d8:00.0' -i 0 00:33:14.702 23:18:46 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:33:14.702 23:18:46 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:33:14.702 EAL: No free 2048 kB hugepages reported on node 1 00:33:19.969 23:18:51 -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLN916500W71P6AGN 00:33:19.969 23:18:51 -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:d8:00.0' -i 0 00:33:19.969 23:18:51 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:33:19.969 23:18:51 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:33:19.969 EAL: No free 2048 kB hugepages reported on node 1 00:33:24.158 23:18:56 -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:33:24.158 23:18:56 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:33:24.158 23:18:56 -- common/autotest_common.sh@718 -- # xtrace_disable 00:33:24.158 23:18:56 -- common/autotest_common.sh@10 -- # set +x 00:33:24.158 23:18:56 -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:33:24.158 23:18:56 -- common/autotest_common.sh@712 -- # xtrace_disable 00:33:24.158 23:18:56 -- common/autotest_common.sh@10 -- # set +x 00:33:24.158 23:18:56 -- target/identify_passthru.sh@31 -- # 
nvmfpid=3423241 00:33:24.158 23:18:56 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:24.158 23:18:56 -- target/identify_passthru.sh@35 -- # waitforlisten 3423241 00:33:24.158 23:18:56 -- common/autotest_common.sh@819 -- # '[' -z 3423241 ']' 00:33:24.158 23:18:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:24.158 23:18:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:33:24.158 23:18:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:24.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:24.158 23:18:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:33:24.158 23:18:56 -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:33:24.158 23:18:56 -- common/autotest_common.sh@10 -- # set +x 00:33:24.158 [2024-07-24 23:18:56.503811] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:33:24.158 [2024-07-24 23:18:56.503863] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:24.158 EAL: No free 2048 kB hugepages reported on node 1 00:33:24.158 [2024-07-24 23:18:56.577041] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:24.416 [2024-07-24 23:18:56.616246] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:33:24.416 [2024-07-24 23:18:56.616356] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:33:24.416 [2024-07-24 23:18:56.616365] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:24.416 [2024-07-24 23:18:56.616374] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:24.416 [2024-07-24 23:18:56.616419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:24.416 [2024-07-24 23:18:56.616593] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:33:24.416 [2024-07-24 23:18:56.616661] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:33:24.416 [2024-07-24 23:18:56.616663] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:24.983 23:18:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:33:24.983 23:18:57 -- common/autotest_common.sh@852 -- # return 0 00:33:24.983 23:18:57 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:33:24.983 23:18:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:24.983 23:18:57 -- common/autotest_common.sh@10 -- # set +x 00:33:24.983 INFO: Log level set to 20 00:33:24.983 INFO: Requests: 00:33:24.983 { 00:33:24.983 "jsonrpc": "2.0", 00:33:24.983 "method": "nvmf_set_config", 00:33:24.983 "id": 1, 00:33:24.983 "params": { 00:33:24.983 "admin_cmd_passthru": { 00:33:24.983 "identify_ctrlr": true 00:33:24.983 } 00:33:24.983 } 00:33:24.983 } 00:33:24.983 00:33:24.983 INFO: response: 00:33:24.983 { 00:33:24.983 "jsonrpc": "2.0", 00:33:24.983 "id": 1, 00:33:24.983 "result": true 00:33:24.983 } 00:33:24.983 00:33:24.983 23:18:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:24.983 23:18:57 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:33:24.983 23:18:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:24.983 23:18:57 -- common/autotest_common.sh@10 -- # set +x 00:33:24.983 INFO: Setting log level to 20 00:33:24.983 INFO: Setting log level to 20 
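The `nvmf_set_config` exchange captured above is plain JSON-RPC 2.0 carried over SPDK's UNIX domain socket (`/var/tmp/spdk.sock`). A minimal Python sketch of how such a request body is assembled; the `build_rpc_request` helper is illustrative only, not part of SPDK's `rpc.py`:

```python
import json

def build_rpc_request(method, params=None, req_id=1):
    # Assemble a JSON-RPC 2.0 request like the one logged above.
    req = {"jsonrpc": "2.0", "method": method, "id": req_id}
    if params is not None:
        req["params"] = params
    return req

# The nvmf_set_config call shown in the log, enabling passthru of
# Identify-Controller admin commands to the underlying NVMe device:
req = build_rpc_request("nvmf_set_config",
                        {"admin_cmd_passthru": {"identify_ctrlr": True}})
print(json.dumps(req, sort_keys=True))
```

The target answers with a matching `{"jsonrpc": "2.0", "id": 1, "result": true}` body, as the `INFO: response:` block in the trace shows.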
00:33:24.983 INFO: Log level set to 20 00:33:24.983 INFO: Log level set to 20 00:33:24.983 INFO: Requests: 00:33:24.983 { 00:33:24.983 "jsonrpc": "2.0", 00:33:24.983 "method": "framework_start_init", 00:33:24.983 "id": 1 00:33:24.983 } 00:33:24.983 00:33:24.983 INFO: Requests: 00:33:24.983 { 00:33:24.983 "jsonrpc": "2.0", 00:33:24.983 "method": "framework_start_init", 00:33:24.983 "id": 1 00:33:24.983 } 00:33:24.983 00:33:24.983 [2024-07-24 23:18:57.396195] nvmf_tgt.c: 423:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:33:24.983 INFO: response: 00:33:24.983 { 00:33:24.983 "jsonrpc": "2.0", 00:33:24.983 "id": 1, 00:33:24.983 "result": true 00:33:24.983 } 00:33:24.983 00:33:24.983 INFO: response: 00:33:24.983 { 00:33:24.983 "jsonrpc": "2.0", 00:33:24.983 "id": 1, 00:33:24.983 "result": true 00:33:24.983 } 00:33:24.983 00:33:24.983 23:18:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:24.983 23:18:57 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:24.983 23:18:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:24.983 23:18:57 -- common/autotest_common.sh@10 -- # set +x 00:33:24.983 INFO: Setting log level to 40 00:33:24.983 INFO: Setting log level to 40 00:33:24.983 INFO: Setting log level to 40 00:33:24.983 [2024-07-24 23:18:57.409568] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:25.241 23:18:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:25.241 23:18:57 -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:33:25.241 23:18:57 -- common/autotest_common.sh@718 -- # xtrace_disable 00:33:25.241 23:18:57 -- common/autotest_common.sh@10 -- # set +x 00:33:25.241 23:18:57 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:d8:00.0 00:33:25.241 23:18:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:25.241 23:18:57 -- common/autotest_common.sh@10 -- # set +x 
00:33:28.524 Nvme0n1 00:33:28.524 23:19:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:28.524 23:19:00 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:33:28.524 23:19:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:28.524 23:19:00 -- common/autotest_common.sh@10 -- # set +x 00:33:28.524 23:19:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:28.524 23:19:00 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:33:28.524 23:19:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:28.524 23:19:00 -- common/autotest_common.sh@10 -- # set +x 00:33:28.524 23:19:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:28.524 23:19:00 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:28.524 23:19:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:28.524 23:19:00 -- common/autotest_common.sh@10 -- # set +x 00:33:28.524 [2024-07-24 23:19:00.332710] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:28.524 23:19:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:28.524 23:19:00 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:33:28.524 23:19:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:28.524 23:19:00 -- common/autotest_common.sh@10 -- # set +x 00:33:28.524 [2024-07-24 23:19:00.340484] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:33:28.524 [ 00:33:28.524 { 00:33:28.524 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:33:28.524 "subtype": "Discovery", 00:33:28.524 "listen_addresses": [], 00:33:28.524 "allow_any_host": true, 00:33:28.524 "hosts": [] 00:33:28.524 }, 00:33:28.524 { 
00:33:28.524 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:33:28.524 "subtype": "NVMe", 00:33:28.524 "listen_addresses": [ 00:33:28.524 { 00:33:28.524 "transport": "TCP", 00:33:28.524 "trtype": "TCP", 00:33:28.524 "adrfam": "IPv4", 00:33:28.524 "traddr": "10.0.0.2", 00:33:28.524 "trsvcid": "4420" 00:33:28.524 } 00:33:28.524 ], 00:33:28.524 "allow_any_host": true, 00:33:28.524 "hosts": [], 00:33:28.524 "serial_number": "SPDK00000000000001", 00:33:28.524 "model_number": "SPDK bdev Controller", 00:33:28.524 "max_namespaces": 1, 00:33:28.524 "min_cntlid": 1, 00:33:28.524 "max_cntlid": 65519, 00:33:28.524 "namespaces": [ 00:33:28.524 { 00:33:28.524 "nsid": 1, 00:33:28.524 "bdev_name": "Nvme0n1", 00:33:28.524 "name": "Nvme0n1", 00:33:28.524 "nguid": "95848184AB4D443BBB6C8F794CFD2BA3", 00:33:28.524 "uuid": "95848184-ab4d-443b-bb6c-8f794cfd2ba3" 00:33:28.524 } 00:33:28.524 ] 00:33:28.524 } 00:33:28.524 ] 00:33:28.524 23:19:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:28.524 23:19:00 -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:33:28.524 23:19:00 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:33:28.524 23:19:00 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:33:28.524 EAL: No free 2048 kB hugepages reported on node 1 00:33:28.524 23:19:00 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLN916500W71P6AGN 00:33:28.524 23:19:00 -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:33:28.524 23:19:00 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:33:28.524 23:19:00 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:33:28.524 EAL: No free 2048 kB hugepages reported on node 1 00:33:28.524 
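Once the per-line timestamps are stripped, the `nvmf_get_subsystems` listing above is ordinary JSON. A short Python sketch of extracting the listener address from it; the JSON literal below is re-assembled from the log output, and the traversal logic is an illustration rather than anything SPDK ships:

```python
import json

# nvmf_get_subsystems output, re-assembled from the trace above.
subsystems = json.loads("""
[
  {"nqn": "nqn.2014-08.org.nvmexpress.discovery", "subtype": "Discovery",
   "listen_addresses": [], "allow_any_host": true, "hosts": []},
  {"nqn": "nqn.2016-06.io.spdk:cnode1", "subtype": "NVMe",
   "listen_addresses": [{"transport": "TCP", "trtype": "TCP",
                         "adrfam": "IPv4", "traddr": "10.0.0.2",
                         "trsvcid": "4420"}],
   "allow_any_host": true, "hosts": [],
   "serial_number": "SPDK00000000000001",
   "model_number": "SPDK bdev Controller",
   "max_namespaces": 1, "min_cntlid": 1, "max_cntlid": 65519,
   "namespaces": [{"nsid": 1, "bdev_name": "Nvme0n1", "name": "Nvme0n1",
                   "nguid": "95848184AB4D443BBB6C8F794CFD2BA3",
                   "uuid": "95848184-ab4d-443b-bb6c-8f794cfd2ba3"}]}
]
""")

# Locate the NVMe subsystem and report where it listens.
nvme = next(s for s in subsystems if s["subtype"] == "NVMe")
addr = nvme["listen_addresses"][0]
print(f'{nvme["nqn"]} listening on {addr["traddr"]}:{addr["trsvcid"]}')
```

This is the same address/port pair (10.0.0.2:4420) that the subsequent `spdk_nvme_identify -r ' trtype:tcp ... '` invocations in the trace connect to.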
23:19:00 -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:33:28.524 23:19:00 -- target/identify_passthru.sh@63 -- # '[' BTLN916500W71P6AGN '!=' BTLN916500W71P6AGN ']' 00:33:28.524 23:19:00 -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:33:28.524 23:19:00 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:28.524 23:19:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:28.524 23:19:00 -- common/autotest_common.sh@10 -- # set +x 00:33:28.524 23:19:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:28.524 23:19:00 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:33:28.524 23:19:00 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:33:28.524 23:19:00 -- nvmf/common.sh@476 -- # nvmfcleanup 00:33:28.524 23:19:00 -- nvmf/common.sh@116 -- # sync 00:33:28.524 23:19:00 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:33:28.524 23:19:00 -- nvmf/common.sh@119 -- # set +e 00:33:28.524 23:19:00 -- nvmf/common.sh@120 -- # for i in {1..20} 00:33:28.524 23:19:00 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:33:28.524 rmmod nvme_tcp 00:33:28.524 rmmod nvme_fabrics 00:33:28.524 rmmod nvme_keyring 00:33:28.524 23:19:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:33:28.524 23:19:00 -- nvmf/common.sh@123 -- # set -e 00:33:28.524 23:19:00 -- nvmf/common.sh@124 -- # return 0 00:33:28.524 23:19:00 -- nvmf/common.sh@477 -- # '[' -n 3423241 ']' 00:33:28.524 23:19:00 -- nvmf/common.sh@478 -- # killprocess 3423241 00:33:28.524 23:19:00 -- common/autotest_common.sh@926 -- # '[' -z 3423241 ']' 00:33:28.524 23:19:00 -- common/autotest_common.sh@930 -- # kill -0 3423241 00:33:28.524 23:19:00 -- common/autotest_common.sh@931 -- # uname 00:33:28.524 23:19:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:33:28.524 23:19:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3423241 00:33:28.783 23:19:00 -- 
common/autotest_common.sh@932 -- # process_name=reactor_0 00:33:28.783 23:19:00 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:33:28.784 23:19:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3423241' 00:33:28.784 killing process with pid 3423241 00:33:28.784 23:19:00 -- common/autotest_common.sh@945 -- # kill 3423241 00:33:28.784 [2024-07-24 23:19:00.970810] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:33:28.784 23:19:00 -- common/autotest_common.sh@950 -- # wait 3423241 00:33:30.735 23:19:03 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:33:30.735 23:19:03 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:33:30.735 23:19:03 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:33:30.735 23:19:03 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:30.735 23:19:03 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:33:30.735 23:19:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:30.735 23:19:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:30.735 23:19:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:33.272 23:19:05 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:33:33.272 00:33:33.272 real 0m25.139s 00:33:33.272 user 0m33.944s 00:33:33.272 sys 0m6.529s 00:33:33.272 23:19:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:33.272 23:19:05 -- common/autotest_common.sh@10 -- # set +x 00:33:33.272 ************************************ 00:33:33.272 END TEST nvmf_identify_passthru 00:33:33.272 ************************************ 00:33:33.272 23:19:05 -- spdk/autotest.sh@300 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:33:33.272 23:19:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:33:33.272 23:19:05 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:33:33.272 23:19:05 -- common/autotest_common.sh@10 -- # set +x 00:33:33.272 ************************************ 00:33:33.272 START TEST nvmf_dif 00:33:33.272 ************************************ 00:33:33.272 23:19:05 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:33:33.272 * Looking for test storage... 00:33:33.272 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:33.272 23:19:05 -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:33.272 23:19:05 -- nvmf/common.sh@7 -- # uname -s 00:33:33.272 23:19:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:33.272 23:19:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:33.272 23:19:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:33.272 23:19:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:33.272 23:19:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:33.272 23:19:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:33.272 23:19:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:33.272 23:19:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:33.272 23:19:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:33.272 23:19:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:33.272 23:19:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:33:33.272 23:19:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:33:33.272 23:19:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:33.272 23:19:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:33.272 23:19:05 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:33.272 23:19:05 -- nvmf/common.sh@44 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:33.272 23:19:05 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:33.272 23:19:05 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:33.272 23:19:05 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:33.272 23:19:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:33.272 23:19:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:33.272 23:19:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:33.272 23:19:05 -- paths/export.sh@5 -- # export PATH 00:33:33.272 23:19:05 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:33.272 23:19:05 -- nvmf/common.sh@46 -- # : 0 00:33:33.272 23:19:05 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:33:33.272 23:19:05 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:33:33.272 23:19:05 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:33:33.272 23:19:05 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:33.272 23:19:05 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:33.272 23:19:05 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:33:33.272 23:19:05 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:33:33.272 23:19:05 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:33:33.272 23:19:05 -- target/dif.sh@15 -- # NULL_META=16 00:33:33.272 23:19:05 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:33:33.272 23:19:05 -- target/dif.sh@15 -- # NULL_SIZE=64 00:33:33.272 23:19:05 -- target/dif.sh@15 -- # NULL_DIF=1 00:33:33.272 23:19:05 -- target/dif.sh@135 -- # nvmftestinit 00:33:33.272 23:19:05 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:33:33.272 23:19:05 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:33.272 23:19:05 -- nvmf/common.sh@436 -- # prepare_net_devs 00:33:33.272 23:19:05 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:33:33.272 23:19:05 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:33:33.272 23:19:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:33.272 23:19:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:33.272 23:19:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:33.272 23:19:05 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 
00:33:33.272 23:19:05 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:33:33.272 23:19:05 -- nvmf/common.sh@284 -- # xtrace_disable 00:33:33.272 23:19:05 -- common/autotest_common.sh@10 -- # set +x 00:33:39.844 23:19:11 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:33:39.844 23:19:11 -- nvmf/common.sh@290 -- # pci_devs=() 00:33:39.844 23:19:11 -- nvmf/common.sh@290 -- # local -a pci_devs 00:33:39.844 23:19:11 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:33:39.844 23:19:11 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:33:39.844 23:19:11 -- nvmf/common.sh@292 -- # pci_drivers=() 00:33:39.844 23:19:11 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:33:39.844 23:19:11 -- nvmf/common.sh@294 -- # net_devs=() 00:33:39.844 23:19:11 -- nvmf/common.sh@294 -- # local -ga net_devs 00:33:39.844 23:19:11 -- nvmf/common.sh@295 -- # e810=() 00:33:39.844 23:19:11 -- nvmf/common.sh@295 -- # local -ga e810 00:33:39.844 23:19:11 -- nvmf/common.sh@296 -- # x722=() 00:33:39.844 23:19:11 -- nvmf/common.sh@296 -- # local -ga x722 00:33:39.844 23:19:11 -- nvmf/common.sh@297 -- # mlx=() 00:33:39.844 23:19:11 -- nvmf/common.sh@297 -- # local -ga mlx 00:33:39.844 23:19:11 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:39.844 23:19:11 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:39.844 23:19:11 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:39.844 23:19:11 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:39.844 23:19:11 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:39.844 23:19:11 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:39.844 23:19:11 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:39.844 23:19:11 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:39.844 23:19:11 -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:39.844 23:19:11 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:39.844 23:19:11 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:39.844 23:19:11 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:33:39.844 23:19:11 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:33:39.844 23:19:11 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:33:39.844 23:19:11 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:33:39.844 23:19:11 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:33:39.844 23:19:11 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:33:39.844 23:19:11 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:33:39.844 23:19:11 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:33:39.845 Found 0000:af:00.0 (0x8086 - 0x159b) 00:33:39.845 23:19:11 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:33:39.845 23:19:11 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:33:39.845 23:19:11 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:39.845 23:19:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:39.845 23:19:11 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:33:39.845 23:19:11 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:33:39.845 23:19:11 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:33:39.845 Found 0000:af:00.1 (0x8086 - 0x159b) 00:33:39.845 23:19:11 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:33:39.845 23:19:11 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:33:39.845 23:19:11 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:39.845 23:19:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:39.845 23:19:11 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:33:39.845 23:19:11 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:33:39.845 23:19:11 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:33:39.845 23:19:11 -- nvmf/common.sh@371 -- # [[ tcp == 
rdma ]] 00:33:39.845 23:19:11 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:33:39.845 23:19:11 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:39.845 23:19:11 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:33:39.845 23:19:11 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:39.845 23:19:11 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:33:39.845 Found net devices under 0000:af:00.0: cvl_0_0 00:33:39.845 23:19:11 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:33:39.845 23:19:11 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:33:39.845 23:19:11 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:39.845 23:19:11 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:33:39.845 23:19:11 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:39.845 23:19:11 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:33:39.845 Found net devices under 0000:af:00.1: cvl_0_1 00:33:39.845 23:19:11 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:33:39.845 23:19:11 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:33:39.845 23:19:11 -- nvmf/common.sh@402 -- # is_hw=yes 00:33:39.845 23:19:11 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:33:39.845 23:19:11 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:33:39.845 23:19:11 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:33:39.845 23:19:11 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:39.845 23:19:11 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:39.845 23:19:11 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:39.845 23:19:11 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:33:39.845 23:19:11 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:39.845 23:19:11 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:39.845 23:19:11 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 
00:33:39.845 23:19:11 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:39.845 23:19:11 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:39.845 23:19:11 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:33:39.845 23:19:11 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:33:39.845 23:19:11 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:33:39.845 23:19:11 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:39.845 23:19:11 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:39.845 23:19:11 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:39.845 23:19:11 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:33:39.845 23:19:11 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:39.845 23:19:11 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:39.845 23:19:11 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:39.845 23:19:11 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:33:39.845 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:39.845 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:33:39.845 00:33:39.845 --- 10.0.0.2 ping statistics --- 00:33:39.845 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:39.845 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:33:39.845 23:19:11 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:39.845 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:39.845 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.262 ms 00:33:39.845 00:33:39.845 --- 10.0.0.1 ping statistics --- 00:33:39.845 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:39.845 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:33:39.845 23:19:11 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:39.845 23:19:11 -- nvmf/common.sh@410 -- # return 0 00:33:39.845 23:19:11 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:33:39.845 23:19:11 -- nvmf/common.sh@439 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:43.136 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:33:43.136 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:33:43.136 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:33:43.136 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:33:43.136 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:33:43.136 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:33:43.136 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:33:43.136 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:33:43.136 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:33:43.136 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:33:43.136 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:33:43.137 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:33:43.137 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:33:43.137 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:33:43.137 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:33:43.137 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:33:43.137 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:33:43.137 23:19:15 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:43.137 23:19:15 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 
00:33:43.137 23:19:15 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:33:43.137 23:19:15 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:43.137 23:19:15 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:33:43.137 23:19:15 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:33:43.137 23:19:15 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:33:43.137 23:19:15 -- target/dif.sh@137 -- # nvmfappstart 00:33:43.137 23:19:15 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:33:43.137 23:19:15 -- common/autotest_common.sh@712 -- # xtrace_disable 00:33:43.137 23:19:15 -- common/autotest_common.sh@10 -- # set +x 00:33:43.137 23:19:15 -- nvmf/common.sh@469 -- # nvmfpid=3429331 00:33:43.137 23:19:15 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:33:43.137 23:19:15 -- nvmf/common.sh@470 -- # waitforlisten 3429331 00:33:43.137 23:19:15 -- common/autotest_common.sh@819 -- # '[' -z 3429331 ']' 00:33:43.137 23:19:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:43.137 23:19:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:33:43.137 23:19:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:43.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:43.137 23:19:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:33:43.137 23:19:15 -- common/autotest_common.sh@10 -- # set +x 00:33:43.137 [2024-07-24 23:19:15.207789] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:33:43.137 [2024-07-24 23:19:15.207837] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:43.137 EAL: No free 2048 kB hugepages reported on node 1 00:33:43.137 [2024-07-24 23:19:15.284979] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:43.137 [2024-07-24 23:19:15.322941] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:33:43.137 [2024-07-24 23:19:15.323051] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:43.137 [2024-07-24 23:19:15.323061] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:43.137 [2024-07-24 23:19:15.323070] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:43.137 [2024-07-24 23:19:15.323089] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:43.706 23:19:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:33:43.706 23:19:15 -- common/autotest_common.sh@852 -- # return 0 00:33:43.706 23:19:15 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:33:43.706 23:19:15 -- common/autotest_common.sh@718 -- # xtrace_disable 00:33:43.706 23:19:15 -- common/autotest_common.sh@10 -- # set +x 00:33:43.706 23:19:16 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:43.706 23:19:16 -- target/dif.sh@139 -- # create_transport 00:33:43.706 23:19:16 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:33:43.706 23:19:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:43.706 23:19:16 -- common/autotest_common.sh@10 -- # set +x 00:33:43.706 [2024-07-24 23:19:16.033215] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:33:43.706 23:19:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:43.706 23:19:16 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:33:43.706 23:19:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:33:43.706 23:19:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:43.706 23:19:16 -- common/autotest_common.sh@10 -- # set +x 00:33:43.706 ************************************ 00:33:43.706 START TEST fio_dif_1_default 00:33:43.706 ************************************ 00:33:43.706 23:19:16 -- common/autotest_common.sh@1104 -- # fio_dif_1 00:33:43.706 23:19:16 -- target/dif.sh@86 -- # create_subsystems 0 00:33:43.706 23:19:16 -- target/dif.sh@28 -- # local sub 00:33:43.706 23:19:16 -- target/dif.sh@30 -- # for sub in "$@" 00:33:43.706 23:19:16 -- target/dif.sh@31 -- # create_subsystem 0 00:33:43.706 23:19:16 -- target/dif.sh@18 -- # local sub_id=0 00:33:43.706 23:19:16 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:43.706 23:19:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:43.706 23:19:16 -- common/autotest_common.sh@10 -- # set +x 00:33:43.706 bdev_null0 00:33:43.706 23:19:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:43.706 23:19:16 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:43.706 23:19:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:43.706 23:19:16 -- common/autotest_common.sh@10 -- # set +x 00:33:43.706 23:19:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:43.706 23:19:16 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:43.706 23:19:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:43.706 23:19:16 -- common/autotest_common.sh@10 -- # set +x 00:33:43.706 23:19:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:43.706 23:19:16 -- target/dif.sh@24 -- # 
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:43.706 23:19:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:43.706 23:19:16 -- common/autotest_common.sh@10 -- # set +x 00:33:43.706 [2024-07-24 23:19:16.077473] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:43.706 23:19:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:43.706 23:19:16 -- target/dif.sh@87 -- # fio /dev/fd/62 00:33:43.706 23:19:16 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:43.706 23:19:16 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:43.706 23:19:16 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:33:43.706 23:19:16 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:33:43.706 23:19:16 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:43.706 23:19:16 -- common/autotest_common.sh@1318 -- # local sanitizers 00:33:43.706 23:19:16 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:43.706 23:19:16 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:43.706 23:19:16 -- common/autotest_common.sh@1320 -- # shift 00:33:43.706 23:19:16 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:33:43.706 23:19:16 -- nvmf/common.sh@520 -- # config=() 00:33:43.706 23:19:16 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:33:43.706 23:19:16 -- target/dif.sh@82 -- # gen_fio_conf 00:33:43.706 23:19:16 -- nvmf/common.sh@520 -- # local subsystem config 00:33:43.706 23:19:16 -- target/dif.sh@54 -- # local file 00:33:43.706 23:19:16 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:33:43.706 23:19:16 -- target/dif.sh@56 -- # cat 
00:33:43.706 23:19:16 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:33:43.706 { 00:33:43.706 "params": { 00:33:43.706 "name": "Nvme$subsystem", 00:33:43.706 "trtype": "$TEST_TRANSPORT", 00:33:43.706 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:43.706 "adrfam": "ipv4", 00:33:43.706 "trsvcid": "$NVMF_PORT", 00:33:43.706 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:43.706 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:43.706 "hdgst": ${hdgst:-false}, 00:33:43.706 "ddgst": ${ddgst:-false} 00:33:43.706 }, 00:33:43.706 "method": "bdev_nvme_attach_controller" 00:33:43.706 } 00:33:43.706 EOF 00:33:43.706 )") 00:33:43.706 23:19:16 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:43.706 23:19:16 -- nvmf/common.sh@542 -- # cat 00:33:43.706 23:19:16 -- common/autotest_common.sh@1324 -- # grep libasan 00:33:43.706 23:19:16 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:33:43.706 23:19:16 -- target/dif.sh@72 -- # (( file = 1 )) 00:33:43.706 23:19:16 -- target/dif.sh@72 -- # (( file <= files )) 00:33:43.706 23:19:16 -- nvmf/common.sh@544 -- # jq . 
00:33:43.706 23:19:16 -- nvmf/common.sh@545 -- # IFS=, 00:33:43.706 23:19:16 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:33:43.706 "params": { 00:33:43.706 "name": "Nvme0", 00:33:43.706 "trtype": "tcp", 00:33:43.706 "traddr": "10.0.0.2", 00:33:43.706 "adrfam": "ipv4", 00:33:43.706 "trsvcid": "4420", 00:33:43.706 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:43.706 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:43.706 "hdgst": false, 00:33:43.706 "ddgst": false 00:33:43.706 }, 00:33:43.706 "method": "bdev_nvme_attach_controller" 00:33:43.706 }' 00:33:43.706 23:19:16 -- common/autotest_common.sh@1324 -- # asan_lib= 00:33:43.706 23:19:16 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:33:43.706 23:19:16 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:33:43.706 23:19:16 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:43.706 23:19:16 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:33:43.706 23:19:16 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:33:43.977 23:19:16 -- common/autotest_common.sh@1324 -- # asan_lib= 00:33:43.977 23:19:16 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:33:43.977 23:19:16 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:43.977 23:19:16 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:44.236 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:44.236 fio-3.35 00:33:44.236 Starting 1 thread 00:33:44.236 EAL: No free 2048 kB hugepages reported on node 1 00:33:44.801 [2024-07-24 23:19:16.998922] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:33:44.801 [2024-07-24 23:19:16.998969] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:33:54.757 00:33:54.757 filename0: (groupid=0, jobs=1): err= 0: pid=3429760: Wed Jul 24 23:19:27 2024 00:33:54.757 read: IOPS=188, BW=755KiB/s (773kB/s)(7568KiB/10028msec) 00:33:54.757 slat (nsec): min=5667, max=32966, avg=5920.19, stdev=1019.01 00:33:54.757 clat (usec): min=705, max=46551, avg=21183.60, stdev=20319.56 00:33:54.757 lat (usec): min=711, max=46577, avg=21189.52, stdev=20319.52 00:33:54.757 clat percentiles (usec): 00:33:54.757 | 1.00th=[ 799], 5.00th=[ 816], 10.00th=[ 816], 20.00th=[ 824], 00:33:54.757 | 30.00th=[ 832], 40.00th=[ 840], 50.00th=[41157], 60.00th=[41157], 00:33:54.757 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:33:54.757 | 99.00th=[42206], 99.50th=[42206], 99.90th=[46400], 99.95th=[46400], 00:33:54.757 | 99.99th=[46400] 00:33:54.757 bw ( KiB/s): min= 704, max= 768, per=100.00%, avg=755.20, stdev=26.27, samples=20 00:33:54.757 iops : min= 176, max= 192, avg=188.80, stdev= 6.57, samples=20 00:33:54.757 lat (usec) : 750=0.21%, 1000=49.47% 00:33:54.757 lat (msec) : 2=0.21%, 50=50.11% 00:33:54.757 cpu : usr=86.34%, sys=13.41%, ctx=14, majf=0, minf=268 00:33:54.757 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:54.757 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.757 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.757 issued rwts: total=1892,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:54.757 latency : target=0, window=0, percentile=100.00%, depth=4 00:33:54.757 00:33:54.757 Run status group 0 (all jobs): 00:33:54.757 READ: bw=755KiB/s (773kB/s), 755KiB/s-755KiB/s (773kB/s-773kB/s), io=7568KiB (7750kB), run=10028-10028msec 00:33:55.015 23:19:27 -- target/dif.sh@88 -- # destroy_subsystems 0 00:33:55.015 23:19:27 -- target/dif.sh@43 -- # local sub 00:33:55.015 23:19:27 -- 
target/dif.sh@45 -- # for sub in "$@" 00:33:55.015 23:19:27 -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:55.015 23:19:27 -- target/dif.sh@36 -- # local sub_id=0 00:33:55.015 23:19:27 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:55.015 23:19:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:55.015 23:19:27 -- common/autotest_common.sh@10 -- # set +x 00:33:55.015 23:19:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:55.015 23:19:27 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:55.015 23:19:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:55.015 23:19:27 -- common/autotest_common.sh@10 -- # set +x 00:33:55.015 23:19:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:55.015 00:33:55.015 real 0m11.271s 00:33:55.015 user 0m17.123s 00:33:55.015 sys 0m1.747s 00:33:55.015 23:19:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:55.015 23:19:27 -- common/autotest_common.sh@10 -- # set +x 00:33:55.015 ************************************ 00:33:55.015 END TEST fio_dif_1_default 00:33:55.015 ************************************ 00:33:55.015 23:19:27 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:33:55.015 23:19:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:33:55.015 23:19:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:55.015 23:19:27 -- common/autotest_common.sh@10 -- # set +x 00:33:55.015 ************************************ 00:33:55.015 START TEST fio_dif_1_multi_subsystems 00:33:55.015 ************************************ 00:33:55.015 23:19:27 -- common/autotest_common.sh@1104 -- # fio_dif_1_multi_subsystems 00:33:55.015 23:19:27 -- target/dif.sh@92 -- # local files=1 00:33:55.015 23:19:27 -- target/dif.sh@94 -- # create_subsystems 0 1 00:33:55.015 23:19:27 -- target/dif.sh@28 -- # local sub 00:33:55.015 23:19:27 -- target/dif.sh@30 -- # for sub in "$@" 00:33:55.015 23:19:27 
-- target/dif.sh@31 -- # create_subsystem 0 00:33:55.015 23:19:27 -- target/dif.sh@18 -- # local sub_id=0 00:33:55.015 23:19:27 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:55.015 23:19:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:55.015 23:19:27 -- common/autotest_common.sh@10 -- # set +x 00:33:55.015 bdev_null0 00:33:55.015 23:19:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:55.015 23:19:27 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:55.015 23:19:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:55.015 23:19:27 -- common/autotest_common.sh@10 -- # set +x 00:33:55.015 23:19:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:55.015 23:19:27 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:55.015 23:19:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:55.015 23:19:27 -- common/autotest_common.sh@10 -- # set +x 00:33:55.015 23:19:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:55.015 23:19:27 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:55.015 23:19:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:55.015 23:19:27 -- common/autotest_common.sh@10 -- # set +x 00:33:55.015 [2024-07-24 23:19:27.387620] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:55.015 23:19:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:55.015 23:19:27 -- target/dif.sh@30 -- # for sub in "$@" 00:33:55.015 23:19:27 -- target/dif.sh@31 -- # create_subsystem 1 00:33:55.015 23:19:27 -- target/dif.sh@18 -- # local sub_id=1 00:33:55.015 23:19:27 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:33:55.015 23:19:27 -- common/autotest_common.sh@551 
-- # xtrace_disable 00:33:55.015 23:19:27 -- common/autotest_common.sh@10 -- # set +x 00:33:55.015 bdev_null1 00:33:55.015 23:19:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:55.015 23:19:27 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:33:55.015 23:19:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:55.015 23:19:27 -- common/autotest_common.sh@10 -- # set +x 00:33:55.015 23:19:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:55.015 23:19:27 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:33:55.015 23:19:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:55.015 23:19:27 -- common/autotest_common.sh@10 -- # set +x 00:33:55.015 23:19:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:55.015 23:19:27 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:55.015 23:19:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:55.015 23:19:27 -- common/autotest_common.sh@10 -- # set +x 00:33:55.016 23:19:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:55.016 23:19:27 -- target/dif.sh@95 -- # fio /dev/fd/62 00:33:55.016 23:19:27 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:55.016 23:19:27 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:55.016 23:19:27 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:33:55.016 23:19:27 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:55.016 23:19:27 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:33:55.016 23:19:27 -- common/autotest_common.sh@1318 -- # local sanitizers 00:33:55.016 23:19:27 -- 
common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:55.016 23:19:27 -- common/autotest_common.sh@1320 -- # shift 00:33:55.016 23:19:27 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:33:55.016 23:19:27 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:33:55.016 23:19:27 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:33:55.016 23:19:27 -- target/dif.sh@82 -- # gen_fio_conf 00:33:55.016 23:19:27 -- nvmf/common.sh@520 -- # config=() 00:33:55.016 23:19:27 -- target/dif.sh@54 -- # local file 00:33:55.016 23:19:27 -- nvmf/common.sh@520 -- # local subsystem config 00:33:55.016 23:19:27 -- target/dif.sh@56 -- # cat 00:33:55.016 23:19:27 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:33:55.016 23:19:27 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:33:55.016 { 00:33:55.016 "params": { 00:33:55.016 "name": "Nvme$subsystem", 00:33:55.016 "trtype": "$TEST_TRANSPORT", 00:33:55.016 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:55.016 "adrfam": "ipv4", 00:33:55.016 "trsvcid": "$NVMF_PORT", 00:33:55.016 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:55.016 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:55.016 "hdgst": ${hdgst:-false}, 00:33:55.016 "ddgst": ${ddgst:-false} 00:33:55.016 }, 00:33:55.016 "method": "bdev_nvme_attach_controller" 00:33:55.016 } 00:33:55.016 EOF 00:33:55.016 )") 00:33:55.016 23:19:27 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:55.016 23:19:27 -- nvmf/common.sh@542 -- # cat 00:33:55.016 23:19:27 -- common/autotest_common.sh@1324 -- # grep libasan 00:33:55.016 23:19:27 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:33:55.016 23:19:27 -- target/dif.sh@72 -- # (( file = 1 )) 00:33:55.016 23:19:27 -- target/dif.sh@72 -- # (( file <= files )) 00:33:55.016 23:19:27 -- target/dif.sh@73 -- # cat 00:33:55.016 23:19:27 -- 
nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:33:55.016 23:19:27 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:33:55.016 { 00:33:55.016 "params": { 00:33:55.016 "name": "Nvme$subsystem", 00:33:55.016 "trtype": "$TEST_TRANSPORT", 00:33:55.016 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:55.016 "adrfam": "ipv4", 00:33:55.016 "trsvcid": "$NVMF_PORT", 00:33:55.016 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:55.016 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:55.016 "hdgst": ${hdgst:-false}, 00:33:55.016 "ddgst": ${ddgst:-false} 00:33:55.016 }, 00:33:55.016 "method": "bdev_nvme_attach_controller" 00:33:55.016 } 00:33:55.016 EOF 00:33:55.016 )") 00:33:55.016 23:19:27 -- target/dif.sh@72 -- # (( file++ )) 00:33:55.016 23:19:27 -- target/dif.sh@72 -- # (( file <= files )) 00:33:55.016 23:19:27 -- nvmf/common.sh@542 -- # cat 00:33:55.016 23:19:27 -- nvmf/common.sh@544 -- # jq . 00:33:55.283 23:19:27 -- nvmf/common.sh@545 -- # IFS=, 00:33:55.283 23:19:27 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:33:55.283 "params": { 00:33:55.283 "name": "Nvme0", 00:33:55.283 "trtype": "tcp", 00:33:55.283 "traddr": "10.0.0.2", 00:33:55.283 "adrfam": "ipv4", 00:33:55.283 "trsvcid": "4420", 00:33:55.283 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:55.283 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:55.283 "hdgst": false, 00:33:55.283 "ddgst": false 00:33:55.283 }, 00:33:55.283 "method": "bdev_nvme_attach_controller" 00:33:55.283 },{ 00:33:55.283 "params": { 00:33:55.283 "name": "Nvme1", 00:33:55.283 "trtype": "tcp", 00:33:55.283 "traddr": "10.0.0.2", 00:33:55.283 "adrfam": "ipv4", 00:33:55.283 "trsvcid": "4420", 00:33:55.283 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:55.283 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:55.283 "hdgst": false, 00:33:55.283 "ddgst": false 00:33:55.283 }, 00:33:55.283 "method": "bdev_nvme_attach_controller" 00:33:55.283 }' 00:33:55.283 23:19:27 -- common/autotest_common.sh@1324 -- # asan_lib= 00:33:55.283 23:19:27 -- 
common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:33:55.283 23:19:27 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:33:55.283 23:19:27 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:55.283 23:19:27 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:33:55.283 23:19:27 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:33:55.283 23:19:27 -- common/autotest_common.sh@1324 -- # asan_lib= 00:33:55.283 23:19:27 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:33:55.283 23:19:27 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:55.283 23:19:27 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:55.543 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:55.543 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:55.543 fio-3.35 00:33:55.543 Starting 2 threads 00:33:55.543 EAL: No free 2048 kB hugepages reported on node 1 00:33:56.110 [2024-07-24 23:19:28.475454] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:33:56.110 [2024-07-24 23:19:28.475504] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:34:08.354 00:34:08.354 filename0: (groupid=0, jobs=1): err= 0: pid=3431784: Wed Jul 24 23:19:38 2024 00:34:08.354 read: IOPS=96, BW=385KiB/s (394kB/s)(3856KiB/10009msec) 00:34:08.354 slat (nsec): min=5666, max=25362, avg=7308.10, stdev=2454.86 00:34:08.354 clat (usec): min=40867, max=43028, avg=41508.41, stdev=517.17 00:34:08.354 lat (usec): min=40873, max=43039, avg=41515.72, stdev=517.33 00:34:08.354 clat percentiles (usec): 00:34:08.354 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:34:08.354 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[41681], 00:34:08.354 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:08.354 | 99.00th=[42730], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:34:08.354 | 99.99th=[43254] 00:34:08.354 bw ( KiB/s): min= 352, max= 416, per=33.69%, avg=384.00, stdev=14.68, samples=20 00:34:08.354 iops : min= 88, max= 104, avg=96.00, stdev= 3.67, samples=20 00:34:08.354 lat (msec) : 50=100.00% 00:34:08.354 cpu : usr=94.68%, sys=5.10%, ctx=10, majf=0, minf=212 00:34:08.354 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:08.354 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.354 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.354 issued rwts: total=964,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:08.354 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:08.354 filename1: (groupid=0, jobs=1): err= 0: pid=3431785: Wed Jul 24 23:19:38 2024 00:34:08.354 read: IOPS=188, BW=754KiB/s (773kB/s)(7552KiB/10010msec) 00:34:08.354 slat (nsec): min=5643, max=25296, avg=6690.24, stdev=1840.41 00:34:08.354 clat (usec): min=464, max=42722, avg=21188.06, stdev=20174.04 00:34:08.354 lat (usec): min=470, max=42747, avg=21194.75, stdev=20173.52 00:34:08.354 
clat percentiles (usec): 00:34:08.354 | 1.00th=[ 611], 5.00th=[ 824], 10.00th=[ 832], 20.00th=[ 840], 00:34:08.354 | 30.00th=[ 848], 40.00th=[ 857], 50.00th=[41157], 60.00th=[41157], 00:34:08.354 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:34:08.354 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:34:08.354 | 99.99th=[42730] 00:34:08.354 bw ( KiB/s): min= 672, max= 768, per=66.07%, avg=753.60, stdev=26.42, samples=20 00:34:08.354 iops : min= 168, max= 192, avg=188.40, stdev= 6.60, samples=20 00:34:08.354 lat (usec) : 500=0.21%, 750=1.48%, 1000=47.46% 00:34:08.354 lat (msec) : 2=0.42%, 50=50.42% 00:34:08.354 cpu : usr=94.15%, sys=5.62%, ctx=14, majf=0, minf=84 00:34:08.354 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:08.354 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.354 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.354 issued rwts: total=1888,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:08.354 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:08.354 00:34:08.354 Run status group 0 (all jobs): 00:34:08.354 READ: bw=1140KiB/s (1167kB/s), 385KiB/s-754KiB/s (394kB/s-773kB/s), io=11.1MiB (11.7MB), run=10009-10010msec 00:34:08.354 23:19:38 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:34:08.354 23:19:38 -- target/dif.sh@43 -- # local sub 00:34:08.354 23:19:38 -- target/dif.sh@45 -- # for sub in "$@" 00:34:08.354 23:19:38 -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:08.354 23:19:38 -- target/dif.sh@36 -- # local sub_id=0 00:34:08.354 23:19:38 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:08.354 23:19:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:08.354 23:19:38 -- common/autotest_common.sh@10 -- # set +x 00:34:08.354 23:19:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:08.354 23:19:38 -- target/dif.sh@39 -- # rpc_cmd 
bdev_null_delete bdev_null0 00:34:08.354 23:19:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:08.354 23:19:38 -- common/autotest_common.sh@10 -- # set +x 00:34:08.354 23:19:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:08.354 23:19:38 -- target/dif.sh@45 -- # for sub in "$@" 00:34:08.354 23:19:38 -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:08.354 23:19:38 -- target/dif.sh@36 -- # local sub_id=1 00:34:08.354 23:19:38 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:08.354 23:19:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:08.354 23:19:38 -- common/autotest_common.sh@10 -- # set +x 00:34:08.354 23:19:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:08.354 23:19:38 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:08.354 23:19:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:08.354 23:19:38 -- common/autotest_common.sh@10 -- # set +x 00:34:08.354 23:19:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:08.354 00:34:08.354 real 0m11.437s 00:34:08.354 user 0m27.944s 00:34:08.354 sys 0m1.448s 00:34:08.354 23:19:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:08.354 23:19:38 -- common/autotest_common.sh@10 -- # set +x 00:34:08.354 ************************************ 00:34:08.354 END TEST fio_dif_1_multi_subsystems 00:34:08.354 ************************************ 00:34:08.354 23:19:38 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:34:08.354 23:19:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:34:08.354 23:19:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:34:08.354 23:19:38 -- common/autotest_common.sh@10 -- # set +x 00:34:08.354 ************************************ 00:34:08.354 START TEST fio_dif_rand_params 00:34:08.354 ************************************ 00:34:08.354 23:19:38 -- common/autotest_common.sh@1104 -- # fio_dif_rand_params 00:34:08.354 23:19:38 
-- target/dif.sh@100 -- # local NULL_DIF 00:34:08.354 23:19:38 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:34:08.354 23:19:38 -- target/dif.sh@103 -- # NULL_DIF=3 00:34:08.354 23:19:38 -- target/dif.sh@103 -- # bs=128k 00:34:08.354 23:19:38 -- target/dif.sh@103 -- # numjobs=3 00:34:08.354 23:19:38 -- target/dif.sh@103 -- # iodepth=3 00:34:08.354 23:19:38 -- target/dif.sh@103 -- # runtime=5 00:34:08.354 23:19:38 -- target/dif.sh@105 -- # create_subsystems 0 00:34:08.354 23:19:38 -- target/dif.sh@28 -- # local sub 00:34:08.354 23:19:38 -- target/dif.sh@30 -- # for sub in "$@" 00:34:08.354 23:19:38 -- target/dif.sh@31 -- # create_subsystem 0 00:34:08.354 23:19:38 -- target/dif.sh@18 -- # local sub_id=0 00:34:08.354 23:19:38 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:34:08.354 23:19:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:08.354 23:19:38 -- common/autotest_common.sh@10 -- # set +x 00:34:08.354 bdev_null0 00:34:08.354 23:19:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:08.354 23:19:38 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:08.354 23:19:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:08.354 23:19:38 -- common/autotest_common.sh@10 -- # set +x 00:34:08.354 23:19:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:08.354 23:19:38 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:08.354 23:19:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:08.355 23:19:38 -- common/autotest_common.sh@10 -- # set +x 00:34:08.355 23:19:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:08.355 23:19:38 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:08.355 23:19:38 -- common/autotest_common.sh@551 -- # xtrace_disable 
00:34:08.355 23:19:38 -- common/autotest_common.sh@10 -- # set +x 00:34:08.355 [2024-07-24 23:19:38.873431] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:08.355 23:19:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:08.355 23:19:38 -- target/dif.sh@106 -- # fio /dev/fd/62 00:34:08.355 23:19:38 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:08.355 23:19:38 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:08.355 23:19:38 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:34:08.355 23:19:38 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:34:08.355 23:19:38 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:08.355 23:19:38 -- common/autotest_common.sh@1318 -- # local sanitizers 00:34:08.355 23:19:38 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:08.355 23:19:38 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:08.355 23:19:38 -- common/autotest_common.sh@1320 -- # shift 00:34:08.355 23:19:38 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:34:08.355 23:19:38 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:34:08.355 23:19:38 -- nvmf/common.sh@520 -- # config=() 00:34:08.355 23:19:38 -- target/dif.sh@82 -- # gen_fio_conf 00:34:08.355 23:19:38 -- nvmf/common.sh@520 -- # local subsystem config 00:34:08.355 23:19:38 -- target/dif.sh@54 -- # local file 00:34:08.355 23:19:38 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:34:08.355 23:19:38 -- target/dif.sh@56 -- # cat 00:34:08.355 23:19:38 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:34:08.355 { 00:34:08.355 "params": { 00:34:08.355 "name": "Nvme$subsystem", 00:34:08.355 
"trtype": "$TEST_TRANSPORT", 00:34:08.355 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:08.355 "adrfam": "ipv4", 00:34:08.355 "trsvcid": "$NVMF_PORT", 00:34:08.355 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:08.355 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:08.355 "hdgst": ${hdgst:-false}, 00:34:08.355 "ddgst": ${ddgst:-false} 00:34:08.355 }, 00:34:08.355 "method": "bdev_nvme_attach_controller" 00:34:08.355 } 00:34:08.355 EOF 00:34:08.355 )") 00:34:08.355 23:19:38 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:08.355 23:19:38 -- nvmf/common.sh@542 -- # cat 00:34:08.355 23:19:38 -- common/autotest_common.sh@1324 -- # grep libasan 00:34:08.355 23:19:38 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:34:08.355 23:19:38 -- target/dif.sh@72 -- # (( file = 1 )) 00:34:08.355 23:19:38 -- target/dif.sh@72 -- # (( file <= files )) 00:34:08.355 23:19:38 -- nvmf/common.sh@544 -- # jq . 00:34:08.355 23:19:38 -- nvmf/common.sh@545 -- # IFS=, 00:34:08.355 23:19:38 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:34:08.355 "params": { 00:34:08.355 "name": "Nvme0", 00:34:08.355 "trtype": "tcp", 00:34:08.355 "traddr": "10.0.0.2", 00:34:08.355 "adrfam": "ipv4", 00:34:08.355 "trsvcid": "4420", 00:34:08.355 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:08.355 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:08.355 "hdgst": false, 00:34:08.355 "ddgst": false 00:34:08.355 }, 00:34:08.355 "method": "bdev_nvme_attach_controller" 00:34:08.355 }' 00:34:08.355 23:19:38 -- common/autotest_common.sh@1324 -- # asan_lib= 00:34:08.355 23:19:38 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:34:08.355 23:19:38 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:34:08.355 23:19:38 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:08.355 23:19:38 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 
00:34:08.355 23:19:38 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:34:08.355 23:19:38 -- common/autotest_common.sh@1324 -- # asan_lib= 00:34:08.355 23:19:38 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:34:08.355 23:19:38 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:08.355 23:19:38 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:08.355 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:34:08.355 ... 00:34:08.355 fio-3.35 00:34:08.355 Starting 3 threads 00:34:08.355 EAL: No free 2048 kB hugepages reported on node 1 00:34:08.355 [2024-07-24 23:19:39.695309] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:34:08.355 [2024-07-24 23:19:39.695361] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:34:12.541 00:34:12.541 filename0: (groupid=0, jobs=1): err= 0: pid=3433796: Wed Jul 24 23:19:44 2024 00:34:12.541 read: IOPS=357, BW=44.7MiB/s (46.8MB/s)(224MiB/5004msec) 00:34:12.541 slat (nsec): min=3933, max=31584, avg=8351.06, stdev=2241.92 00:34:12.541 clat (usec): min=3761, max=87860, avg=8383.11, stdev=9379.56 00:34:12.541 lat (usec): min=3769, max=87868, avg=8391.46, stdev=9379.71 00:34:12.541 clat percentiles (usec): 00:34:12.541 | 1.00th=[ 3949], 5.00th=[ 4293], 10.00th=[ 4555], 20.00th=[ 4948], 00:34:12.541 | 30.00th=[ 5276], 40.00th=[ 5735], 50.00th=[ 6259], 60.00th=[ 6718], 00:34:12.541 | 70.00th=[ 7111], 80.00th=[ 7898], 90.00th=[ 8979], 95.00th=[16581], 00:34:12.541 | 99.00th=[48497], 99.50th=[49546], 99.90th=[87557], 99.95th=[87557], 00:34:12.541 | 99.99th=[87557] 00:34:12.541 bw ( KiB/s): min=38400, max=48896, per=46.67%, avg=44913.78, stdev=3410.93, samples=9 00:34:12.541 iops : min= 300, max= 382, 
avg=350.89, stdev=26.65, samples=9 00:34:12.541 lat (msec) : 4=1.40%, 10=92.56%, 20=1.12%, 50=4.64%, 100=0.28% 00:34:12.541 cpu : usr=90.07%, sys=9.53%, ctx=12, majf=0, minf=85 00:34:12.541 IO depths : 1=0.7%, 2=99.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:12.541 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:12.541 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:12.541 issued rwts: total=1788,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:12.541 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:12.541 filename0: (groupid=0, jobs=1): err= 0: pid=3433797: Wed Jul 24 23:19:44 2024 00:34:12.541 read: IOPS=245, BW=30.6MiB/s (32.1MB/s)(153MiB/5005msec) 00:34:12.541 slat (nsec): min=5855, max=35395, avg=8736.44, stdev=2503.07 00:34:12.541 clat (usec): min=3600, max=91217, avg=12222.60, stdev=14608.87 00:34:12.541 lat (usec): min=3606, max=91226, avg=12231.34, stdev=14609.08 00:34:12.541 clat percentiles (usec): 00:34:12.541 | 1.00th=[ 4047], 5.00th=[ 4490], 10.00th=[ 5014], 20.00th=[ 5669], 00:34:12.541 | 30.00th=[ 6325], 40.00th=[ 6718], 50.00th=[ 7111], 60.00th=[ 7635], 00:34:12.541 | 70.00th=[ 8291], 80.00th=[ 8979], 90.00th=[47449], 95.00th=[49021], 00:34:12.541 | 99.00th=[50070], 99.50th=[56886], 99.90th=[90702], 99.95th=[90702], 00:34:12.541 | 99.99th=[90702] 00:34:12.541 bw ( KiB/s): min=22272, max=46848, per=33.55%, avg=32284.44, stdev=8856.49, samples=9 00:34:12.541 iops : min= 174, max= 366, avg=252.22, stdev=69.19, samples=9 00:34:12.541 lat (msec) : 4=0.90%, 10=85.17%, 20=1.71%, 50=10.84%, 100=1.39% 00:34:12.541 cpu : usr=92.31%, sys=7.35%, ctx=6, majf=0, minf=163 00:34:12.541 IO depths : 1=2.3%, 2=97.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:12.541 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:12.541 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:12.541 issued rwts: total=1227,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:34:12.541 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:12.541 filename0: (groupid=0, jobs=1): err= 0: pid=3433798: Wed Jul 24 23:19:44 2024 00:34:12.541 read: IOPS=149, BW=18.7MiB/s (19.6MB/s)(93.5MiB/5001msec) 00:34:12.541 slat (usec): min=5, max=103, avg= 9.06, stdev= 4.25 00:34:12.541 clat (usec): min=3805, max=97660, avg=20042.50, stdev=20357.25 00:34:12.541 lat (usec): min=3812, max=97671, avg=20051.56, stdev=20357.84 00:34:12.541 clat percentiles (usec): 00:34:12.541 | 1.00th=[ 4178], 5.00th=[ 4621], 10.00th=[ 5669], 20.00th=[ 6718], 00:34:12.541 | 30.00th=[ 7701], 40.00th=[ 9241], 50.00th=[10552], 60.00th=[11863], 00:34:12.541 | 70.00th=[13304], 80.00th=[49546], 90.00th=[52691], 95.00th=[54264], 00:34:12.541 | 99.00th=[93848], 99.50th=[93848], 99.90th=[98042], 99.95th=[98042], 00:34:12.541 | 99.99th=[98042] 00:34:12.541 bw ( KiB/s): min= 9984, max=33024, per=20.07%, avg=19313.78, stdev=7009.81, samples=9 00:34:12.541 iops : min= 78, max= 258, avg=150.89, stdev=54.76, samples=9 00:34:12.541 lat (msec) : 4=0.27%, 10=45.59%, 20=30.08%, 50=4.68%, 100=19.39% 00:34:12.541 cpu : usr=92.72%, sys=6.96%, ctx=7, majf=0, minf=38 00:34:12.541 IO depths : 1=2.5%, 2=97.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:12.541 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:12.541 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:12.541 issued rwts: total=748,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:12.541 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:12.541 00:34:12.541 Run status group 0 (all jobs): 00:34:12.541 READ: bw=94.0MiB/s (98.5MB/s), 18.7MiB/s-44.7MiB/s (19.6MB/s-46.8MB/s), io=470MiB (493MB), run=5001-5005msec 00:34:12.801 23:19:44 -- target/dif.sh@107 -- # destroy_subsystems 0 00:34:12.801 23:19:44 -- target/dif.sh@43 -- # local sub 00:34:12.801 23:19:44 -- target/dif.sh@45 -- # for sub in "$@" 00:34:12.801 23:19:44 -- target/dif.sh@46 -- # 
destroy_subsystem 0 00:34:12.801 23:19:44 -- target/dif.sh@36 -- # local sub_id=0 00:34:12.801 23:19:44 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:12.801 23:19:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:12.801 23:19:44 -- common/autotest_common.sh@10 -- # set +x 00:34:12.801 23:19:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:12.801 23:19:44 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:12.801 23:19:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:12.801 23:19:44 -- common/autotest_common.sh@10 -- # set +x 00:34:12.801 23:19:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:12.801 23:19:44 -- target/dif.sh@109 -- # NULL_DIF=2 00:34:12.801 23:19:44 -- target/dif.sh@109 -- # bs=4k 00:34:12.801 23:19:44 -- target/dif.sh@109 -- # numjobs=8 00:34:12.801 23:19:44 -- target/dif.sh@109 -- # iodepth=16 00:34:12.801 23:19:44 -- target/dif.sh@109 -- # runtime= 00:34:12.801 23:19:44 -- target/dif.sh@109 -- # files=2 00:34:12.801 23:19:44 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:34:12.801 23:19:44 -- target/dif.sh@28 -- # local sub 00:34:12.801 23:19:44 -- target/dif.sh@30 -- # for sub in "$@" 00:34:12.801 23:19:44 -- target/dif.sh@31 -- # create_subsystem 0 00:34:12.801 23:19:44 -- target/dif.sh@18 -- # local sub_id=0 00:34:12.801 23:19:44 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:34:12.801 23:19:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:12.801 23:19:44 -- common/autotest_common.sh@10 -- # set +x 00:34:12.801 bdev_null0 00:34:12.801 23:19:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:12.801 23:19:45 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:12.801 23:19:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:12.801 23:19:45 -- common/autotest_common.sh@10 -- # set 
+x 00:34:12.801 23:19:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:12.801 23:19:45 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:12.801 23:19:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:12.801 23:19:45 -- common/autotest_common.sh@10 -- # set +x 00:34:12.801 23:19:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:12.801 23:19:45 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:12.801 23:19:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:12.801 23:19:45 -- common/autotest_common.sh@10 -- # set +x 00:34:12.801 [2024-07-24 23:19:45.020885] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:12.801 23:19:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:12.801 23:19:45 -- target/dif.sh@30 -- # for sub in "$@" 00:34:12.801 23:19:45 -- target/dif.sh@31 -- # create_subsystem 1 00:34:12.801 23:19:45 -- target/dif.sh@18 -- # local sub_id=1 00:34:12.801 23:19:45 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:34:12.801 23:19:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:12.801 23:19:45 -- common/autotest_common.sh@10 -- # set +x 00:34:12.801 bdev_null1 00:34:12.802 23:19:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:12.802 23:19:45 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:12.802 23:19:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:12.802 23:19:45 -- common/autotest_common.sh@10 -- # set +x 00:34:12.802 23:19:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:12.802 23:19:45 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:12.802 23:19:45 -- common/autotest_common.sh@551 -- # xtrace_disable 
00:34:12.802 23:19:45 -- common/autotest_common.sh@10 -- # set +x 00:34:12.802 23:19:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:12.802 23:19:45 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:12.802 23:19:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:12.802 23:19:45 -- common/autotest_common.sh@10 -- # set +x 00:34:12.802 23:19:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:12.802 23:19:45 -- target/dif.sh@30 -- # for sub in "$@" 00:34:12.802 23:19:45 -- target/dif.sh@31 -- # create_subsystem 2 00:34:12.802 23:19:45 -- target/dif.sh@18 -- # local sub_id=2 00:34:12.802 23:19:45 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:34:12.802 23:19:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:12.802 23:19:45 -- common/autotest_common.sh@10 -- # set +x 00:34:12.802 bdev_null2 00:34:12.802 23:19:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:12.802 23:19:45 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:34:12.802 23:19:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:12.802 23:19:45 -- common/autotest_common.sh@10 -- # set +x 00:34:12.802 23:19:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:12.802 23:19:45 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:34:12.802 23:19:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:12.802 23:19:45 -- common/autotest_common.sh@10 -- # set +x 00:34:12.802 23:19:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:12.802 23:19:45 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:34:12.802 23:19:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:12.802 23:19:45 -- common/autotest_common.sh@10 -- # 
set +x 00:34:12.802 23:19:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:12.802 23:19:45 -- target/dif.sh@112 -- # fio /dev/fd/62 00:34:12.802 23:19:45 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:34:12.802 23:19:45 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:12.802 23:19:45 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:34:12.802 23:19:45 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:12.802 23:19:45 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:34:12.802 23:19:45 -- nvmf/common.sh@520 -- # config=() 00:34:12.802 23:19:45 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:12.802 23:19:45 -- common/autotest_common.sh@1318 -- # local sanitizers 00:34:12.802 23:19:45 -- nvmf/common.sh@520 -- # local subsystem config 00:34:12.802 23:19:45 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:12.802 23:19:45 -- common/autotest_common.sh@1320 -- # shift 00:34:12.802 23:19:45 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:34:12.802 23:19:45 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:34:12.802 23:19:45 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:34:12.802 23:19:45 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:34:12.802 { 00:34:12.802 "params": { 00:34:12.802 "name": "Nvme$subsystem", 00:34:12.802 "trtype": "$TEST_TRANSPORT", 00:34:12.802 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:12.802 "adrfam": "ipv4", 00:34:12.802 "trsvcid": "$NVMF_PORT", 00:34:12.802 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:12.802 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:12.802 "hdgst": ${hdgst:-false}, 00:34:12.802 "ddgst": ${ddgst:-false} 00:34:12.802 }, 
00:34:12.802 "method": "bdev_nvme_attach_controller" 00:34:12.802 } 00:34:12.802 EOF 00:34:12.802 )") 00:34:12.802 23:19:45 -- target/dif.sh@82 -- # gen_fio_conf 00:34:12.802 23:19:45 -- target/dif.sh@54 -- # local file 00:34:12.802 23:19:45 -- target/dif.sh@56 -- # cat 00:34:12.802 23:19:45 -- nvmf/common.sh@542 -- # cat 00:34:12.802 23:19:45 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:12.802 23:19:45 -- common/autotest_common.sh@1324 -- # grep libasan 00:34:12.802 23:19:45 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:34:12.802 23:19:45 -- target/dif.sh@72 -- # (( file = 1 )) 00:34:12.802 23:19:45 -- target/dif.sh@72 -- # (( file <= files )) 00:34:12.802 23:19:45 -- target/dif.sh@73 -- # cat 00:34:12.802 23:19:45 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:34:12.802 23:19:45 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:34:12.802 { 00:34:12.802 "params": { 00:34:12.802 "name": "Nvme$subsystem", 00:34:12.802 "trtype": "$TEST_TRANSPORT", 00:34:12.802 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:12.802 "adrfam": "ipv4", 00:34:12.802 "trsvcid": "$NVMF_PORT", 00:34:12.802 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:12.802 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:12.802 "hdgst": ${hdgst:-false}, 00:34:12.802 "ddgst": ${ddgst:-false} 00:34:12.802 }, 00:34:12.802 "method": "bdev_nvme_attach_controller" 00:34:12.802 } 00:34:12.802 EOF 00:34:12.802 )") 00:34:12.802 23:19:45 -- target/dif.sh@72 -- # (( file++ )) 00:34:12.802 23:19:45 -- nvmf/common.sh@542 -- # cat 00:34:12.802 23:19:45 -- target/dif.sh@72 -- # (( file <= files )) 00:34:12.802 23:19:45 -- target/dif.sh@73 -- # cat 00:34:12.802 23:19:45 -- target/dif.sh@72 -- # (( file++ )) 00:34:12.802 23:19:45 -- target/dif.sh@72 -- # (( file <= files )) 00:34:12.802 23:19:45 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:34:12.802 23:19:45 -- nvmf/common.sh@542 -- # config+=("$(cat 
<<-EOF 00:34:12.802 { 00:34:12.802 "params": { 00:34:12.802 "name": "Nvme$subsystem", 00:34:12.802 "trtype": "$TEST_TRANSPORT", 00:34:12.802 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:12.802 "adrfam": "ipv4", 00:34:12.802 "trsvcid": "$NVMF_PORT", 00:34:12.802 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:12.802 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:12.802 "hdgst": ${hdgst:-false}, 00:34:12.802 "ddgst": ${ddgst:-false} 00:34:12.802 }, 00:34:12.802 "method": "bdev_nvme_attach_controller" 00:34:12.802 } 00:34:12.802 EOF 00:34:12.802 )") 00:34:12.802 23:19:45 -- nvmf/common.sh@542 -- # cat 00:34:12.802 23:19:45 -- nvmf/common.sh@544 -- # jq . 00:34:12.802 23:19:45 -- nvmf/common.sh@545 -- # IFS=, 00:34:12.802 23:19:45 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:34:12.802 "params": { 00:34:12.802 "name": "Nvme0", 00:34:12.802 "trtype": "tcp", 00:34:12.802 "traddr": "10.0.0.2", 00:34:12.802 "adrfam": "ipv4", 00:34:12.802 "trsvcid": "4420", 00:34:12.802 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:12.802 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:12.802 "hdgst": false, 00:34:12.802 "ddgst": false 00:34:12.802 }, 00:34:12.802 "method": "bdev_nvme_attach_controller" 00:34:12.802 },{ 00:34:12.802 "params": { 00:34:12.802 "name": "Nvme1", 00:34:12.802 "trtype": "tcp", 00:34:12.802 "traddr": "10.0.0.2", 00:34:12.802 "adrfam": "ipv4", 00:34:12.802 "trsvcid": "4420", 00:34:12.802 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:12.802 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:12.802 "hdgst": false, 00:34:12.802 "ddgst": false 00:34:12.802 }, 00:34:12.802 "method": "bdev_nvme_attach_controller" 00:34:12.802 },{ 00:34:12.802 "params": { 00:34:12.802 "name": "Nvme2", 00:34:12.802 "trtype": "tcp", 00:34:12.802 "traddr": "10.0.0.2", 00:34:12.802 "adrfam": "ipv4", 00:34:12.802 "trsvcid": "4420", 00:34:12.802 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:34:12.802 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:34:12.802 "hdgst": false, 00:34:12.802 "ddgst": 
false 00:34:12.802 }, 00:34:12.802 "method": "bdev_nvme_attach_controller" 00:34:12.802 }' 00:34:12.802 23:19:45 -- common/autotest_common.sh@1324 -- # asan_lib= 00:34:12.802 23:19:45 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:34:12.802 23:19:45 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:34:12.802 23:19:45 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:12.802 23:19:45 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:34:12.802 23:19:45 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:34:12.802 23:19:45 -- common/autotest_common.sh@1324 -- # asan_lib= 00:34:12.802 23:19:45 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:34:12.802 23:19:45 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:12.802 23:19:45 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:13.061 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:13.061 ... 00:34:13.061 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:13.061 ... 00:34:13.061 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:13.061 ... 00:34:13.061 fio-3.35 00:34:13.061 Starting 24 threads 00:34:13.320 EAL: No free 2048 kB hugepages reported on node 1 00:34:14.256 [2024-07-24 23:19:46.402644] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:34:14.256 [2024-07-24 23:19:46.402688] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:34:24.227 00:34:24.227 filename0: (groupid=0, jobs=1): err= 0: pid=3435010: Wed Jul 24 23:19:56 2024 00:34:24.227 read: IOPS=664, BW=2660KiB/s (2724kB/s)(26.0MiB/10022msec) 00:34:24.227 slat (usec): min=6, max=125, avg=18.22, stdev=15.19 00:34:24.227 clat (usec): min=4439, max=43860, avg=23928.52, stdev=3761.08 00:34:24.227 lat (usec): min=4453, max=43879, avg=23946.75, stdev=3762.34 00:34:24.227 clat percentiles (usec): 00:34:24.227 | 1.00th=[ 7570], 5.00th=[15795], 10.00th=[22414], 20.00th=[23725], 00:34:24.227 | 30.00th=[23987], 40.00th=[24249], 50.00th=[24511], 60.00th=[24511], 00:34:24.227 | 70.00th=[24773], 80.00th=[25035], 90.00th=[25560], 95.00th=[26608], 00:34:24.227 | 99.00th=[35914], 99.50th=[38011], 99.90th=[40109], 99.95th=[42206], 00:34:24.227 | 99.99th=[43779] 00:34:24.227 bw ( KiB/s): min= 2512, max= 3088, per=4.33%, avg=2659.15, stdev=143.68, samples=20 00:34:24.227 iops : min= 628, max= 772, avg=664.75, stdev=35.95, samples=20 00:34:24.227 lat (msec) : 10=1.68%, 20=6.18%, 50=92.14% 00:34:24.227 cpu : usr=97.37%, sys=2.27%, ctx=22, majf=0, minf=71 00:34:24.227 IO depths : 1=4.4%, 2=8.8%, 4=19.2%, 8=58.7%, 16=8.9%, 32=0.0%, >=64=0.0% 00:34:24.227 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.227 complete : 0=0.0%, 4=92.9%, 8=2.0%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.227 issued rwts: total=6664,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:24.227 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:24.227 filename0: (groupid=0, jobs=1): err= 0: pid=3435011: Wed Jul 24 23:19:56 2024 00:34:24.227 read: IOPS=642, BW=2570KiB/s (2631kB/s)(25.1MiB/10012msec) 00:34:24.227 slat (usec): min=6, max=121, avg=37.08, stdev=20.32 00:34:24.227 clat (usec): min=16065, max=39585, avg=24598.77, stdev=1340.50 00:34:24.227 lat (usec): min=16073, max=39618, avg=24635.86, stdev=1337.72 
00:34:24.227 clat percentiles (usec): 00:34:24.227 | 1.00th=[22676], 5.00th=[23200], 10.00th=[23725], 20.00th=[23987], 00:34:24.227 | 30.00th=[24249], 40.00th=[24249], 50.00th=[24511], 60.00th=[24773], 00:34:24.227 | 70.00th=[24773], 80.00th=[25035], 90.00th=[25297], 95.00th=[25822], 00:34:24.227 | 99.00th=[29754], 99.50th=[32900], 99.90th=[39584], 99.95th=[39584], 00:34:24.227 | 99.99th=[39584] 00:34:24.227 bw ( KiB/s): min= 2427, max= 2688, per=4.18%, avg=2566.47, stdev=67.67, samples=19 00:34:24.227 iops : min= 606, max= 672, avg=641.58, stdev=17.00, samples=19 00:34:24.227 lat (msec) : 20=0.40%, 50=99.60% 00:34:24.227 cpu : usr=97.82%, sys=1.82%, ctx=21, majf=0, minf=47 00:34:24.227 IO depths : 1=6.2%, 2=12.3%, 4=24.8%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:34:24.227 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.227 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.227 issued rwts: total=6432,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:24.227 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:24.227 filename0: (groupid=0, jobs=1): err= 0: pid=3435012: Wed Jul 24 23:19:56 2024 00:34:24.227 read: IOPS=639, BW=2559KiB/s (2620kB/s)(25.0MiB/10012msec) 00:34:24.227 slat (usec): min=6, max=126, avg=29.27, stdev=20.87 00:34:24.227 clat (usec): min=5157, max=46298, avg=24806.70, stdev=3551.19 00:34:24.227 lat (usec): min=5165, max=46305, avg=24835.97, stdev=3551.38 00:34:24.227 clat percentiles (usec): 00:34:24.227 | 1.00th=[13566], 5.00th=[20579], 10.00th=[23200], 20.00th=[23725], 00:34:24.227 | 30.00th=[24249], 40.00th=[24249], 50.00th=[24511], 60.00th=[24773], 00:34:24.227 | 70.00th=[25035], 80.00th=[25297], 90.00th=[26346], 95.00th=[30540], 00:34:24.227 | 99.00th=[39584], 99.50th=[42206], 99.90th=[46400], 99.95th=[46400], 00:34:24.227 | 99.99th=[46400] 00:34:24.227 bw ( KiB/s): min= 2384, max= 2688, per=4.16%, avg=2555.05, stdev=84.82, samples=19 00:34:24.227 iops : min= 596, max= 672, 
avg=638.74, stdev=21.22, samples=19 00:34:24.227 lat (msec) : 10=0.23%, 20=4.29%, 50=95.47% 00:34:24.227 cpu : usr=97.85%, sys=1.73%, ctx=37, majf=0, minf=75 00:34:24.227 IO depths : 1=2.7%, 2=5.7%, 4=16.4%, 8=64.7%, 16=10.6%, 32=0.0%, >=64=0.0% 00:34:24.227 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.227 complete : 0=0.0%, 4=92.3%, 8=2.7%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.227 issued rwts: total=6405,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:24.227 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:24.227 filename0: (groupid=0, jobs=1): err= 0: pid=3435013: Wed Jul 24 23:19:56 2024 00:34:24.227 read: IOPS=616, BW=2466KiB/s (2525kB/s)(24.1MiB/10003msec) 00:34:24.227 slat (usec): min=4, max=115, avg=22.77, stdev=19.33 00:34:24.227 clat (usec): min=4447, max=48244, avg=25811.11, stdev=5308.59 00:34:24.227 lat (usec): min=4454, max=48253, avg=25833.88, stdev=5307.04 00:34:24.227 clat percentiles (usec): 00:34:24.227 | 1.00th=[10945], 5.00th=[18482], 10.00th=[23200], 20.00th=[23725], 00:34:24.227 | 30.00th=[24249], 40.00th=[24511], 50.00th=[24773], 60.00th=[25035], 00:34:24.227 | 70.00th=[25560], 80.00th=[27132], 90.00th=[33424], 95.00th=[36963], 00:34:24.227 | 99.00th=[42206], 99.50th=[42730], 99.90th=[45351], 99.95th=[45876], 00:34:24.227 | 99.99th=[48497] 00:34:24.227 bw ( KiB/s): min= 2052, max= 2608, per=3.98%, avg=2447.37, stdev=143.17, samples=19 00:34:24.227 iops : min= 513, max= 652, avg=611.74, stdev=35.80, samples=19 00:34:24.227 lat (msec) : 10=0.78%, 20=5.22%, 50=94.00% 00:34:24.227 cpu : usr=97.69%, sys=1.94%, ctx=19, majf=0, minf=96 00:34:24.227 IO depths : 1=1.0%, 2=2.3%, 4=11.7%, 8=71.3%, 16=13.7%, 32=0.0%, >=64=0.0% 00:34:24.228 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.228 complete : 0=0.0%, 4=91.5%, 8=4.9%, 16=3.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.228 issued rwts: total=6166,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:24.228 latency : target=0, 
window=0, percentile=100.00%, depth=16 00:34:24.228 filename0: (groupid=0, jobs=1): err= 0: pid=3435014: Wed Jul 24 23:19:56 2024 00:34:24.228 read: IOPS=641, BW=2565KiB/s (2627kB/s)(25.1MiB/10005msec) 00:34:24.228 slat (usec): min=5, max=127, avg=28.88, stdev=21.60 00:34:24.228 clat (usec): min=8871, max=42633, avg=24730.66, stdev=2802.29 00:34:24.228 lat (usec): min=8879, max=42656, avg=24759.54, stdev=2802.37 00:34:24.228 clat percentiles (usec): 00:34:24.228 | 1.00th=[15795], 5.00th=[22938], 10.00th=[23462], 20.00th=[23987], 00:34:24.228 | 30.00th=[24249], 40.00th=[24249], 50.00th=[24511], 60.00th=[24773], 00:34:24.228 | 70.00th=[24773], 80.00th=[25297], 90.00th=[25560], 95.00th=[28181], 00:34:24.228 | 99.00th=[36439], 99.50th=[41681], 99.90th=[42730], 99.95th=[42730], 00:34:24.228 | 99.99th=[42730] 00:34:24.228 bw ( KiB/s): min= 2400, max= 2656, per=4.16%, avg=2559.05, stdev=52.51, samples=19 00:34:24.228 iops : min= 600, max= 664, avg=639.68, stdev=13.15, samples=19 00:34:24.228 lat (msec) : 10=0.06%, 20=2.95%, 50=96.99% 00:34:24.228 cpu : usr=97.81%, sys=1.83%, ctx=22, majf=0, minf=76 00:34:24.228 IO depths : 1=2.4%, 2=4.7%, 4=13.1%, 8=67.3%, 16=12.6%, 32=0.0%, >=64=0.0% 00:34:24.228 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.228 complete : 0=0.0%, 4=91.9%, 8=4.8%, 16=3.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.228 issued rwts: total=6416,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:24.228 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:24.228 filename0: (groupid=0, jobs=1): err= 0: pid=3435015: Wed Jul 24 23:19:56 2024 00:34:24.228 read: IOPS=643, BW=2574KiB/s (2636kB/s)(25.2MiB/10014msec) 00:34:24.228 slat (usec): min=6, max=123, avg=26.85, stdev=21.09 00:34:24.228 clat (usec): min=7077, max=53620, avg=24662.99, stdev=3510.68 00:34:24.228 lat (usec): min=7084, max=53653, avg=24689.84, stdev=3510.73 00:34:24.228 clat percentiles (usec): 00:34:24.228 | 1.00th=[13829], 5.00th=[20841], 10.00th=[23200], 
20.00th=[23725], 00:34:24.228 | 30.00th=[24249], 40.00th=[24249], 50.00th=[24511], 60.00th=[24773], 00:34:24.228 | 70.00th=[25035], 80.00th=[25297], 90.00th=[25822], 95.00th=[28181], 00:34:24.228 | 99.00th=[38536], 99.50th=[42730], 99.90th=[53216], 99.95th=[53740], 00:34:24.228 | 99.99th=[53740] 00:34:24.228 bw ( KiB/s): min= 2304, max= 2746, per=4.18%, avg=2571.47, stdev=90.30, samples=19 00:34:24.228 iops : min= 576, max= 686, avg=642.84, stdev=22.52, samples=19 00:34:24.228 lat (msec) : 10=0.34%, 20=3.93%, 50=95.48%, 100=0.25% 00:34:24.228 cpu : usr=97.68%, sys=1.94%, ctx=21, majf=0, minf=86 00:34:24.228 IO depths : 1=1.9%, 2=4.5%, 4=15.4%, 8=65.9%, 16=12.3%, 32=0.0%, >=64=0.0% 00:34:24.228 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.228 complete : 0=0.0%, 4=92.7%, 8=2.9%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.228 issued rwts: total=6444,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:24.228 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:24.228 filename0: (groupid=0, jobs=1): err= 0: pid=3435016: Wed Jul 24 23:19:56 2024 00:34:24.228 read: IOPS=650, BW=2601KiB/s (2663kB/s)(25.4MiB/10018msec) 00:34:24.228 slat (usec): min=6, max=133, avg=29.05, stdev=21.16 00:34:24.228 clat (usec): min=3784, max=43763, avg=24383.28, stdev=3087.07 00:34:24.228 lat (usec): min=3791, max=43791, avg=24412.33, stdev=3086.75 00:34:24.228 clat percentiles (usec): 00:34:24.228 | 1.00th=[ 9634], 5.00th=[22676], 10.00th=[23200], 20.00th=[23725], 00:34:24.228 | 30.00th=[24249], 40.00th=[24249], 50.00th=[24511], 60.00th=[24773], 00:34:24.228 | 70.00th=[24773], 80.00th=[25035], 90.00th=[25560], 95.00th=[26346], 00:34:24.228 | 99.00th=[35914], 99.50th=[38011], 99.90th=[43779], 99.95th=[43779], 00:34:24.228 | 99.99th=[43779] 00:34:24.228 bw ( KiB/s): min= 2554, max= 2904, per=4.22%, avg=2593.79, stdev=85.55, samples=19 00:34:24.228 iops : min= 638, max= 726, avg=648.42, stdev=21.40, samples=19 00:34:24.228 lat (msec) : 4=0.06%, 10=0.97%, 
20=2.24%, 50=96.73% 00:34:24.228 cpu : usr=97.71%, sys=1.92%, ctx=23, majf=0, minf=64 00:34:24.228 IO depths : 1=4.8%, 2=9.9%, 4=21.8%, 8=55.4%, 16=8.1%, 32=0.0%, >=64=0.0% 00:34:24.228 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.228 complete : 0=0.0%, 4=93.7%, 8=0.8%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.228 issued rwts: total=6513,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:24.228 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:24.228 filename0: (groupid=0, jobs=1): err= 0: pid=3435017: Wed Jul 24 23:19:56 2024 00:34:24.228 read: IOPS=646, BW=2585KiB/s (2647kB/s)(25.3MiB/10013msec) 00:34:24.228 slat (usec): min=6, max=134, avg=34.03, stdev=22.76 00:34:24.228 clat (usec): min=10216, max=51663, avg=24456.02, stdev=2477.82 00:34:24.228 lat (usec): min=10223, max=51688, avg=24490.05, stdev=2477.56 00:34:24.228 clat percentiles (usec): 00:34:24.228 | 1.00th=[15270], 5.00th=[22414], 10.00th=[23200], 20.00th=[23725], 00:34:24.228 | 30.00th=[23987], 40.00th=[24249], 50.00th=[24511], 60.00th=[24773], 00:34:24.228 | 70.00th=[24773], 80.00th=[25035], 90.00th=[25560], 95.00th=[26084], 00:34:24.228 | 99.00th=[33162], 99.50th=[40109], 99.90th=[48497], 99.95th=[51643], 00:34:24.228 | 99.99th=[51643] 00:34:24.228 bw ( KiB/s): min= 2432, max= 2698, per=4.19%, avg=2576.21, stdev=65.49, samples=19 00:34:24.228 iops : min= 608, max= 674, avg=644.00, stdev=16.34, samples=19 00:34:24.228 lat (msec) : 20=3.23%, 50=96.69%, 100=0.08% 00:34:24.228 cpu : usr=97.38%, sys=2.26%, ctx=18, majf=0, minf=86 00:34:24.228 IO depths : 1=4.0%, 2=8.1%, 4=17.7%, 8=60.3%, 16=9.8%, 32=0.0%, >=64=0.0% 00:34:24.228 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.228 complete : 0=0.0%, 4=92.6%, 8=3.0%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.228 issued rwts: total=6472,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:24.228 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:24.228 filename1: 
(groupid=0, jobs=1): err= 0: pid=3435018: Wed Jul 24 23:19:56 2024 00:34:24.228 read: IOPS=636, BW=2546KiB/s (2607kB/s)(24.9MiB/10009msec) 00:34:24.228 slat (usec): min=6, max=121, avg=26.08, stdev=21.44 00:34:24.228 clat (usec): min=7528, max=45760, avg=24959.08, stdev=4047.46 00:34:24.228 lat (usec): min=7535, max=45777, avg=24985.15, stdev=4047.15 00:34:24.228 clat percentiles (usec): 00:34:24.228 | 1.00th=[12911], 5.00th=[19530], 10.00th=[22938], 20.00th=[23725], 00:34:24.228 | 30.00th=[24249], 40.00th=[24511], 50.00th=[24511], 60.00th=[24773], 00:34:24.228 | 70.00th=[25035], 80.00th=[25297], 90.00th=[27919], 95.00th=[32900], 00:34:24.228 | 99.00th=[40109], 99.50th=[42730], 99.90th=[45351], 99.95th=[45351], 00:34:24.228 | 99.99th=[45876] 00:34:24.228 bw ( KiB/s): min= 2352, max= 2688, per=4.12%, avg=2533.26, stdev=77.51, samples=19 00:34:24.228 iops : min= 588, max= 672, avg=633.26, stdev=19.38, samples=19 00:34:24.228 lat (msec) : 10=0.53%, 20=4.82%, 50=94.65% 00:34:24.228 cpu : usr=97.65%, sys=1.94%, ctx=21, majf=0, minf=67 00:34:24.228 IO depths : 1=1.3%, 2=2.9%, 4=10.4%, 8=72.1%, 16=13.3%, 32=0.0%, >=64=0.0% 00:34:24.228 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.228 complete : 0=0.0%, 4=91.0%, 8=5.2%, 16=3.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.228 issued rwts: total=6370,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:24.228 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:24.228 filename1: (groupid=0, jobs=1): err= 0: pid=3435019: Wed Jul 24 23:19:56 2024 00:34:24.228 read: IOPS=646, BW=2586KiB/s (2649kB/s)(25.3MiB/10012msec) 00:34:24.228 slat (usec): min=6, max=137, avg=32.03, stdev=22.20 00:34:24.228 clat (usec): min=8227, max=44965, avg=24482.64, stdev=2634.07 00:34:24.228 lat (usec): min=8247, max=44971, avg=24514.67, stdev=2634.65 00:34:24.228 clat percentiles (usec): 00:34:24.228 | 1.00th=[15533], 5.00th=[22414], 10.00th=[23462], 20.00th=[23725], 00:34:24.228 | 30.00th=[23987], 40.00th=[24249], 
50.00th=[24511], 60.00th=[24773], 00:34:24.228 | 70.00th=[24773], 80.00th=[25035], 90.00th=[25560], 95.00th=[26346], 00:34:24.228 | 99.00th=[35390], 99.50th=[40633], 99.90th=[42730], 99.95th=[44827], 00:34:24.228 | 99.99th=[44827] 00:34:24.228 bw ( KiB/s): min= 2352, max= 2906, per=4.21%, avg=2584.11, stdev=113.36, samples=19 00:34:24.228 iops : min= 588, max= 726, avg=646.00, stdev=28.26, samples=19 00:34:24.228 lat (msec) : 10=0.15%, 20=3.78%, 50=96.06% 00:34:24.228 cpu : usr=97.60%, sys=2.04%, ctx=21, majf=0, minf=51 00:34:24.228 IO depths : 1=4.5%, 2=9.1%, 4=20.7%, 8=57.4%, 16=8.4%, 32=0.0%, >=64=0.0% 00:34:24.228 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.228 complete : 0=0.0%, 4=93.2%, 8=1.3%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.228 issued rwts: total=6474,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:24.228 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:24.228 filename1: (groupid=0, jobs=1): err= 0: pid=3435020: Wed Jul 24 23:19:56 2024 00:34:24.228 read: IOPS=644, BW=2579KiB/s (2640kB/s)(25.2MiB/10012msec) 00:34:24.228 slat (usec): min=6, max=127, avg=28.54, stdev=20.04 00:34:24.228 clat (usec): min=14041, max=41808, avg=24602.97, stdev=1842.16 00:34:24.228 lat (usec): min=14056, max=41822, avg=24631.51, stdev=1840.74 00:34:24.228 clat percentiles (usec): 00:34:24.228 | 1.00th=[19006], 5.00th=[23200], 10.00th=[23462], 20.00th=[23987], 00:34:24.228 | 30.00th=[24249], 40.00th=[24511], 50.00th=[24511], 60.00th=[24773], 00:34:24.228 | 70.00th=[24773], 80.00th=[25035], 90.00th=[25560], 95.00th=[26346], 00:34:24.228 | 99.00th=[30540], 99.50th=[35390], 99.90th=[39584], 99.95th=[41157], 00:34:24.228 | 99.99th=[41681] 00:34:24.228 bw ( KiB/s): min= 2432, max= 2688, per=4.19%, avg=2575.68, stdev=59.26, samples=19 00:34:24.228 iops : min= 608, max= 672, avg=643.89, stdev=14.82, samples=19 00:34:24.228 lat (msec) : 20=1.64%, 50=98.36% 00:34:24.228 cpu : usr=97.38%, sys=2.26%, ctx=21, majf=0, minf=77 
00:34:24.228 IO depths : 1=5.0%, 2=10.4%, 4=23.2%, 8=53.6%, 16=7.8%, 32=0.0%, >=64=0.0% 00:34:24.228 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.228 complete : 0=0.0%, 4=94.0%, 8=0.3%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.228 issued rwts: total=6454,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:24.228 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:24.228 filename1: (groupid=0, jobs=1): err= 0: pid=3435021: Wed Jul 24 23:19:56 2024 00:34:24.229 read: IOPS=655, BW=2621KiB/s (2684kB/s)(25.6MiB/10012msec) 00:34:24.229 slat (nsec): min=6034, max=85747, avg=24191.21, stdev=16388.27 00:34:24.229 clat (usec): min=5052, max=42945, avg=24229.66, stdev=3118.53 00:34:24.229 lat (usec): min=5061, max=42959, avg=24253.85, stdev=3120.01 00:34:24.229 clat percentiles (usec): 00:34:24.229 | 1.00th=[ 7242], 5.00th=[20841], 10.00th=[23200], 20.00th=[23987], 00:34:24.229 | 30.00th=[24249], 40.00th=[24249], 50.00th=[24511], 60.00th=[24773], 00:34:24.229 | 70.00th=[24773], 80.00th=[25035], 90.00th=[25560], 95.00th=[26346], 00:34:24.229 | 99.00th=[33817], 99.50th=[36439], 99.90th=[41157], 99.95th=[42730], 00:34:24.229 | 99.99th=[42730] 00:34:24.229 bw ( KiB/s): min= 2554, max= 2949, per=4.26%, avg=2620.58, stdev=103.48, samples=19 00:34:24.229 iops : min= 638, max= 737, avg=655.11, stdev=25.84, samples=19 00:34:24.229 lat (msec) : 10=1.34%, 20=3.16%, 50=95.50% 00:34:24.229 cpu : usr=97.99%, sys=1.58%, ctx=45, majf=0, minf=59 00:34:24.229 IO depths : 1=4.1%, 2=8.8%, 4=21.5%, 8=56.9%, 16=8.8%, 32=0.0%, >=64=0.0% 00:34:24.229 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.229 complete : 0=0.0%, 4=93.6%, 8=0.7%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.229 issued rwts: total=6560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:24.229 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:24.229 filename1: (groupid=0, jobs=1): err= 0: pid=3435022: Wed Jul 24 23:19:56 2024 00:34:24.229 read: 
IOPS=647, BW=2590KiB/s (2652kB/s)(25.3MiB/10012msec) 00:34:24.229 slat (usec): min=6, max=133, avg=31.97, stdev=21.87 00:34:24.229 clat (usec): min=6237, max=48658, avg=24460.29, stdev=2550.10 00:34:24.229 lat (usec): min=6245, max=48682, avg=24492.26, stdev=2551.10 00:34:24.229 clat percentiles (usec): 00:34:24.229 | 1.00th=[15401], 5.00th=[22152], 10.00th=[23200], 20.00th=[23725], 00:34:24.229 | 30.00th=[23987], 40.00th=[24249], 50.00th=[24511], 60.00th=[24773], 00:34:24.229 | 70.00th=[25035], 80.00th=[25297], 90.00th=[25560], 95.00th=[26346], 00:34:24.229 | 99.00th=[35914], 99.50th=[39584], 99.90th=[42206], 99.95th=[42730], 00:34:24.229 | 99.99th=[48497] 00:34:24.229 bw ( KiB/s): min= 2432, max= 2736, per=4.21%, avg=2587.47, stdev=73.70, samples=19 00:34:24.229 iops : min= 608, max= 684, avg=646.84, stdev=18.44, samples=19 00:34:24.229 lat (msec) : 10=0.06%, 20=3.61%, 50=96.33% 00:34:24.229 cpu : usr=97.64%, sys=1.97%, ctx=20, majf=0, minf=75 00:34:24.229 IO depths : 1=3.4%, 2=7.2%, 4=19.5%, 8=60.1%, 16=9.8%, 32=0.0%, >=64=0.0% 00:34:24.229 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.229 complete : 0=0.0%, 4=93.2%, 8=1.7%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.229 issued rwts: total=6482,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:24.229 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:24.229 filename1: (groupid=0, jobs=1): err= 0: pid=3435023: Wed Jul 24 23:19:56 2024 00:34:24.229 read: IOPS=625, BW=2502KiB/s (2562kB/s)(24.4MiB/10004msec) 00:34:24.229 slat (usec): min=6, max=126, avg=22.32, stdev=19.03 00:34:24.229 clat (usec): min=4274, max=49054, avg=25456.78, stdev=4652.66 00:34:24.229 lat (usec): min=4285, max=49066, avg=25479.10, stdev=4651.96 00:34:24.229 clat percentiles (usec): 00:34:24.229 | 1.00th=[13435], 5.00th=[18482], 10.00th=[22676], 20.00th=[23725], 00:34:24.229 | 30.00th=[24249], 40.00th=[24511], 50.00th=[24773], 60.00th=[25035], 00:34:24.229 | 70.00th=[25297], 80.00th=[26608], 
90.00th=[30802], 95.00th=[34341], 00:34:24.229 | 99.00th=[41681], 99.50th=[42730], 99.90th=[44827], 99.95th=[49021], 00:34:24.229 | 99.99th=[49021] 00:34:24.229 bw ( KiB/s): min= 2332, max= 2608, per=4.05%, avg=2490.37, stdev=74.17, samples=19 00:34:24.229 iops : min= 583, max= 652, avg=622.47, stdev=18.64, samples=19 00:34:24.229 lat (msec) : 10=0.54%, 20=6.23%, 50=93.22% 00:34:24.229 cpu : usr=97.45%, sys=2.18%, ctx=23, majf=0, minf=148 00:34:24.229 IO depths : 1=0.3%, 2=0.7%, 4=6.9%, 8=77.0%, 16=15.1%, 32=0.0%, >=64=0.0% 00:34:24.229 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.229 complete : 0=0.0%, 4=90.4%, 8=6.6%, 16=3.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.229 issued rwts: total=6258,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:24.229 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:24.229 filename1: (groupid=0, jobs=1): err= 0: pid=3435024: Wed Jul 24 23:19:56 2024 00:34:24.229 read: IOPS=644, BW=2579KiB/s (2641kB/s)(25.2MiB/10005msec) 00:34:24.229 slat (usec): min=4, max=123, avg=35.42, stdev=22.31 00:34:24.229 clat (usec): min=5940, max=42592, avg=24509.83, stdev=2435.76 00:34:24.229 lat (usec): min=5946, max=42611, avg=24545.25, stdev=2435.31 00:34:24.229 clat percentiles (usec): 00:34:24.229 | 1.00th=[13042], 5.00th=[23200], 10.00th=[23462], 20.00th=[23987], 00:34:24.229 | 30.00th=[23987], 40.00th=[24249], 50.00th=[24511], 60.00th=[24773], 00:34:24.229 | 70.00th=[24773], 80.00th=[25035], 90.00th=[25297], 95.00th=[26084], 00:34:24.229 | 99.00th=[35914], 99.50th=[38011], 99.90th=[39584], 99.95th=[42730], 00:34:24.229 | 99.99th=[42730] 00:34:24.229 bw ( KiB/s): min= 2395, max= 2688, per=4.18%, avg=2566.68, stdev=62.42, samples=19 00:34:24.229 iops : min= 598, max= 672, avg=641.58, stdev=15.71, samples=19 00:34:24.229 lat (msec) : 10=0.26%, 20=1.81%, 50=97.92% 00:34:24.229 cpu : usr=98.10%, sys=1.54%, ctx=20, majf=0, minf=49 00:34:24.229 IO depths : 1=4.6%, 2=9.1%, 4=20.1%, 8=57.2%, 16=9.0%, 32=0.0%, 
>=64=0.0% 00:34:24.229 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.229 complete : 0=0.0%, 4=93.1%, 8=2.1%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.229 issued rwts: total=6450,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:24.229 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:24.229 filename1: (groupid=0, jobs=1): err= 0: pid=3435025: Wed Jul 24 23:19:56 2024 00:34:24.229 read: IOPS=596, BW=2387KiB/s (2444kB/s)(23.3MiB/10004msec) 00:34:24.229 slat (usec): min=4, max=136, avg=26.51, stdev=21.31 00:34:24.229 clat (usec): min=4635, max=54395, avg=26622.55, stdev=5283.16 00:34:24.229 lat (usec): min=4642, max=54408, avg=26649.05, stdev=5280.10 00:34:24.229 clat percentiles (usec): 00:34:24.229 | 1.00th=[11731], 5.00th=[20841], 10.00th=[23462], 20.00th=[23987], 00:34:24.229 | 30.00th=[24249], 40.00th=[24511], 50.00th=[24773], 60.00th=[25297], 00:34:24.229 | 70.00th=[27657], 80.00th=[30802], 90.00th=[34341], 95.00th=[36963], 00:34:24.229 | 99.00th=[40109], 99.50th=[41681], 99.90th=[43254], 99.95th=[54264], 00:34:24.229 | 99.99th=[54264] 00:34:24.229 bw ( KiB/s): min= 1883, max= 2688, per=3.85%, avg=2368.74, stdev=312.46, samples=19 00:34:24.229 iops : min= 470, max= 672, avg=592.11, stdev=78.23, samples=19 00:34:24.229 lat (msec) : 10=0.70%, 20=3.84%, 50=95.38%, 100=0.08% 00:34:24.229 cpu : usr=97.55%, sys=2.08%, ctx=23, majf=0, minf=74 00:34:24.229 IO depths : 1=1.7%, 2=3.4%, 4=13.9%, 8=68.7%, 16=12.4%, 32=0.0%, >=64=0.0% 00:34:24.229 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.229 complete : 0=0.0%, 4=92.2%, 8=3.7%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.229 issued rwts: total=5969,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:24.229 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:24.229 filename2: (groupid=0, jobs=1): err= 0: pid=3435026: Wed Jul 24 23:19:56 2024 00:34:24.229 read: IOPS=631, BW=2524KiB/s (2585kB/s)(24.7MiB/10007msec) 00:34:24.229 slat 
(usec): min=6, max=113, avg=22.87, stdev=18.47 00:34:24.229 clat (usec): min=7179, max=48576, avg=25219.57, stdev=4290.41 00:34:24.229 lat (usec): min=7192, max=48596, avg=25242.44, stdev=4290.10 00:34:24.229 clat percentiles (usec): 00:34:24.229 | 1.00th=[12649], 5.00th=[19792], 10.00th=[23200], 20.00th=[23987], 00:34:24.229 | 30.00th=[24249], 40.00th=[24511], 50.00th=[24773], 60.00th=[24773], 00:34:24.229 | 70.00th=[25035], 80.00th=[25822], 90.00th=[28967], 95.00th=[33817], 00:34:24.229 | 99.00th=[42206], 99.50th=[42730], 99.90th=[45351], 99.95th=[48497], 00:34:24.229 | 99.99th=[48497] 00:34:24.229 bw ( KiB/s): min= 2360, max= 2656, per=4.09%, avg=2514.84, stdev=78.80, samples=19 00:34:24.229 iops : min= 590, max= 664, avg=628.68, stdev=19.70, samples=19 00:34:24.229 lat (msec) : 10=0.71%, 20=4.69%, 50=94.60% 00:34:24.229 cpu : usr=97.45%, sys=2.16%, ctx=27, majf=0, minf=50 00:34:24.229 IO depths : 1=0.7%, 2=1.5%, 4=9.0%, 8=74.5%, 16=14.4%, 32=0.0%, >=64=0.0% 00:34:24.229 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.229 complete : 0=0.0%, 4=91.0%, 8=5.7%, 16=3.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.229 issued rwts: total=6315,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:24.229 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:24.229 filename2: (groupid=0, jobs=1): err= 0: pid=3435027: Wed Jul 24 23:19:56 2024 00:34:24.229 read: IOPS=631, BW=2526KiB/s (2587kB/s)(24.7MiB/10003msec) 00:34:24.229 slat (usec): min=6, max=132, avg=25.52, stdev=20.31 00:34:24.229 clat (usec): min=5188, max=49162, avg=25180.77, stdev=4137.12 00:34:24.229 lat (usec): min=5200, max=49175, avg=25206.29, stdev=4136.64 00:34:24.229 clat percentiles (usec): 00:34:24.229 | 1.00th=[14091], 5.00th=[19792], 10.00th=[23200], 20.00th=[23987], 00:34:24.229 | 30.00th=[24249], 40.00th=[24511], 50.00th=[24773], 60.00th=[24773], 00:34:24.229 | 70.00th=[25035], 80.00th=[25560], 90.00th=[28705], 95.00th=[33162], 00:34:24.229 | 99.00th=[42206], 
99.50th=[43254], 99.90th=[49021], 99.95th=[49021], 00:34:24.229 | 99.99th=[49021] 00:34:24.229 bw ( KiB/s): min= 2276, max= 2640, per=4.09%, avg=2515.16, stdev=76.83, samples=19 00:34:24.229 iops : min= 569, max= 660, avg=628.68, stdev=19.23, samples=19 00:34:24.229 lat (msec) : 10=0.43%, 20=4.65%, 50=94.92% 00:34:24.229 cpu : usr=97.69%, sys=1.92%, ctx=37, majf=0, minf=80 00:34:24.229 IO depths : 1=0.9%, 2=1.8%, 4=7.7%, 8=74.9%, 16=14.7%, 32=0.0%, >=64=0.0% 00:34:24.229 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.229 complete : 0=0.0%, 4=90.5%, 8=6.7%, 16=2.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.229 issued rwts: total=6317,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:24.229 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:24.229 filename2: (groupid=0, jobs=1): err= 0: pid=3435028: Wed Jul 24 23:19:56 2024 00:34:24.229 read: IOPS=643, BW=2572KiB/s (2634kB/s)(25.1MiB/10012msec) 00:34:24.229 slat (usec): min=6, max=123, avg=31.49, stdev=20.83 00:34:24.229 clat (usec): min=12896, max=42700, avg=24633.18, stdev=2140.28 00:34:24.229 lat (usec): min=12908, max=42720, avg=24664.67, stdev=2139.50 00:34:24.229 clat percentiles (usec): 00:34:24.229 | 1.00th=[17695], 5.00th=[22938], 10.00th=[23462], 20.00th=[23987], 00:34:24.229 | 30.00th=[24249], 40.00th=[24249], 50.00th=[24511], 60.00th=[24773], 00:34:24.230 | 70.00th=[24773], 80.00th=[25035], 90.00th=[25560], 95.00th=[26870], 00:34:24.230 | 99.00th=[34341], 99.50th=[37487], 99.90th=[40633], 99.95th=[41157], 00:34:24.230 | 99.99th=[42730] 00:34:24.230 bw ( KiB/s): min= 2432, max= 2688, per=4.18%, avg=2568.95, stdev=49.85, samples=19 00:34:24.230 iops : min= 608, max= 672, avg=642.21, stdev=12.49, samples=19 00:34:24.230 lat (msec) : 20=2.02%, 50=97.98% 00:34:24.230 cpu : usr=97.70%, sys=1.93%, ctx=22, majf=0, minf=68 00:34:24.230 IO depths : 1=3.4%, 2=7.5%, 4=20.9%, 8=58.3%, 16=9.9%, 32=0.0%, >=64=0.0% 00:34:24.230 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:34:24.230 complete : 0=0.0%, 4=93.7%, 8=0.9%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.230 issued rwts: total=6438,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:24.230 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:24.230 filename2: (groupid=0, jobs=1): err= 0: pid=3435029: Wed Jul 24 23:19:56 2024 00:34:24.230 read: IOPS=641, BW=2564KiB/s (2626kB/s)(25.1MiB/10012msec) 00:34:24.230 slat (usec): min=6, max=131, avg=28.11, stdev=20.82 00:34:24.230 clat (usec): min=10646, max=51906, avg=24748.35, stdev=3208.62 00:34:24.230 lat (usec): min=10668, max=51930, avg=24776.46, stdev=3208.78 00:34:24.230 clat percentiles (usec): 00:34:24.230 | 1.00th=[14615], 5.00th=[20841], 10.00th=[23200], 20.00th=[23725], 00:34:24.230 | 30.00th=[23987], 40.00th=[24249], 50.00th=[24511], 60.00th=[24773], 00:34:24.230 | 70.00th=[25035], 80.00th=[25297], 90.00th=[26346], 95.00th=[29230], 00:34:24.230 | 99.00th=[38536], 99.50th=[41157], 99.90th=[49021], 99.95th=[51643], 00:34:24.230 | 99.99th=[52167] 00:34:24.230 bw ( KiB/s): min= 2432, max= 2736, per=4.16%, avg=2557.74, stdev=75.95, samples=19 00:34:24.230 iops : min= 608, max= 684, avg=639.37, stdev=19.02, samples=19 00:34:24.230 lat (msec) : 20=4.07%, 50=95.86%, 100=0.08% 00:34:24.230 cpu : usr=97.63%, sys=2.02%, ctx=24, majf=0, minf=76 00:34:24.230 IO depths : 1=2.2%, 2=4.8%, 4=14.3%, 8=67.0%, 16=11.7%, 32=0.0%, >=64=0.0% 00:34:24.230 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.230 complete : 0=0.0%, 4=91.9%, 8=3.6%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.230 issued rwts: total=6418,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:24.230 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:24.230 filename2: (groupid=0, jobs=1): err= 0: pid=3435030: Wed Jul 24 23:19:56 2024 00:34:24.230 read: IOPS=645, BW=2583KiB/s (2644kB/s)(25.2MiB/10012msec) 00:34:24.230 slat (usec): min=6, max=127, avg=30.83, stdev=20.94 00:34:24.230 clat (usec): min=11605, 
max=43850, avg=24545.90, stdev=2319.08 00:34:24.230 lat (usec): min=11613, max=43863, avg=24576.73, stdev=2319.06 00:34:24.230 clat percentiles (usec): 00:34:24.230 | 1.00th=[16581], 5.00th=[22676], 10.00th=[23462], 20.00th=[23725], 00:34:24.230 | 30.00th=[23987], 40.00th=[24249], 50.00th=[24511], 60.00th=[24773], 00:34:24.230 | 70.00th=[24773], 80.00th=[25035], 90.00th=[25560], 95.00th=[26608], 00:34:24.230 | 99.00th=[33817], 99.50th=[38536], 99.90th=[42730], 99.95th=[43779], 00:34:24.230 | 99.99th=[43779] 00:34:24.230 bw ( KiB/s): min= 2432, max= 2688, per=4.20%, avg=2579.89, stdev=60.90, samples=19 00:34:24.230 iops : min= 608, max= 672, avg=644.95, stdev=15.24, samples=19 00:34:24.230 lat (msec) : 20=2.88%, 50=97.12% 00:34:24.230 cpu : usr=97.53%, sys=2.06%, ctx=21, majf=0, minf=70 00:34:24.230 IO depths : 1=4.2%, 2=8.8%, 4=21.5%, 8=56.8%, 16=8.6%, 32=0.0%, >=64=0.0% 00:34:24.230 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.230 complete : 0=0.0%, 4=93.6%, 8=0.7%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.230 issued rwts: total=6464,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:24.230 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:24.230 filename2: (groupid=0, jobs=1): err= 0: pid=3435031: Wed Jul 24 23:19:56 2024 00:34:24.230 read: IOPS=651, BW=2606KiB/s (2668kB/s)(25.5MiB/10021msec) 00:34:24.230 slat (usec): min=3, max=604, avg=33.82, stdev=17.40 00:34:24.230 clat (usec): min=4013, max=39484, avg=24274.36, stdev=2774.57 00:34:24.230 lat (usec): min=4021, max=39491, avg=24308.18, stdev=2776.02 00:34:24.230 clat percentiles (usec): 00:34:24.230 | 1.00th=[ 7177], 5.00th=[22676], 10.00th=[23462], 20.00th=[23987], 00:34:24.230 | 30.00th=[24249], 40.00th=[24249], 50.00th=[24511], 60.00th=[24773], 00:34:24.230 | 70.00th=[24773], 80.00th=[25035], 90.00th=[25560], 95.00th=[26084], 00:34:24.230 | 99.00th=[32113], 99.50th=[33817], 99.90th=[36963], 99.95th=[38011], 00:34:24.230 | 99.99th=[39584] 00:34:24.230 bw ( 
KiB/s): min= 2554, max= 3072, per=4.23%, avg=2600.11, stdev=120.37, samples=19 00:34:24.230 iops : min= 638, max= 768, avg=650.00, stdev=30.10, samples=19 00:34:24.230 lat (msec) : 10=1.23%, 20=2.27%, 50=96.51% 00:34:24.230 cpu : usr=97.54%, sys=1.88%, ctx=138, majf=0, minf=122 00:34:24.230 IO depths : 1=5.0%, 2=10.4%, 4=22.6%, 8=54.0%, 16=8.0%, 32=0.0%, >=64=0.0% 00:34:24.230 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.230 complete : 0=0.0%, 4=93.9%, 8=0.5%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.230 issued rwts: total=6528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:24.230 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:24.230 filename2: (groupid=0, jobs=1): err= 0: pid=3435032: Wed Jul 24 23:19:56 2024 00:34:24.230 read: IOPS=667, BW=2668KiB/s (2732kB/s)(26.1MiB/10001msec) 00:34:24.230 slat (usec): min=3, max=121, avg=19.35, stdev=15.91 00:34:24.230 clat (usec): min=3094, max=44562, avg=23842.03, stdev=3577.48 00:34:24.230 lat (usec): min=3102, max=44574, avg=23861.38, stdev=3579.34 00:34:24.230 clat percentiles (usec): 00:34:24.230 | 1.00th=[ 5932], 5.00th=[16319], 10.00th=[22676], 20.00th=[23725], 00:34:24.230 | 30.00th=[23987], 40.00th=[24249], 50.00th=[24511], 60.00th=[24773], 00:34:24.230 | 70.00th=[24773], 80.00th=[25035], 90.00th=[25560], 95.00th=[26084], 00:34:24.230 | 99.00th=[30016], 99.50th=[33817], 99.90th=[44303], 99.95th=[44303], 00:34:24.230 | 99.99th=[44303] 00:34:24.230 bw ( KiB/s): min= 2554, max= 3184, per=4.34%, avg=2666.74, stdev=165.57, samples=19 00:34:24.230 iops : min= 638, max= 796, avg=666.63, stdev=41.43, samples=19 00:34:24.230 lat (msec) : 4=0.09%, 10=1.99%, 20=4.53%, 50=93.39% 00:34:24.230 cpu : usr=97.22%, sys=2.30%, ctx=40, majf=0, minf=83 00:34:24.230 IO depths : 1=4.8%, 2=9.7%, 4=20.9%, 8=56.5%, 16=8.1%, 32=0.0%, >=64=0.0% 00:34:24.230 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.230 complete : 0=0.0%, 4=93.3%, 8=1.3%, 16=5.4%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.230 issued rwts: total=6671,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:24.230 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:24.230 filename2: (groupid=0, jobs=1): err= 0: pid=3435033: Wed Jul 24 23:19:56 2024 00:34:24.230 read: IOPS=627, BW=2508KiB/s (2568kB/s)(24.5MiB/10004msec) 00:34:24.230 slat (usec): min=5, max=124, avg=22.07, stdev=18.67 00:34:24.230 clat (usec): min=4397, max=45283, avg=25386.56, stdev=4605.91 00:34:24.230 lat (usec): min=4404, max=45295, avg=25408.64, stdev=4605.62 00:34:24.230 clat percentiles (usec): 00:34:24.230 | 1.00th=[11469], 5.00th=[18744], 10.00th=[23200], 20.00th=[23987], 00:34:24.230 | 30.00th=[24249], 40.00th=[24511], 50.00th=[24773], 60.00th=[25035], 00:34:24.230 | 70.00th=[25297], 80.00th=[26084], 90.00th=[30278], 95.00th=[35390], 00:34:24.230 | 99.00th=[41157], 99.50th=[42730], 99.90th=[43779], 99.95th=[45351], 00:34:24.230 | 99.99th=[45351] 00:34:24.230 bw ( KiB/s): min= 2188, max= 2656, per=4.09%, avg=2515.16, stdev=101.78, samples=19 00:34:24.230 iops : min= 547, max= 664, avg=628.68, stdev=25.53, samples=19 00:34:24.230 lat (msec) : 10=0.65%, 20=5.13%, 50=94.21% 00:34:24.230 cpu : usr=97.55%, sys=2.09%, ctx=19, majf=0, minf=90 00:34:24.230 IO depths : 1=0.6%, 2=1.3%, 4=7.7%, 8=75.4%, 16=15.0%, 32=0.0%, >=64=0.0% 00:34:24.230 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.230 complete : 0=0.0%, 4=90.6%, 8=6.7%, 16=2.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.230 issued rwts: total=6273,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:24.230 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:24.230 00:34:24.230 Run status group 0 (all jobs): 00:34:24.230 READ: bw=60.0MiB/s (62.9MB/s), 2387KiB/s-2668KiB/s (2444kB/s-2732kB/s), io=601MiB (631MB), run=10001-10022msec 00:34:24.490 23:19:56 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:34:24.490 23:19:56 -- target/dif.sh@43 -- # local sub 00:34:24.491 23:19:56 -- target/dif.sh@45 
-- # for sub in "$@" 00:34:24.491 23:19:56 -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:24.491 23:19:56 -- target/dif.sh@36 -- # local sub_id=0 00:34:24.491 23:19:56 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:24.491 23:19:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:24.491 23:19:56 -- common/autotest_common.sh@10 -- # set +x 00:34:24.491 23:19:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:24.491 23:19:56 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:24.491 23:19:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:24.491 23:19:56 -- common/autotest_common.sh@10 -- # set +x 00:34:24.491 23:19:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:24.491 23:19:56 -- target/dif.sh@45 -- # for sub in "$@" 00:34:24.491 23:19:56 -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:24.491 23:19:56 -- target/dif.sh@36 -- # local sub_id=1 00:34:24.491 23:19:56 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:24.491 23:19:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:24.491 23:19:56 -- common/autotest_common.sh@10 -- # set +x 00:34:24.491 23:19:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:24.491 23:19:56 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:24.491 23:19:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:24.491 23:19:56 -- common/autotest_common.sh@10 -- # set +x 00:34:24.491 23:19:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:24.491 23:19:56 -- target/dif.sh@45 -- # for sub in "$@" 00:34:24.491 23:19:56 -- target/dif.sh@46 -- # destroy_subsystem 2 00:34:24.491 23:19:56 -- target/dif.sh@36 -- # local sub_id=2 00:34:24.491 23:19:56 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:34:24.491 23:19:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:24.491 23:19:56 -- common/autotest_common.sh@10 -- 
# set +x 00:34:24.491 23:19:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:24.491 23:19:56 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:34:24.491 23:19:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:24.491 23:19:56 -- common/autotest_common.sh@10 -- # set +x 00:34:24.491 23:19:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:24.491 23:19:56 -- target/dif.sh@115 -- # NULL_DIF=1 00:34:24.491 23:19:56 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:34:24.491 23:19:56 -- target/dif.sh@115 -- # numjobs=2 00:34:24.491 23:19:56 -- target/dif.sh@115 -- # iodepth=8 00:34:24.491 23:19:56 -- target/dif.sh@115 -- # runtime=5 00:34:24.491 23:19:56 -- target/dif.sh@115 -- # files=1 00:34:24.491 23:19:56 -- target/dif.sh@117 -- # create_subsystems 0 1 00:34:24.491 23:19:56 -- target/dif.sh@28 -- # local sub 00:34:24.491 23:19:56 -- target/dif.sh@30 -- # for sub in "$@" 00:34:24.491 23:19:56 -- target/dif.sh@31 -- # create_subsystem 0 00:34:24.491 23:19:56 -- target/dif.sh@18 -- # local sub_id=0 00:34:24.491 23:19:56 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:24.491 23:19:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:24.491 23:19:56 -- common/autotest_common.sh@10 -- # set +x 00:34:24.491 bdev_null0 00:34:24.491 23:19:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:24.491 23:19:56 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:24.491 23:19:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:24.491 23:19:56 -- common/autotest_common.sh@10 -- # set +x 00:34:24.491 23:19:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:24.491 23:19:56 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:24.491 23:19:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:24.491 23:19:56 -- 
common/autotest_common.sh@10 -- # set +x 00:34:24.491 23:19:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:24.491 23:19:56 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:24.491 23:19:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:24.491 23:19:56 -- common/autotest_common.sh@10 -- # set +x 00:34:24.491 [2024-07-24 23:19:56.846940] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:24.491 23:19:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:24.491 23:19:56 -- target/dif.sh@30 -- # for sub in "$@" 00:34:24.491 23:19:56 -- target/dif.sh@31 -- # create_subsystem 1 00:34:24.491 23:19:56 -- target/dif.sh@18 -- # local sub_id=1 00:34:24.491 23:19:56 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:34:24.491 23:19:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:24.491 23:19:56 -- common/autotest_common.sh@10 -- # set +x 00:34:24.491 bdev_null1 00:34:24.491 23:19:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:24.491 23:19:56 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:24.491 23:19:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:24.491 23:19:56 -- common/autotest_common.sh@10 -- # set +x 00:34:24.491 23:19:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:24.491 23:19:56 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:24.491 23:19:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:24.491 23:19:56 -- common/autotest_common.sh@10 -- # set +x 00:34:24.491 23:19:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:24.491 23:19:56 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:24.491 23:19:56 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:34:24.491 23:19:56 -- common/autotest_common.sh@10 -- # set +x 00:34:24.491 23:19:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:24.491 23:19:56 -- target/dif.sh@118 -- # fio /dev/fd/62 00:34:24.491 23:19:56 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:24.491 23:19:56 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:24.491 23:19:56 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:34:24.491 23:19:56 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:34:24.491 23:19:56 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:24.491 23:19:56 -- common/autotest_common.sh@1318 -- # local sanitizers 00:34:24.491 23:19:56 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:34:24.491 23:19:56 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:24.491 23:19:56 -- common/autotest_common.sh@1320 -- # shift 00:34:24.491 23:19:56 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:34:24.491 23:19:56 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:34:24.491 23:19:56 -- nvmf/common.sh@520 -- # config=() 00:34:24.491 23:19:56 -- target/dif.sh@82 -- # gen_fio_conf 00:34:24.491 23:19:56 -- nvmf/common.sh@520 -- # local subsystem config 00:34:24.491 23:19:56 -- target/dif.sh@54 -- # local file 00:34:24.491 23:19:56 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:34:24.491 23:19:56 -- target/dif.sh@56 -- # cat 00:34:24.491 23:19:56 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:34:24.491 { 00:34:24.491 "params": { 00:34:24.491 "name": "Nvme$subsystem", 00:34:24.491 "trtype": "$TEST_TRANSPORT", 00:34:24.491 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:24.491 
"adrfam": "ipv4", 00:34:24.491 "trsvcid": "$NVMF_PORT", 00:34:24.491 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:24.491 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:24.491 "hdgst": ${hdgst:-false}, 00:34:24.491 "ddgst": ${ddgst:-false} 00:34:24.491 }, 00:34:24.491 "method": "bdev_nvme_attach_controller" 00:34:24.491 } 00:34:24.491 EOF 00:34:24.491 )") 00:34:24.491 23:19:56 -- nvmf/common.sh@542 -- # cat 00:34:24.491 23:19:56 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:24.491 23:19:56 -- common/autotest_common.sh@1324 -- # grep libasan 00:34:24.491 23:19:56 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:34:24.491 23:19:56 -- target/dif.sh@72 -- # (( file = 1 )) 00:34:24.491 23:19:56 -- target/dif.sh@72 -- # (( file <= files )) 00:34:24.491 23:19:56 -- target/dif.sh@73 -- # cat 00:34:24.491 23:19:56 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:34:24.491 23:19:56 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:34:24.491 { 00:34:24.491 "params": { 00:34:24.491 "name": "Nvme$subsystem", 00:34:24.491 "trtype": "$TEST_TRANSPORT", 00:34:24.491 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:24.491 "adrfam": "ipv4", 00:34:24.491 "trsvcid": "$NVMF_PORT", 00:34:24.491 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:24.491 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:24.491 "hdgst": ${hdgst:-false}, 00:34:24.491 "ddgst": ${ddgst:-false} 00:34:24.491 }, 00:34:24.491 "method": "bdev_nvme_attach_controller" 00:34:24.491 } 00:34:24.491 EOF 00:34:24.491 )") 00:34:24.491 23:19:56 -- target/dif.sh@72 -- # (( file++ )) 00:34:24.491 23:19:56 -- target/dif.sh@72 -- # (( file <= files )) 00:34:24.491 23:19:56 -- nvmf/common.sh@542 -- # cat 00:34:24.491 23:19:56 -- nvmf/common.sh@544 -- # jq . 
00:34:24.491 23:19:56 -- nvmf/common.sh@545 -- # IFS=, 00:34:24.491 23:19:56 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:34:24.491 "params": { 00:34:24.491 "name": "Nvme0", 00:34:24.491 "trtype": "tcp", 00:34:24.491 "traddr": "10.0.0.2", 00:34:24.491 "adrfam": "ipv4", 00:34:24.491 "trsvcid": "4420", 00:34:24.491 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:24.491 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:24.491 "hdgst": false, 00:34:24.491 "ddgst": false 00:34:24.491 }, 00:34:24.491 "method": "bdev_nvme_attach_controller" 00:34:24.491 },{ 00:34:24.491 "params": { 00:34:24.491 "name": "Nvme1", 00:34:24.491 "trtype": "tcp", 00:34:24.491 "traddr": "10.0.0.2", 00:34:24.491 "adrfam": "ipv4", 00:34:24.491 "trsvcid": "4420", 00:34:24.491 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:24.492 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:24.492 "hdgst": false, 00:34:24.492 "ddgst": false 00:34:24.492 }, 00:34:24.492 "method": "bdev_nvme_attach_controller" 00:34:24.492 }' 00:34:24.769 23:19:56 -- common/autotest_common.sh@1324 -- # asan_lib= 00:34:24.769 23:19:56 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:34:24.769 23:19:56 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:34:24.769 23:19:56 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:24.769 23:19:56 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:34:24.769 23:19:56 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:34:24.769 23:19:56 -- common/autotest_common.sh@1324 -- # asan_lib= 00:34:24.769 23:19:56 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:34:24.769 23:19:56 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:24.769 23:19:56 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:25.032 filename0: (g=0): rw=randread, bs=(R) 
8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:34:25.032 ... 00:34:25.032 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:34:25.032 ... 00:34:25.032 fio-3.35 00:34:25.032 Starting 4 threads 00:34:25.032 EAL: No free 2048 kB hugepages reported on node 1 00:34:25.596 [2024-07-24 23:19:57.987463] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:34:25.596 [2024-07-24 23:19:57.987518] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:34:30.892 00:34:30.892 filename0: (groupid=0, jobs=1): err= 0: pid=3437033: Wed Jul 24 23:20:03 2024 00:34:30.892 read: IOPS=2786, BW=21.8MiB/s (22.8MB/s)(109MiB/5001msec) 00:34:30.892 slat (usec): min=5, max=178, avg= 8.61, stdev= 3.09 00:34:30.892 clat (usec): min=1277, max=44564, avg=2849.30, stdev=1072.84 00:34:30.892 lat (usec): min=1284, max=44583, avg=2857.91, stdev=1072.85 00:34:30.892 clat percentiles (usec): 00:34:30.892 | 1.00th=[ 1991], 5.00th=[ 2212], 10.00th=[ 2376], 20.00th=[ 2540], 00:34:30.892 | 30.00th=[ 2638], 40.00th=[ 2769], 50.00th=[ 2802], 60.00th=[ 2835], 00:34:30.892 | 70.00th=[ 2966], 80.00th=[ 3097], 90.00th=[ 3326], 95.00th=[ 3556], 00:34:30.892 | 99.00th=[ 3982], 99.50th=[ 4113], 99.90th=[ 4621], 99.95th=[44303], 00:34:30.892 | 99.99th=[44303] 00:34:30.892 bw ( KiB/s): min=20857, max=22976, per=24.75%, avg=22358.33, stdev=621.74, samples=9 00:34:30.892 iops : min= 2607, max= 2872, avg=2794.78, stdev=77.76, samples=9 00:34:30.892 lat (msec) : 2=1.01%, 4=98.07%, 10=0.86%, 50=0.06% 00:34:30.892 cpu : usr=93.64%, sys=6.08%, ctx=7, majf=0, minf=0 00:34:30.892 IO depths : 1=0.2%, 2=1.2%, 4=66.9%, 8=31.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:30.892 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:30.892 complete : 0=0.0%, 4=95.7%, 8=4.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:34:30.892 issued rwts: total=13935,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:30.892 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:30.892 filename0: (groupid=0, jobs=1): err= 0: pid=3437034: Wed Jul 24 23:20:03 2024 00:34:30.892 read: IOPS=2847, BW=22.2MiB/s (23.3MB/s)(111MiB/5001msec) 00:34:30.892 slat (nsec): min=5758, max=29699, avg=8320.62, stdev=2714.31 00:34:30.892 clat (usec): min=681, max=4775, avg=2787.12, stdev=456.68 00:34:30.892 lat (usec): min=688, max=4781, avg=2795.45, stdev=456.51 00:34:30.892 clat percentiles (usec): 00:34:30.892 | 1.00th=[ 1500], 5.00th=[ 2147], 10.00th=[ 2278], 20.00th=[ 2474], 00:34:30.892 | 30.00th=[ 2573], 40.00th=[ 2704], 50.00th=[ 2769], 60.00th=[ 2802], 00:34:30.892 | 70.00th=[ 2868], 80.00th=[ 3097], 90.00th=[ 3392], 95.00th=[ 3621], 00:34:30.892 | 99.00th=[ 4015], 99.50th=[ 4080], 99.90th=[ 4293], 99.95th=[ 4359], 00:34:30.892 | 99.99th=[ 4555] 00:34:30.892 bw ( KiB/s): min=21968, max=24448, per=25.10%, avg=22677.33, stdev=721.86, samples=9 00:34:30.892 iops : min= 2746, max= 3056, avg=2834.67, stdev=90.23, samples=9 00:34:30.892 lat (usec) : 750=0.01% 00:34:30.892 lat (msec) : 2=2.47%, 4=96.56%, 10=0.96% 00:34:30.892 cpu : usr=93.40%, sys=6.26%, ctx=6, majf=0, minf=0 00:34:30.892 IO depths : 1=0.1%, 2=1.5%, 4=67.3%, 8=31.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:30.892 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:30.892 complete : 0=0.0%, 4=95.2%, 8=4.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:30.892 issued rwts: total=14242,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:30.892 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:30.892 filename1: (groupid=0, jobs=1): err= 0: pid=3437035: Wed Jul 24 23:20:03 2024 00:34:30.892 read: IOPS=2878, BW=22.5MiB/s (23.6MB/s)(113MiB/5002msec) 00:34:30.892 slat (nsec): min=5737, max=27123, avg=8659.30, stdev=2751.32 00:34:30.892 clat (usec): min=1288, max=47271, avg=2755.27, stdev=1120.83 00:34:30.892 lat (usec): min=1294, 
max=47291, avg=2763.93, stdev=1120.78 00:34:30.892 clat percentiles (usec): 00:34:30.892 | 1.00th=[ 1991], 5.00th=[ 2180], 10.00th=[ 2278], 20.00th=[ 2409], 00:34:30.892 | 30.00th=[ 2507], 40.00th=[ 2606], 50.00th=[ 2737], 60.00th=[ 2769], 00:34:30.892 | 70.00th=[ 2835], 80.00th=[ 2999], 90.00th=[ 3261], 95.00th=[ 3523], 00:34:30.892 | 99.00th=[ 3916], 99.50th=[ 4015], 99.90th=[ 4293], 99.95th=[47449], 00:34:30.892 | 99.99th=[47449] 00:34:30.892 bw ( KiB/s): min=20704, max=24880, per=25.41%, avg=22961.78, stdev=1129.40, samples=9 00:34:30.892 iops : min= 2588, max= 3110, avg=2870.22, stdev=141.18, samples=9 00:34:30.892 lat (msec) : 2=1.11%, 4=98.33%, 10=0.51%, 50=0.06% 00:34:30.892 cpu : usr=93.84%, sys=5.86%, ctx=9, majf=0, minf=0 00:34:30.892 IO depths : 1=0.2%, 2=1.9%, 4=68.0%, 8=29.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:30.892 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:30.892 complete : 0=0.0%, 4=94.3%, 8=5.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:30.892 issued rwts: total=14400,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:30.892 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:30.892 filename1: (groupid=0, jobs=1): err= 0: pid=3437036: Wed Jul 24 23:20:03 2024 00:34:30.892 read: IOPS=2781, BW=21.7MiB/s (22.8MB/s)(109MiB/5002msec) 00:34:30.892 slat (nsec): min=3721, max=25050, avg=8574.99, stdev=2656.76 00:34:30.892 clat (usec): min=1387, max=46645, avg=2853.92, stdev=1126.24 00:34:30.892 lat (usec): min=1393, max=46664, avg=2862.49, stdev=1126.18 00:34:30.892 clat percentiles (usec): 00:34:30.892 | 1.00th=[ 1942], 5.00th=[ 2212], 10.00th=[ 2376], 20.00th=[ 2540], 00:34:30.892 | 30.00th=[ 2638], 40.00th=[ 2769], 50.00th=[ 2802], 60.00th=[ 2835], 00:34:30.892 | 70.00th=[ 2966], 80.00th=[ 3097], 90.00th=[ 3326], 95.00th=[ 3589], 00:34:30.892 | 99.00th=[ 4015], 99.50th=[ 4178], 99.90th=[ 4555], 99.95th=[46400], 00:34:30.892 | 99.99th=[46400] 00:34:30.892 bw ( KiB/s): min=20592, max=23056, per=24.71%, 
avg=22323.56, stdev=737.16, samples=9 00:34:30.892 iops : min= 2574, max= 2882, avg=2790.44, stdev=92.15, samples=9 00:34:30.892 lat (msec) : 2=1.55%, 4=97.16%, 10=1.24%, 50=0.06% 00:34:30.892 cpu : usr=94.52%, sys=5.18%, ctx=7, majf=0, minf=0 00:34:30.892 IO depths : 1=0.1%, 2=1.1%, 4=67.1%, 8=31.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:30.892 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:30.892 complete : 0=0.0%, 4=95.7%, 8=4.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:30.892 issued rwts: total=13915,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:30.892 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:30.892 00:34:30.892 Run status group 0 (all jobs): 00:34:30.892 READ: bw=88.2MiB/s (92.5MB/s), 21.7MiB/s-22.5MiB/s (22.8MB/s-23.6MB/s), io=441MiB (463MB), run=5001-5002msec 00:34:31.149 23:20:03 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:34:31.149 23:20:03 -- target/dif.sh@43 -- # local sub 00:34:31.149 23:20:03 -- target/dif.sh@45 -- # for sub in "$@" 00:34:31.149 23:20:03 -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:31.149 23:20:03 -- target/dif.sh@36 -- # local sub_id=0 00:34:31.149 23:20:03 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:31.149 23:20:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:31.149 23:20:03 -- common/autotest_common.sh@10 -- # set +x 00:34:31.149 23:20:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:31.149 23:20:03 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:31.149 23:20:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:31.149 23:20:03 -- common/autotest_common.sh@10 -- # set +x 00:34:31.149 23:20:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:31.149 23:20:03 -- target/dif.sh@45 -- # for sub in "$@" 00:34:31.149 23:20:03 -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:31.149 23:20:03 -- target/dif.sh@36 -- # local sub_id=1 00:34:31.149 23:20:03 -- target/dif.sh@38 -- # 
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:31.149 23:20:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:31.149 23:20:03 -- common/autotest_common.sh@10 -- # set +x 00:34:31.149 23:20:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:31.149 23:20:03 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:31.149 23:20:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:31.149 23:20:03 -- common/autotest_common.sh@10 -- # set +x 00:34:31.149 23:20:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:31.149 00:34:31.149 real 0m24.536s 00:34:31.149 user 4m54.475s 00:34:31.149 sys 0m8.398s 00:34:31.149 23:20:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:31.149 23:20:03 -- common/autotest_common.sh@10 -- # set +x 00:34:31.149 ************************************ 00:34:31.149 END TEST fio_dif_rand_params 00:34:31.149 ************************************ 00:34:31.149 23:20:03 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:34:31.149 23:20:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:34:31.149 23:20:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:34:31.149 23:20:03 -- common/autotest_common.sh@10 -- # set +x 00:34:31.149 ************************************ 00:34:31.149 START TEST fio_dif_digest 00:34:31.149 ************************************ 00:34:31.149 23:20:03 -- common/autotest_common.sh@1104 -- # fio_dif_digest 00:34:31.149 23:20:03 -- target/dif.sh@123 -- # local NULL_DIF 00:34:31.149 23:20:03 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:34:31.149 23:20:03 -- target/dif.sh@125 -- # local hdgst ddgst 00:34:31.149 23:20:03 -- target/dif.sh@127 -- # NULL_DIF=3 00:34:31.149 23:20:03 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:34:31.149 23:20:03 -- target/dif.sh@127 -- # numjobs=3 00:34:31.149 23:20:03 -- target/dif.sh@127 -- # iodepth=3 00:34:31.149 23:20:03 -- target/dif.sh@127 -- # runtime=10 00:34:31.149 
23:20:03 -- target/dif.sh@128 -- # hdgst=true 00:34:31.149 23:20:03 -- target/dif.sh@128 -- # ddgst=true 00:34:31.149 23:20:03 -- target/dif.sh@130 -- # create_subsystems 0 00:34:31.149 23:20:03 -- target/dif.sh@28 -- # local sub 00:34:31.149 23:20:03 -- target/dif.sh@30 -- # for sub in "$@" 00:34:31.149 23:20:03 -- target/dif.sh@31 -- # create_subsystem 0 00:34:31.149 23:20:03 -- target/dif.sh@18 -- # local sub_id=0 00:34:31.149 23:20:03 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:34:31.149 23:20:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:31.149 23:20:03 -- common/autotest_common.sh@10 -- # set +x 00:34:31.149 bdev_null0 00:34:31.149 23:20:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:31.149 23:20:03 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:31.149 23:20:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:31.149 23:20:03 -- common/autotest_common.sh@10 -- # set +x 00:34:31.149 23:20:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:31.149 23:20:03 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:31.149 23:20:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:31.149 23:20:03 -- common/autotest_common.sh@10 -- # set +x 00:34:31.149 23:20:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:31.149 23:20:03 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:31.149 23:20:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:31.149 23:20:03 -- common/autotest_common.sh@10 -- # set +x 00:34:31.149 [2024-07-24 23:20:03.466636] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:31.149 23:20:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:31.149 23:20:03 -- target/dif.sh@131 -- 
# fio /dev/fd/62 00:34:31.149 23:20:03 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:34:31.149 23:20:03 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:31.149 23:20:03 -- nvmf/common.sh@520 -- # config=() 00:34:31.149 23:20:03 -- nvmf/common.sh@520 -- # local subsystem config 00:34:31.149 23:20:03 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:34:31.149 23:20:03 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:31.149 23:20:03 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:34:31.149 { 00:34:31.149 "params": { 00:34:31.149 "name": "Nvme$subsystem", 00:34:31.149 "trtype": "$TEST_TRANSPORT", 00:34:31.149 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:31.149 "adrfam": "ipv4", 00:34:31.149 "trsvcid": "$NVMF_PORT", 00:34:31.149 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:31.149 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:31.149 "hdgst": ${hdgst:-false}, 00:34:31.149 "ddgst": ${ddgst:-false} 00:34:31.149 }, 00:34:31.149 "method": "bdev_nvme_attach_controller" 00:34:31.149 } 00:34:31.149 EOF 00:34:31.149 )") 00:34:31.149 23:20:03 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:31.149 23:20:03 -- target/dif.sh@82 -- # gen_fio_conf 00:34:31.149 23:20:03 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:34:31.149 23:20:03 -- target/dif.sh@54 -- # local file 00:34:31.149 23:20:03 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:31.149 23:20:03 -- target/dif.sh@56 -- # cat 00:34:31.149 23:20:03 -- common/autotest_common.sh@1318 -- # local sanitizers 00:34:31.149 23:20:03 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:31.149 23:20:03 -- common/autotest_common.sh@1320 -- # shift 00:34:31.149 23:20:03 -- 
common/autotest_common.sh@1322 -- # local asan_lib= 00:34:31.149 23:20:03 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:34:31.149 23:20:03 -- nvmf/common.sh@542 -- # cat 00:34:31.149 23:20:03 -- target/dif.sh@72 -- # (( file = 1 )) 00:34:31.149 23:20:03 -- target/dif.sh@72 -- # (( file <= files )) 00:34:31.149 23:20:03 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:31.149 23:20:03 -- common/autotest_common.sh@1324 -- # grep libasan 00:34:31.149 23:20:03 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:34:31.150 23:20:03 -- nvmf/common.sh@544 -- # jq . 00:34:31.150 23:20:03 -- nvmf/common.sh@545 -- # IFS=, 00:34:31.150 23:20:03 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:34:31.150 "params": { 00:34:31.150 "name": "Nvme0", 00:34:31.150 "trtype": "tcp", 00:34:31.150 "traddr": "10.0.0.2", 00:34:31.150 "adrfam": "ipv4", 00:34:31.150 "trsvcid": "4420", 00:34:31.150 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:31.150 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:31.150 "hdgst": true, 00:34:31.150 "ddgst": true 00:34:31.150 }, 00:34:31.150 "method": "bdev_nvme_attach_controller" 00:34:31.150 }' 00:34:31.150 23:20:03 -- common/autotest_common.sh@1324 -- # asan_lib= 00:34:31.150 23:20:03 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:34:31.150 23:20:03 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:34:31.150 23:20:03 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:31.150 23:20:03 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:34:31.150 23:20:03 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:34:31.150 23:20:03 -- common/autotest_common.sh@1324 -- # asan_lib= 00:34:31.150 23:20:03 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:34:31.150 23:20:03 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:31.150 23:20:03 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:31.719 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:34:31.719 ... 00:34:31.719 fio-3.35 00:34:31.719 Starting 3 threads 00:34:31.719 EAL: No free 2048 kB hugepages reported on node 1 00:34:31.976 [2024-07-24 23:20:04.294930] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:34:31.976 [2024-07-24 23:20:04.294971] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:34:44.164 00:34:44.164 filename0: (groupid=0, jobs=1): err= 0: pid=3438258: Wed Jul 24 23:20:14 2024 00:34:44.164 read: IOPS=290, BW=36.3MiB/s (38.1MB/s)(364MiB/10007msec) 00:34:44.164 slat (nsec): min=6072, max=57287, avg=14352.12, stdev=5522.24 00:34:44.164 clat (usec): min=6922, max=13081, avg=10302.22, stdev=760.40 00:34:44.164 lat (usec): min=6930, max=13095, avg=10316.57, stdev=760.47 00:34:44.164 clat percentiles (usec): 00:34:44.164 | 1.00th=[ 8225], 5.00th=[ 9110], 10.00th=[ 9372], 20.00th=[ 9765], 00:34:44.164 | 30.00th=[ 9896], 40.00th=[10159], 50.00th=[10290], 60.00th=[10552], 00:34:44.164 | 70.00th=[10683], 80.00th=[10945], 90.00th=[11207], 95.00th=[11600], 00:34:44.164 | 99.00th=[12125], 99.50th=[12256], 99.90th=[12780], 99.95th=[12911], 00:34:44.164 | 99.99th=[13042] 00:34:44.164 bw ( KiB/s): min=36096, max=38400, per=34.29%, avg=37214.32, stdev=604.67, samples=19 00:34:44.164 iops : min= 282, max= 300, avg=290.74, stdev= 4.72, samples=19 00:34:44.164 lat (msec) : 10=32.79%, 20=67.21% 00:34:44.164 cpu : usr=93.48%, sys=6.18%, ctx=18, majf=0, minf=137 00:34:44.164 IO depths : 1=0.7%, 2=99.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:44.164 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:34:44.164 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:44.164 issued rwts: total=2909,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:44.164 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:44.164 filename0: (groupid=0, jobs=1): err= 0: pid=3438259: Wed Jul 24 23:20:14 2024 00:34:44.164 read: IOPS=274, BW=34.3MiB/s (35.9MB/s)(344MiB/10046msec) 00:34:44.164 slat (nsec): min=6031, max=34112, avg=12238.26, stdev=3869.02 00:34:44.164 clat (usec): min=8376, max=53725, avg=10914.23, stdev=2329.03 00:34:44.164 lat (usec): min=8387, max=53736, avg=10926.47, stdev=2328.89 00:34:44.164 clat percentiles (usec): 00:34:44.164 | 1.00th=[ 9110], 5.00th=[ 9634], 10.00th=[ 9896], 20.00th=[10159], 00:34:44.164 | 30.00th=[10421], 40.00th=[10552], 50.00th=[10814], 60.00th=[10945], 00:34:44.164 | 70.00th=[11076], 80.00th=[11469], 90.00th=[11731], 95.00th=[12125], 00:34:44.164 | 99.00th=[12911], 99.50th=[13304], 99.90th=[52691], 99.95th=[52691], 00:34:44.164 | 99.99th=[53740] 00:34:44.164 bw ( KiB/s): min=32512, max=36352, per=32.44%, avg=35210.26, stdev=1041.47, samples=19 00:34:44.164 iops : min= 254, max= 284, avg=275.05, stdev= 8.17, samples=19 00:34:44.164 lat (msec) : 10=14.27%, 20=85.44%, 50=0.07%, 100=0.22% 00:34:44.164 cpu : usr=92.75%, sys=6.92%, ctx=20, majf=0, minf=163 00:34:44.164 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:44.164 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:44.164 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:44.164 issued rwts: total=2754,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:44.164 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:44.164 filename0: (groupid=0, jobs=1): err= 0: pid=3438260: Wed Jul 24 23:20:14 2024 00:34:44.164 read: IOPS=285, BW=35.7MiB/s (37.4MB/s)(357MiB/10006msec) 00:34:44.164 slat (usec): min=6, max=115, avg=12.33, stdev= 4.38 00:34:44.164 clat 
(usec): min=6721, max=13361, avg=10498.65, stdev=803.05 00:34:44.164 lat (usec): min=6733, max=13373, avg=10510.98, stdev=803.08 00:34:44.164 clat percentiles (usec): 00:34:44.164 | 1.00th=[ 8291], 5.00th=[ 9241], 10.00th=[ 9634], 20.00th=[ 9896], 00:34:44.164 | 30.00th=[10159], 40.00th=[10290], 50.00th=[10552], 60.00th=[10683], 00:34:44.164 | 70.00th=[10945], 80.00th=[11076], 90.00th=[11469], 95.00th=[11863], 00:34:44.164 | 99.00th=[12387], 99.50th=[12780], 99.90th=[13173], 99.95th=[13173], 00:34:44.164 | 99.99th=[13304] 00:34:44.164 bw ( KiB/s): min=35328, max=37888, per=33.66%, avg=36527.16, stdev=744.18, samples=19 00:34:44.164 iops : min= 276, max= 296, avg=285.37, stdev= 5.81, samples=19 00:34:44.164 lat (msec) : 10=24.45%, 20=75.55% 00:34:44.164 cpu : usr=92.45%, sys=7.22%, ctx=21, majf=0, minf=141 00:34:44.164 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:44.164 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:44.164 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:44.164 issued rwts: total=2855,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:44.164 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:44.164 00:34:44.164 Run status group 0 (all jobs): 00:34:44.164 READ: bw=106MiB/s (111MB/s), 34.3MiB/s-36.3MiB/s (35.9MB/s-38.1MB/s), io=1065MiB (1116MB), run=10006-10046msec 00:34:44.164 23:20:14 -- target/dif.sh@132 -- # destroy_subsystems 0 00:34:44.164 23:20:14 -- target/dif.sh@43 -- # local sub 00:34:44.164 23:20:14 -- target/dif.sh@45 -- # for sub in "$@" 00:34:44.164 23:20:14 -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:44.164 23:20:14 -- target/dif.sh@36 -- # local sub_id=0 00:34:44.164 23:20:14 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:44.164 23:20:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:44.164 23:20:14 -- common/autotest_common.sh@10 -- # set +x 00:34:44.164 23:20:14 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:44.164 23:20:14 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:44.164 23:20:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:44.164 23:20:14 -- common/autotest_common.sh@10 -- # set +x 00:34:44.164 23:20:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:44.164 00:34:44.164 real 0m11.205s 00:34:44.164 user 0m37.275s 00:34:44.164 sys 0m2.466s 00:34:44.164 23:20:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:44.164 23:20:14 -- common/autotest_common.sh@10 -- # set +x 00:34:44.164 ************************************ 00:34:44.164 END TEST fio_dif_digest 00:34:44.164 ************************************ 00:34:44.164 23:20:14 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:34:44.164 23:20:14 -- target/dif.sh@147 -- # nvmftestfini 00:34:44.164 23:20:14 -- nvmf/common.sh@476 -- # nvmfcleanup 00:34:44.164 23:20:14 -- nvmf/common.sh@116 -- # sync 00:34:44.164 23:20:14 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:34:44.164 23:20:14 -- nvmf/common.sh@119 -- # set +e 00:34:44.164 23:20:14 -- nvmf/common.sh@120 -- # for i in {1..20} 00:34:44.165 23:20:14 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:34:44.165 rmmod nvme_tcp 00:34:44.165 rmmod nvme_fabrics 00:34:44.165 rmmod nvme_keyring 00:34:44.165 23:20:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:34:44.165 23:20:14 -- nvmf/common.sh@123 -- # set -e 00:34:44.165 23:20:14 -- nvmf/common.sh@124 -- # return 0 00:34:44.165 23:20:14 -- nvmf/common.sh@477 -- # '[' -n 3429331 ']' 00:34:44.165 23:20:14 -- nvmf/common.sh@478 -- # killprocess 3429331 00:34:44.165 23:20:14 -- common/autotest_common.sh@926 -- # '[' -z 3429331 ']' 00:34:44.165 23:20:14 -- common/autotest_common.sh@930 -- # kill -0 3429331 00:34:44.165 23:20:14 -- common/autotest_common.sh@931 -- # uname 00:34:44.165 23:20:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:34:44.165 23:20:14 -- 
common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3429331 00:34:44.165 23:20:14 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:34:44.165 23:20:14 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:34:44.165 23:20:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3429331' 00:34:44.165 killing process with pid 3429331 00:34:44.165 23:20:14 -- common/autotest_common.sh@945 -- # kill 3429331 00:34:44.165 23:20:14 -- common/autotest_common.sh@950 -- # wait 3429331 00:34:44.165 23:20:14 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:34:44.165 23:20:14 -- nvmf/common.sh@481 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:45.542 Waiting for block devices as requested 00:34:45.800 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:45.800 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:45.800 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:45.800 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:46.059 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:46.059 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:46.059 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:46.316 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:46.316 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:46.316 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:46.316 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:46.574 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:46.574 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:46.574 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:46.831 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:46.831 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:46.831 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:34:47.090 23:20:19 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:34:47.090 23:20:19 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:34:47.090 23:20:19 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
00:34:47.090 23:20:19 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:34:47.090 23:20:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:47.090 23:20:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:47.090 23:20:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:49.621 23:20:21 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:34:49.621 00:34:49.621 real 1m16.304s 00:34:49.621 user 7m15.957s 00:34:49.621 sys 0m28.575s 00:34:49.621 23:20:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:49.621 23:20:21 -- common/autotest_common.sh@10 -- # set +x 00:34:49.621 ************************************ 00:34:49.621 END TEST nvmf_dif 00:34:49.621 ************************************ 00:34:49.621 23:20:21 -- spdk/autotest.sh@301 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:34:49.621 23:20:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:34:49.621 23:20:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:34:49.621 23:20:21 -- common/autotest_common.sh@10 -- # set +x 00:34:49.621 ************************************ 00:34:49.621 START TEST nvmf_abort_qd_sizes 00:34:49.621 ************************************ 00:34:49.621 23:20:21 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:34:49.621 * Looking for test storage... 
00:34:49.621 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:49.621 23:20:21 -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:49.621 23:20:21 -- nvmf/common.sh@7 -- # uname -s 00:34:49.621 23:20:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:49.621 23:20:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:49.621 23:20:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:49.621 23:20:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:49.621 23:20:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:49.621 23:20:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:49.621 23:20:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:49.621 23:20:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:49.621 23:20:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:49.621 23:20:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:49.621 23:20:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:34:49.621 23:20:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:34:49.621 23:20:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:49.621 23:20:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:49.621 23:20:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:49.621 23:20:21 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:49.621 23:20:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:49.621 23:20:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:49.621 23:20:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:49.621 23:20:21 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:49.621 23:20:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:49.621 23:20:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:49.622 23:20:21 -- paths/export.sh@5 -- # export PATH 00:34:49.622 23:20:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:49.622 23:20:21 -- nvmf/common.sh@46 -- # : 0 00:34:49.622 23:20:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:34:49.622 23:20:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:34:49.622 
23:20:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:34:49.622 23:20:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:49.622 23:20:21 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:49.622 23:20:21 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:34:49.622 23:20:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:34:49.622 23:20:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:34:49.622 23:20:21 -- target/abort_qd_sizes.sh@73 -- # nvmftestinit 00:34:49.622 23:20:21 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:34:49.622 23:20:21 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:49.622 23:20:21 -- nvmf/common.sh@436 -- # prepare_net_devs 00:34:49.622 23:20:21 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:34:49.622 23:20:21 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:34:49.622 23:20:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:49.622 23:20:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:49.622 23:20:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:49.622 23:20:21 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:34:49.622 23:20:21 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:34:49.622 23:20:21 -- nvmf/common.sh@284 -- # xtrace_disable 00:34:49.622 23:20:21 -- common/autotest_common.sh@10 -- # set +x 00:34:56.183 23:20:27 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:34:56.183 23:20:27 -- nvmf/common.sh@290 -- # pci_devs=() 00:34:56.183 23:20:27 -- nvmf/common.sh@290 -- # local -a pci_devs 00:34:56.183 23:20:27 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:34:56.183 23:20:27 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:34:56.183 23:20:27 -- nvmf/common.sh@292 -- # pci_drivers=() 00:34:56.183 23:20:27 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:34:56.183 23:20:27 -- nvmf/common.sh@294 -- # net_devs=() 00:34:56.183 23:20:27 -- nvmf/common.sh@294 -- # local -ga net_devs 00:34:56.183 
23:20:27 -- nvmf/common.sh@295 -- # e810=() 00:34:56.183 23:20:27 -- nvmf/common.sh@295 -- # local -ga e810 00:34:56.183 23:20:28 -- nvmf/common.sh@296 -- # x722=() 00:34:56.183 23:20:28 -- nvmf/common.sh@296 -- # local -ga x722 00:34:56.183 23:20:28 -- nvmf/common.sh@297 -- # mlx=() 00:34:56.183 23:20:28 -- nvmf/common.sh@297 -- # local -ga mlx 00:34:56.183 23:20:28 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:56.183 23:20:28 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:56.183 23:20:28 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:56.183 23:20:28 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:56.183 23:20:28 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:56.183 23:20:28 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:56.183 23:20:28 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:56.183 23:20:28 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:56.183 23:20:28 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:56.183 23:20:28 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:56.183 23:20:28 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:56.183 23:20:28 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:34:56.183 23:20:28 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:34:56.183 23:20:28 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:34:56.183 23:20:28 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:34:56.183 23:20:28 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:34:56.183 23:20:28 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:34:56.183 23:20:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:34:56.183 23:20:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:34:56.183 Found 0000:af:00.0 (0x8086 - 0x159b) 
00:34:56.183 23:20:28 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:34:56.183 23:20:28 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:34:56.183 23:20:28 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:56.183 23:20:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:56.183 23:20:28 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:34:56.183 23:20:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:34:56.183 23:20:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:34:56.183 Found 0000:af:00.1 (0x8086 - 0x159b) 00:34:56.183 23:20:28 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:34:56.183 23:20:28 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:34:56.183 23:20:28 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:56.183 23:20:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:56.183 23:20:28 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:34:56.183 23:20:28 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:34:56.183 23:20:28 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:34:56.183 23:20:28 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:34:56.183 23:20:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:34:56.183 23:20:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:56.183 23:20:28 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:34:56.183 23:20:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:56.183 23:20:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:34:56.183 Found net devices under 0000:af:00.0: cvl_0_0 00:34:56.183 23:20:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:34:56.184 23:20:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:34:56.184 23:20:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:56.184 23:20:28 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:34:56.184 23:20:28 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:56.184 23:20:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:34:56.184 Found net devices under 0000:af:00.1: cvl_0_1 00:34:56.184 23:20:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:34:56.184 23:20:28 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:34:56.184 23:20:28 -- nvmf/common.sh@402 -- # is_hw=yes 00:34:56.184 23:20:28 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:34:56.184 23:20:28 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:34:56.184 23:20:28 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:34:56.184 23:20:28 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:56.184 23:20:28 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:56.184 23:20:28 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:56.184 23:20:28 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:34:56.184 23:20:28 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:56.184 23:20:28 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:56.184 23:20:28 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:34:56.184 23:20:28 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:56.184 23:20:28 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:56.184 23:20:28 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:34:56.184 23:20:28 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:34:56.184 23:20:28 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:34:56.184 23:20:28 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:56.184 23:20:28 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:56.184 23:20:28 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:56.184 23:20:28 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:34:56.184 23:20:28 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk 
ip link set cvl_0_0 up 00:34:56.184 23:20:28 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:56.184 23:20:28 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:56.184 23:20:28 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:34:56.184 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:56.184 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:34:56.184 00:34:56.184 --- 10.0.0.2 ping statistics --- 00:34:56.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:56.184 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:34:56.184 23:20:28 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:56.184 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:56.184 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.226 ms 00:34:56.184 00:34:56.184 --- 10.0.0.1 ping statistics --- 00:34:56.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:56.184 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:34:56.184 23:20:28 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:56.184 23:20:28 -- nvmf/common.sh@410 -- # return 0 00:34:56.184 23:20:28 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:34:56.184 23:20:28 -- nvmf/common.sh@439 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:59.544 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:59.544 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:59.544 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:59.544 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:59.544 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:59.544 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:59.544 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:59.544 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:59.544 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:59.544 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:59.544 
0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:59.544 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:59.544 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:59.544 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:59.544 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:59.544 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:00.922 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:35:00.922 23:20:33 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:00.922 23:20:33 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:35:00.922 23:20:33 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:35:00.922 23:20:33 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:00.922 23:20:33 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:35:00.922 23:20:33 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:35:00.922 23:20:33 -- target/abort_qd_sizes.sh@74 -- # nvmfappstart -m 0xf 00:35:00.922 23:20:33 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:35:00.922 23:20:33 -- common/autotest_common.sh@712 -- # xtrace_disable 00:35:00.922 23:20:33 -- common/autotest_common.sh@10 -- # set +x 00:35:00.922 23:20:33 -- nvmf/common.sh@469 -- # nvmfpid=3446609 00:35:00.922 23:20:33 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:35:00.922 23:20:33 -- nvmf/common.sh@470 -- # waitforlisten 3446609 00:35:00.922 23:20:33 -- common/autotest_common.sh@819 -- # '[' -z 3446609 ']' 00:35:00.922 23:20:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:00.922 23:20:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:35:00.922 23:20:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:00.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:35:00.922 23:20:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:35:00.922 23:20:33 -- common/autotest_common.sh@10 -- # set +x 00:35:00.922 [2024-07-24 23:20:33.330767] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:35:00.922 [2024-07-24 23:20:33.330815] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:01.181 EAL: No free 2048 kB hugepages reported on node 1 00:35:01.181 [2024-07-24 23:20:33.406909] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:01.181 [2024-07-24 23:20:33.446803] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:35:01.181 [2024-07-24 23:20:33.446919] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:01.181 [2024-07-24 23:20:33.446929] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:01.181 [2024-07-24 23:20:33.446937] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:35:01.181 [2024-07-24 23:20:33.446984] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:35:01.181 [2024-07-24 23:20:33.447077] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:35:01.181 [2024-07-24 23:20:33.447152] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:35:01.181 [2024-07-24 23:20:33.447154] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:01.749 23:20:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:35:01.749 23:20:34 -- common/autotest_common.sh@852 -- # return 0 00:35:01.749 23:20:34 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:35:01.749 23:20:34 -- common/autotest_common.sh@718 -- # xtrace_disable 00:35:01.749 23:20:34 -- common/autotest_common.sh@10 -- # set +x 00:35:01.749 23:20:34 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:01.749 23:20:34 -- target/abort_qd_sizes.sh@76 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:35:01.749 23:20:34 -- target/abort_qd_sizes.sh@78 -- # mapfile -t nvmes 00:35:01.749 23:20:34 -- target/abort_qd_sizes.sh@78 -- # nvme_in_userspace 00:35:01.749 23:20:34 -- scripts/common.sh@311 -- # local bdf bdfs 00:35:01.749 23:20:34 -- scripts/common.sh@312 -- # local nvmes 00:35:01.749 23:20:34 -- scripts/common.sh@314 -- # [[ -n 0000:d8:00.0 ]] 00:35:01.749 23:20:34 -- scripts/common.sh@315 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:35:01.749 23:20:34 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:35:01.749 23:20:34 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:d8:00.0 ]] 00:35:02.009 23:20:34 -- scripts/common.sh@322 -- # uname -s 00:35:02.009 23:20:34 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:35:02.009 23:20:34 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:35:02.009 23:20:34 -- scripts/common.sh@327 -- # (( 1 )) 00:35:02.009 23:20:34 -- 
scripts/common.sh@328 -- # printf '%s\n' 0000:d8:00.0 00:35:02.009 23:20:34 -- target/abort_qd_sizes.sh@79 -- # (( 1 > 0 )) 00:35:02.009 23:20:34 -- target/abort_qd_sizes.sh@81 -- # nvme=0000:d8:00.0 00:35:02.009 23:20:34 -- target/abort_qd_sizes.sh@83 -- # run_test spdk_target_abort spdk_target 00:35:02.009 23:20:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:35:02.009 23:20:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:35:02.009 23:20:34 -- common/autotest_common.sh@10 -- # set +x 00:35:02.009 ************************************ 00:35:02.009 START TEST spdk_target_abort 00:35:02.009 ************************************ 00:35:02.009 23:20:34 -- common/autotest_common.sh@1104 -- # spdk_target 00:35:02.009 23:20:34 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:35:02.009 23:20:34 -- target/abort_qd_sizes.sh@44 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:35:02.009 23:20:34 -- target/abort_qd_sizes.sh@46 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:d8:00.0 -b spdk_target 00:35:02.009 23:20:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:02.009 23:20:34 -- common/autotest_common.sh@10 -- # set +x 00:35:05.299 spdk_targetn1 00:35:05.299 23:20:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:05.299 23:20:37 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:05.299 23:20:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:05.299 23:20:37 -- common/autotest_common.sh@10 -- # set +x 00:35:05.299 [2024-07-24 23:20:37.040394] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:05.299 23:20:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:05.299 23:20:37 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME 00:35:05.299 23:20:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:05.299 23:20:37 -- common/autotest_common.sh@10 -- # 
set +x 00:35:05.299 23:20:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:05.299 23:20:37 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1 00:35:05.299 23:20:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:05.299 23:20:37 -- common/autotest_common.sh@10 -- # set +x 00:35:05.299 23:20:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:05.299 23:20:37 -- target/abort_qd_sizes.sh@51 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420 00:35:05.299 23:20:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:05.299 23:20:37 -- common/autotest_common.sh@10 -- # set +x 00:35:05.299 [2024-07-24 23:20:37.076669] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:05.299 23:20:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:05.300 23:20:37 -- target/abort_qd_sizes.sh@53 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:spdk_target 00:35:05.300 23:20:37 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:35:05.300 23:20:37 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:35:05.300 23:20:37 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:35:05.300 23:20:37 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:35:05.300 23:20:37 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:35:05.300 23:20:37 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:35:05.300 23:20:37 -- target/abort_qd_sizes.sh@24 -- # local target r 00:35:05.300 23:20:37 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:35:05.300 23:20:37 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:05.300 23:20:37 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:35:05.300 23:20:37 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:05.300 23:20:37 -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:35:05.300 23:20:37 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:05.300 23:20:37 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:35:05.300 23:20:37 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:05.300 23:20:37 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:05.300 23:20:37 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:05.300 23:20:37 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:35:05.300 23:20:37 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:05.300 23:20:37 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:35:05.300 EAL: No free 2048 kB hugepages reported on node 1 00:35:07.835 Initializing NVMe Controllers 00:35:07.835 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:35:07.835 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:35:07.835 Initialization complete. Launching workers. 
00:35:07.835 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 10360, failed: 0 00:35:07.835 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1418, failed to submit 8942 00:35:07.835 success 863, unsuccess 555, failed 0 00:35:07.835 23:20:40 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:07.835 23:20:40 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:35:08.094 EAL: No free 2048 kB hugepages reported on node 1 00:35:11.383 [2024-07-24 23:20:43.452448] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1569920 is same with the state(5) to be set 00:35:11.383 [2024-07-24 23:20:43.452485] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1569920 is same with the state(5) to be set 00:35:11.383 [2024-07-24 23:20:43.452496] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1569920 is same with the state(5) to be set 00:35:11.383 [2024-07-24 23:20:43.452505] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1569920 is same with the state(5) to be set 00:35:11.383 [2024-07-24 23:20:43.452514] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1569920 is same with the state(5) to be set 00:35:11.383 [2024-07-24 23:20:43.452523] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1569920 is same with the state(5) to be set 00:35:11.383 [2024-07-24 23:20:43.452532] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1569920 is same with the state(5) to be set 00:35:11.383 [2024-07-24 23:20:43.452541] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1569920 is same with the state(5) to be set 
00:35:11.383 [2024-07-24 23:20:43.452550] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1569920 is same with the state(5) to be set 00:35:11.383 [2024-07-24 23:20:43.452559] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1569920 is same with the state(5) to be set 00:35:11.383 [2024-07-24 23:20:43.452568] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1569920 is same with the state(5) to be set 00:35:11.383 [2024-07-24 23:20:43.452576] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1569920 is same with the state(5) to be set 00:35:11.383 [2024-07-24 23:20:43.452590] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1569920 is same with the state(5) to be set 00:35:11.383 Initializing NVMe Controllers 00:35:11.383 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:35:11.383 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:35:11.383 Initialization complete. Launching workers. 
00:35:11.383 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 8626, failed: 0 00:35:11.383 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1203, failed to submit 7423 00:35:11.383 success 357, unsuccess 846, failed 0 00:35:11.383 23:20:43 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:11.383 23:20:43 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:35:11.383 EAL: No free 2048 kB hugepages reported on node 1 00:35:14.672 Initializing NVMe Controllers 00:35:14.672 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:35:14.672 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:35:14.672 Initialization complete. Launching workers. 00:35:14.672 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 39284, failed: 0 00:35:14.672 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 2733, failed to submit 36551 00:35:14.672 success 594, unsuccess 2139, failed 0 00:35:14.673 23:20:46 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:spdk_target 00:35:14.673 23:20:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:14.673 23:20:46 -- common/autotest_common.sh@10 -- # set +x 00:35:14.673 23:20:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:14.673 23:20:46 -- target/abort_qd_sizes.sh@56 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:35:14.673 23:20:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:14.673 23:20:46 -- common/autotest_common.sh@10 -- # set +x 00:35:16.577 23:20:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:16.577 23:20:48 -- target/abort_qd_sizes.sh@62 -- # 
killprocess 3446609 00:35:16.577 23:20:48 -- common/autotest_common.sh@926 -- # '[' -z 3446609 ']' 00:35:16.577 23:20:48 -- common/autotest_common.sh@930 -- # kill -0 3446609 00:35:16.577 23:20:48 -- common/autotest_common.sh@931 -- # uname 00:35:16.577 23:20:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:35:16.577 23:20:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3446609 00:35:16.577 23:20:48 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:35:16.577 23:20:48 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:35:16.577 23:20:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3446609' 00:35:16.577 killing process with pid 3446609 00:35:16.577 23:20:48 -- common/autotest_common.sh@945 -- # kill 3446609 00:35:16.577 23:20:48 -- common/autotest_common.sh@950 -- # wait 3446609 00:35:16.577 00:35:16.577 real 0m14.689s 00:35:16.577 user 0m58.093s 00:35:16.577 sys 0m2.772s 00:35:16.577 23:20:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:16.577 23:20:48 -- common/autotest_common.sh@10 -- # set +x 00:35:16.577 ************************************ 00:35:16.577 END TEST spdk_target_abort 00:35:16.577 ************************************ 00:35:16.577 23:20:48 -- target/abort_qd_sizes.sh@84 -- # run_test kernel_target_abort kernel_target 00:35:16.577 23:20:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:35:16.577 23:20:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:35:16.577 23:20:48 -- common/autotest_common.sh@10 -- # set +x 00:35:16.577 ************************************ 00:35:16.577 START TEST kernel_target_abort 00:35:16.577 ************************************ 00:35:16.577 23:20:48 -- common/autotest_common.sh@1104 -- # kernel_target 00:35:16.577 23:20:48 -- target/abort_qd_sizes.sh@66 -- # local name=kernel_target 00:35:16.577 23:20:48 -- target/abort_qd_sizes.sh@68 -- # configure_kernel_target kernel_target 00:35:16.577 23:20:48 -- 
nvmf/common.sh@621 -- # kernel_name=kernel_target 00:35:16.577 23:20:48 -- nvmf/common.sh@622 -- # nvmet=/sys/kernel/config/nvmet 00:35:16.577 23:20:48 -- nvmf/common.sh@623 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/kernel_target 00:35:16.577 23:20:48 -- nvmf/common.sh@624 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:35:16.577 23:20:48 -- nvmf/common.sh@625 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:16.577 23:20:48 -- nvmf/common.sh@627 -- # local block nvme 00:35:16.577 23:20:48 -- nvmf/common.sh@629 -- # [[ ! -e /sys/module/nvmet ]] 00:35:16.577 23:20:48 -- nvmf/common.sh@630 -- # modprobe nvmet 00:35:16.577 23:20:48 -- nvmf/common.sh@633 -- # [[ -e /sys/kernel/config/nvmet ]] 00:35:16.577 23:20:48 -- nvmf/common.sh@635 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:19.864 Waiting for block devices as requested 00:35:19.864 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:19.864 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:19.864 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:19.864 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:19.864 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:19.864 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:19.864 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:20.151 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:20.151 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:20.151 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:20.410 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:20.410 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:20.410 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:20.410 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:20.668 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:20.668 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:20.668 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:35:20.927 23:20:53 -- nvmf/common.sh@638 -- # for block in 
/sys/block/nvme* 00:35:20.927 23:20:53 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme0n1 ]] 00:35:20.927 23:20:53 -- nvmf/common.sh@640 -- # block_in_use nvme0n1 00:35:20.927 23:20:53 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:35:20.927 23:20:53 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:35:20.927 No valid GPT data, bailing 00:35:20.927 23:20:53 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:35:20.927 23:20:53 -- scripts/common.sh@393 -- # pt= 00:35:20.927 23:20:53 -- scripts/common.sh@394 -- # return 1 00:35:20.927 23:20:53 -- nvmf/common.sh@640 -- # nvme=/dev/nvme0n1 00:35:20.927 23:20:53 -- nvmf/common.sh@643 -- # [[ -b /dev/nvme0n1 ]] 00:35:20.927 23:20:53 -- nvmf/common.sh@645 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:35:20.927 23:20:53 -- nvmf/common.sh@646 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:35:20.927 23:20:53 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:35:20.927 23:20:53 -- nvmf/common.sh@652 -- # echo SPDK-kernel_target 00:35:20.927 23:20:53 -- nvmf/common.sh@654 -- # echo 1 00:35:20.927 23:20:53 -- nvmf/common.sh@655 -- # echo /dev/nvme0n1 00:35:20.927 23:20:53 -- nvmf/common.sh@656 -- # echo 1 00:35:20.927 23:20:53 -- nvmf/common.sh@662 -- # echo 10.0.0.1 00:35:20.927 23:20:53 -- nvmf/common.sh@663 -- # echo tcp 00:35:20.927 23:20:53 -- nvmf/common.sh@664 -- # echo 4420 00:35:20.927 23:20:53 -- nvmf/common.sh@665 -- # echo ipv4 00:35:20.927 23:20:53 -- nvmf/common.sh@668 -- # ln -s /sys/kernel/config/nvmet/subsystems/kernel_target /sys/kernel/config/nvmet/ports/1/subsystems/ 00:35:20.927 23:20:53 -- nvmf/common.sh@671 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -a 10.0.0.1 -t tcp -s 4420 00:35:21.184 00:35:21.184 Discovery Log Number of Records 2, Generation 
counter 2 00:35:21.184 =====Discovery Log Entry 0====== 00:35:21.184 trtype: tcp 00:35:21.184 adrfam: ipv4 00:35:21.184 subtype: current discovery subsystem 00:35:21.184 treq: not specified, sq flow control disable supported 00:35:21.184 portid: 1 00:35:21.184 trsvcid: 4420 00:35:21.184 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:35:21.184 traddr: 10.0.0.1 00:35:21.184 eflags: none 00:35:21.184 sectype: none 00:35:21.184 =====Discovery Log Entry 1====== 00:35:21.184 trtype: tcp 00:35:21.184 adrfam: ipv4 00:35:21.184 subtype: nvme subsystem 00:35:21.184 treq: not specified, sq flow control disable supported 00:35:21.184 portid: 1 00:35:21.184 trsvcid: 4420 00:35:21.184 subnqn: kernel_target 00:35:21.184 traddr: 10.0.0.1 00:35:21.184 eflags: none 00:35:21.184 sectype: none 00:35:21.184 23:20:53 -- target/abort_qd_sizes.sh@69 -- # rabort tcp IPv4 10.0.0.1 4420 kernel_target 00:35:21.184 23:20:53 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:35:21.184 23:20:53 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:35:21.184 23:20:53 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:35:21.184 23:20:53 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:35:21.184 23:20:53 -- target/abort_qd_sizes.sh@21 -- # local subnqn=kernel_target 00:35:21.184 23:20:53 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:35:21.184 23:20:53 -- target/abort_qd_sizes.sh@24 -- # local target r 00:35:21.184 23:20:53 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:35:21.184 23:20:53 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:21.184 23:20:53 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:35:21.184 23:20:53 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:21.184 23:20:53 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:35:21.184 23:20:53 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:21.184 23:20:53 -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:35:21.184 23:20:53 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:21.184 23:20:53 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:35:21.184 23:20:53 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:21.184 23:20:53 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:35:21.184 23:20:53 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:21.184 23:20:53 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:35:21.184 EAL: No free 2048 kB hugepages reported on node 1 00:35:24.468 Initializing NVMe Controllers 00:35:24.468 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:35:24.468 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:35:24.468 Initialization complete. Launching workers. 
00:35:24.468 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 75277, failed: 0 00:35:24.468 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 75277, failed to submit 0 00:35:24.468 success 0, unsuccess 75277, failed 0 00:35:24.468 23:20:56 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:24.468 23:20:56 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:35:24.468 EAL: No free 2048 kB hugepages reported on node 1 00:35:27.756 Initializing NVMe Controllers 00:35:27.756 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:35:27.756 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:35:27.756 Initialization complete. Launching workers. 00:35:27.756 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 129762, failed: 0 00:35:27.756 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 32754, failed to submit 97008 00:35:27.756 success 0, unsuccess 32754, failed 0 00:35:27.756 23:20:59 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:27.756 23:20:59 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:35:27.756 EAL: No free 2048 kB hugepages reported on node 1 00:35:30.289 Initializing NVMe Controllers 00:35:30.289 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:35:30.289 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:35:30.289 Initialization complete. Launching workers. 
00:35:30.289 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 123056, failed: 0 00:35:30.289 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 30778, failed to submit 92278 00:35:30.289 success 0, unsuccess 30778, failed 0 00:35:30.289 23:21:02 -- target/abort_qd_sizes.sh@70 -- # clean_kernel_target 00:35:30.289 23:21:02 -- nvmf/common.sh@675 -- # [[ -e /sys/kernel/config/nvmet/subsystems/kernel_target ]] 00:35:30.289 23:21:02 -- nvmf/common.sh@677 -- # echo 0 00:35:30.289 23:21:02 -- nvmf/common.sh@679 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/kernel_target 00:35:30.289 23:21:02 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:35:30.289 23:21:02 -- nvmf/common.sh@681 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:35:30.289 23:21:02 -- nvmf/common.sh@682 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:35:30.289 23:21:02 -- nvmf/common.sh@684 -- # modules=(/sys/module/nvmet/holders/*) 00:35:30.289 23:21:02 -- nvmf/common.sh@686 -- # modprobe -r nvmet_tcp nvmet 00:35:30.289 00:35:30.289 real 0m13.784s 00:35:30.289 user 0m6.414s 00:35:30.289 sys 0m3.649s 00:35:30.289 23:21:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:30.289 23:21:02 -- common/autotest_common.sh@10 -- # set +x 00:35:30.289 ************************************ 00:35:30.289 END TEST kernel_target_abort 00:35:30.289 ************************************ 00:35:30.548 23:21:02 -- target/abort_qd_sizes.sh@86 -- # trap - SIGINT SIGTERM EXIT 00:35:30.548 23:21:02 -- target/abort_qd_sizes.sh@87 -- # nvmftestfini 00:35:30.548 23:21:02 -- nvmf/common.sh@476 -- # nvmfcleanup 00:35:30.548 23:21:02 -- nvmf/common.sh@116 -- # sync 00:35:30.548 23:21:02 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:35:30.548 23:21:02 -- nvmf/common.sh@119 -- # set +e 00:35:30.548 23:21:02 -- nvmf/common.sh@120 -- # for i in {1..20} 00:35:30.548 23:21:02 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 
00:35:30.548 rmmod nvme_tcp 00:35:30.548 rmmod nvme_fabrics 00:35:30.548 rmmod nvme_keyring 00:35:30.548 23:21:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:35:30.548 23:21:02 -- nvmf/common.sh@123 -- # set -e 00:35:30.548 23:21:02 -- nvmf/common.sh@124 -- # return 0 00:35:30.548 23:21:02 -- nvmf/common.sh@477 -- # '[' -n 3446609 ']' 00:35:30.548 23:21:02 -- nvmf/common.sh@478 -- # killprocess 3446609 00:35:30.548 23:21:02 -- common/autotest_common.sh@926 -- # '[' -z 3446609 ']' 00:35:30.548 23:21:02 -- common/autotest_common.sh@930 -- # kill -0 3446609 00:35:30.548 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (3446609) - No such process 00:35:30.548 23:21:02 -- common/autotest_common.sh@953 -- # echo 'Process with pid 3446609 is not found' 00:35:30.548 Process with pid 3446609 is not found 00:35:30.548 23:21:02 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:35:30.548 23:21:02 -- nvmf/common.sh@481 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:33.832 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:35:33.832 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:35:33.832 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:35:33.832 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:35:33.832 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:35:33.832 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:35:33.832 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:35:33.832 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:35:33.832 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:35:33.832 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:35:33.832 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:35:33.832 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:35:33.832 0000:80:04.3 (8086 2021): Already using the ioatdma driver 
00:35:33.832 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:35:33.832 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:35:33.832 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:35:33.832 0000:d8:00.0 (8086 0a54): Already using the nvme driver 00:35:33.832 23:21:06 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:35:33.832 23:21:06 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:35:33.832 23:21:06 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:33.832 23:21:06 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:35:33.832 23:21:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:33.832 23:21:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:33.832 23:21:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:36.365 23:21:08 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:35:36.365 00:35:36.365 real 0m46.706s 00:35:36.365 user 1m9.058s 00:35:36.365 sys 0m16.224s 00:35:36.365 23:21:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:36.365 23:21:08 -- common/autotest_common.sh@10 -- # set +x 00:35:36.365 ************************************ 00:35:36.365 END TEST nvmf_abort_qd_sizes 00:35:36.365 ************************************ 00:35:36.365 23:21:08 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:35:36.365 23:21:08 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:35:36.365 23:21:08 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:35:36.365 23:21:08 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:35:36.365 23:21:08 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:35:36.365 23:21:08 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:35:36.365 23:21:08 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:35:36.365 23:21:08 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:35:36.365 23:21:08 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:35:36.365 23:21:08 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:35:36.365 23:21:08 -- 
spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:35:36.365 23:21:08 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:35:36.365 23:21:08 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:35:36.365 23:21:08 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:35:36.365 23:21:08 -- spdk/autotest.sh@378 -- # [[ 0 -eq 1 ]] 00:35:36.365 23:21:08 -- spdk/autotest.sh@383 -- # trap - SIGINT SIGTERM EXIT 00:35:36.365 23:21:08 -- spdk/autotest.sh@385 -- # timing_enter post_cleanup 00:35:36.365 23:21:08 -- common/autotest_common.sh@712 -- # xtrace_disable 00:35:36.365 23:21:08 -- common/autotest_common.sh@10 -- # set +x 00:35:36.365 23:21:08 -- spdk/autotest.sh@386 -- # autotest_cleanup 00:35:36.365 23:21:08 -- common/autotest_common.sh@1371 -- # local autotest_es=0 00:35:36.365 23:21:08 -- common/autotest_common.sh@1372 -- # xtrace_disable 00:35:36.365 23:21:08 -- common/autotest_common.sh@10 -- # set +x 00:35:42.917 INFO: APP EXITING 00:35:42.917 INFO: killing all VMs 00:35:42.917 INFO: killing vhost app 00:35:42.917 INFO: EXIT DONE 00:35:45.504 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:35:45.504 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:35:45.504 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:35:45.504 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:35:45.504 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:35:45.504 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:35:45.504 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:35:45.504 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:35:45.504 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:35:45.504 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:35:45.504 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:35:45.504 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:35:45.761 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:35:45.761 0000:80:04.2 (8086 2021): 
Already using the ioatdma driver 00:35:45.761 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:35:45.761 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:35:45.761 0000:d8:00.0 (8086 0a54): Already using the nvme driver 00:35:49.039 Cleaning 00:35:49.039 Removing: /var/run/dpdk/spdk0/config 00:35:49.039 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:35:49.039 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:35:49.039 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:35:49.039 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:35:49.039 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:35:49.039 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:35:49.039 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:35:49.039 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:35:49.039 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:35:49.039 Removing: /var/run/dpdk/spdk0/hugepage_info 00:35:49.039 Removing: /var/run/dpdk/spdk1/config 00:35:49.039 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:35:49.039 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:35:49.039 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:35:49.298 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:35:49.298 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:35:49.298 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:35:49.298 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:35:49.298 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:35:49.298 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:35:49.298 Removing: /var/run/dpdk/spdk1/hugepage_info 00:35:49.298 Removing: /var/run/dpdk/spdk1/mp_socket 00:35:49.298 Removing: /var/run/dpdk/spdk2/config 00:35:49.298 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:35:49.298 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:35:49.298 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 
00:35:49.298 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:35:49.298 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:35:49.298 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:35:49.298 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:35:49.298 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:35:49.298 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:35:49.298 Removing: /var/run/dpdk/spdk2/hugepage_info 00:35:49.298 Removing: /var/run/dpdk/spdk3/config 00:35:49.298 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:35:49.298 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:35:49.298 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:35:49.298 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:35:49.298 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:35:49.298 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:35:49.298 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:35:49.298 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:35:49.298 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:35:49.298 Removing: /var/run/dpdk/spdk3/hugepage_info 00:35:49.298 Removing: /var/run/dpdk/spdk4/config 00:35:49.298 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:35:49.298 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:35:49.298 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:35:49.298 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:35:49.298 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:35:49.298 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:35:49.298 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:35:49.298 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:35:49.298 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:35:49.298 Removing: /var/run/dpdk/spdk4/hugepage_info 00:35:49.298 Removing: /dev/shm/bdev_svc_trace.1 00:35:49.298 Removing: /dev/shm/nvmf_trace.0 00:35:49.298 Removing: 
/dev/shm/spdk_tgt_trace.pid3021458 00:35:49.298 Removing: /var/run/dpdk/spdk0 00:35:49.298 Removing: /var/run/dpdk/spdk1 00:35:49.298 Removing: /var/run/dpdk/spdk2 00:35:49.298 Removing: /var/run/dpdk/spdk3 00:35:49.298 Removing: /var/run/dpdk/spdk4 00:35:49.298 Removing: /var/run/dpdk/spdk_pid3018985 00:35:49.298 Removing: /var/run/dpdk/spdk_pid3020245 00:35:49.298 Removing: /var/run/dpdk/spdk_pid3021458 00:35:49.298 Removing: /var/run/dpdk/spdk_pid3022195 00:35:49.298 Removing: /var/run/dpdk/spdk_pid3023686 00:35:49.557 Removing: /var/run/dpdk/spdk_pid3025118 00:35:49.557 Removing: /var/run/dpdk/spdk_pid3025430 00:35:49.557 Removing: /var/run/dpdk/spdk_pid3025756 00:35:49.557 Removing: /var/run/dpdk/spdk_pid3026085 00:35:49.557 Removing: /var/run/dpdk/spdk_pid3026367 00:35:49.557 Removing: /var/run/dpdk/spdk_pid3026503 00:35:49.557 Removing: /var/run/dpdk/spdk_pid3026731 00:35:49.557 Removing: /var/run/dpdk/spdk_pid3027036 00:35:49.557 Removing: /var/run/dpdk/spdk_pid3027897 00:35:49.557 Removing: /var/run/dpdk/spdk_pid3031614 00:35:49.557 Removing: /var/run/dpdk/spdk_pid3031917 00:35:49.557 Removing: /var/run/dpdk/spdk_pid3032213 00:35:49.557 Removing: /var/run/dpdk/spdk_pid3032340 00:35:49.557 Removing: /var/run/dpdk/spdk_pid3032785 00:35:49.557 Removing: /var/run/dpdk/spdk_pid3032948 00:35:49.557 Removing: /var/run/dpdk/spdk_pid3033367 00:35:49.557 Removing: /var/run/dpdk/spdk_pid3033525 00:35:49.557 Removing: /var/run/dpdk/spdk_pid3033769 00:35:49.557 Removing: /var/run/dpdk/spdk_pid3033940 00:35:49.557 Removing: /var/run/dpdk/spdk_pid3034233 00:35:49.557 Removing: /var/run/dpdk/spdk_pid3034247 00:35:49.557 Removing: /var/run/dpdk/spdk_pid3034830 00:35:49.557 Removing: /var/run/dpdk/spdk_pid3034966 00:35:49.557 Removing: /var/run/dpdk/spdk_pid3035227 00:35:49.557 Removing: /var/run/dpdk/spdk_pid3035519 00:35:49.557 Removing: /var/run/dpdk/spdk_pid3035599 00:35:49.557 Removing: /var/run/dpdk/spdk_pid3035852 00:35:49.557 Removing: /var/run/dpdk/spdk_pid3036034 
00:35:49.557 Removing: /var/run/dpdk/spdk_pid3036231 00:35:49.557 Removing: /var/run/dpdk/spdk_pid3036424 00:35:49.557 Removing: /var/run/dpdk/spdk_pid3036705 00:35:49.557 Removing: /var/run/dpdk/spdk_pid3036978 00:35:49.557 Removing: /var/run/dpdk/spdk_pid3037257 00:35:49.557 Removing: /var/run/dpdk/spdk_pid3037521 00:35:49.557 Removing: /var/run/dpdk/spdk_pid3037764 00:35:49.557 Removing: /var/run/dpdk/spdk_pid3037912 00:35:49.557 Removing: /var/run/dpdk/spdk_pid3038129 00:35:49.557 Removing: /var/run/dpdk/spdk_pid3038377 00:35:49.557 Removing: /var/run/dpdk/spdk_pid3038664 00:35:49.557 Removing: /var/run/dpdk/spdk_pid3038932 00:35:49.557 Removing: /var/run/dpdk/spdk_pid3039213 00:35:49.557 Removing: /var/run/dpdk/spdk_pid3039473 00:35:49.557 Removing: /var/run/dpdk/spdk_pid3039658 00:35:49.557 Removing: /var/run/dpdk/spdk_pid3039805 00:35:49.557 Removing: /var/run/dpdk/spdk_pid3040067 00:35:49.557 Removing: /var/run/dpdk/spdk_pid3040338 00:35:49.557 Removing: /var/run/dpdk/spdk_pid3040617 00:35:49.557 Removing: /var/run/dpdk/spdk_pid3040883 00:35:49.557 Removing: /var/run/dpdk/spdk_pid3041171 00:35:49.557 Removing: /var/run/dpdk/spdk_pid3041348 00:35:49.557 Removing: /var/run/dpdk/spdk_pid3041535 00:35:49.557 Removing: /var/run/dpdk/spdk_pid3041739 00:35:49.557 Removing: /var/run/dpdk/spdk_pid3042023 00:35:49.557 Removing: /var/run/dpdk/spdk_pid3042287 00:35:49.557 Removing: /var/run/dpdk/spdk_pid3042568 00:35:49.557 Removing: /var/run/dpdk/spdk_pid3042840 00:35:49.815 Removing: /var/run/dpdk/spdk_pid3043066 00:35:49.815 Removing: /var/run/dpdk/spdk_pid3043214 00:35:49.815 Removing: /var/run/dpdk/spdk_pid3043433 00:35:49.815 Removing: /var/run/dpdk/spdk_pid3043696 00:35:49.815 Removing: /var/run/dpdk/spdk_pid3043980 00:35:49.815 Removing: /var/run/dpdk/spdk_pid3044256 00:35:49.815 Removing: /var/run/dpdk/spdk_pid3044538 00:35:49.815 Removing: /var/run/dpdk/spdk_pid3044774 00:35:49.815 Removing: /var/run/dpdk/spdk_pid3044971 00:35:49.815 Removing: 
/var/run/dpdk/spdk_pid3045130 00:35:49.815 Removing: /var/run/dpdk/spdk_pid3045391 00:35:49.815 Removing: /var/run/dpdk/spdk_pid3045684 00:35:49.815 Removing: /var/run/dpdk/spdk_pid3046044 00:35:49.815 Removing: /var/run/dpdk/spdk_pid3049875 00:35:49.815 Removing: /var/run/dpdk/spdk_pid3134824 00:35:49.815 Removing: /var/run/dpdk/spdk_pid3139355 00:35:49.815 Removing: /var/run/dpdk/spdk_pid3149814 00:35:49.815 Removing: /var/run/dpdk/spdk_pid3155909 00:35:49.815 Removing: /var/run/dpdk/spdk_pid3160169 00:35:49.815 Removing: /var/run/dpdk/spdk_pid3160812 00:35:49.815 Removing: /var/run/dpdk/spdk_pid3166985 00:35:49.815 Removing: /var/run/dpdk/spdk_pid3167023 00:35:49.815 Removing: /var/run/dpdk/spdk_pid3168056 00:35:49.815 Removing: /var/run/dpdk/spdk_pid3168862 00:35:49.815 Removing: /var/run/dpdk/spdk_pid3169796 00:35:49.815 Removing: /var/run/dpdk/spdk_pid3170439 00:35:49.815 Removing: /var/run/dpdk/spdk_pid3170474 00:35:49.815 Removing: /var/run/dpdk/spdk_pid3170743 00:35:49.815 Removing: /var/run/dpdk/spdk_pid3170753 00:35:49.815 Removing: /var/run/dpdk/spdk_pid3170758 00:35:49.815 Removing: /var/run/dpdk/spdk_pid3171799 00:35:49.815 Removing: /var/run/dpdk/spdk_pid3172624 00:35:49.815 Removing: /var/run/dpdk/spdk_pid3173478 00:35:49.815 Removing: /var/run/dpdk/spdk_pid3174137 00:35:49.815 Removing: /var/run/dpdk/spdk_pid3174234 00:35:49.815 Removing: /var/run/dpdk/spdk_pid3174499 00:35:49.815 Removing: /var/run/dpdk/spdk_pid3175665 00:35:49.815 Removing: /var/run/dpdk/spdk_pid3176789 00:35:49.815 Removing: /var/run/dpdk/spdk_pid3185544 00:35:49.815 Removing: /var/run/dpdk/spdk_pid3185834 00:35:49.815 Removing: /var/run/dpdk/spdk_pid3190131 00:35:49.815 Removing: /var/run/dpdk/spdk_pid3196821 00:35:49.815 Removing: /var/run/dpdk/spdk_pid3199458 00:35:49.815 Removing: /var/run/dpdk/spdk_pid3210058 00:35:49.815 Removing: /var/run/dpdk/spdk_pid3219604 00:35:49.815 Removing: /var/run/dpdk/spdk_pid3221457 00:35:49.815 Removing: /var/run/dpdk/spdk_pid3222285 
00:35:49.815 Removing: /var/run/dpdk/spdk_pid3239818
00:35:49.815 Removing: /var/run/dpdk/spdk_pid3243802
00:35:49.815 Removing: /var/run/dpdk/spdk_pid3249167
00:35:49.815 Removing: /var/run/dpdk/spdk_pid3250794
00:35:49.815 Removing: /var/run/dpdk/spdk_pid3252825
00:35:49.815 Removing: /var/run/dpdk/spdk_pid3252948
00:35:49.815 Removing: /var/run/dpdk/spdk_pid3253203
00:35:49.815 Removing: /var/run/dpdk/spdk_pid3253474
00:35:49.815 Removing: /var/run/dpdk/spdk_pid3254063
00:35:49.815 Removing: /var/run/dpdk/spdk_pid3255941
00:35:49.815 Removing: /var/run/dpdk/spdk_pid3257003
00:35:49.815 Removing: /var/run/dpdk/spdk_pid3257464
00:35:50.072 Removing: /var/run/dpdk/spdk_pid3263274
00:35:50.072 Removing: /var/run/dpdk/spdk_pid3269107
00:35:50.072 Removing: /var/run/dpdk/spdk_pid3274167
00:35:50.073 Removing: /var/run/dpdk/spdk_pid3312685
00:35:50.073 Removing: /var/run/dpdk/spdk_pid3316903
00:35:50.073 Removing: /var/run/dpdk/spdk_pid3323431
00:35:50.073 Removing: /var/run/dpdk/spdk_pid3325227
00:35:50.073 Removing: /var/run/dpdk/spdk_pid3326731
00:35:50.073 Removing: /var/run/dpdk/spdk_pid3331292
00:35:50.073 Removing: /var/run/dpdk/spdk_pid3335584
00:35:50.073 Removing: /var/run/dpdk/spdk_pid3343346
00:35:50.073 Removing: /var/run/dpdk/spdk_pid3343392
00:35:50.073 Removing: /var/run/dpdk/spdk_pid3348181
00:35:50.073 Removing: /var/run/dpdk/spdk_pid3348431
00:35:50.073 Removing: /var/run/dpdk/spdk_pid3348694
00:35:50.073 Removing: /var/run/dpdk/spdk_pid3349102
00:35:50.073 Removing: /var/run/dpdk/spdk_pid3349209
00:35:50.073 Removing: /var/run/dpdk/spdk_pid3350875
00:35:50.073 Removing: /var/run/dpdk/spdk_pid3352717
00:35:50.073 Removing: /var/run/dpdk/spdk_pid3354321
00:35:50.073 Removing: /var/run/dpdk/spdk_pid3356110
00:35:50.073 Removing: /var/run/dpdk/spdk_pid3357782
00:35:50.073 Removing: /var/run/dpdk/spdk_pid3359448
00:35:50.073 Removing: /var/run/dpdk/spdk_pid3365765
00:35:50.073 Removing: /var/run/dpdk/spdk_pid3366216
00:35:50.073 Removing: /var/run/dpdk/spdk_pid3369006
00:35:50.073 Removing: /var/run/dpdk/spdk_pid3369958
00:35:50.073 Removing: /var/run/dpdk/spdk_pid3376731
00:35:50.073 Removing: /var/run/dpdk/spdk_pid3379519
00:35:50.073 Removing: /var/run/dpdk/spdk_pid3385291
00:35:50.073 Removing: /var/run/dpdk/spdk_pid3391303
00:35:50.073 Removing: /var/run/dpdk/spdk_pid3397198
00:35:50.073 Removing: /var/run/dpdk/spdk_pid3397698
00:35:50.073 Removing: /var/run/dpdk/spdk_pid3398340
00:35:50.073 Removing: /var/run/dpdk/spdk_pid3398895
00:35:50.073 Removing: /var/run/dpdk/spdk_pid3399744
00:35:50.073 Removing: /var/run/dpdk/spdk_pid3400322
00:35:50.073 Removing: /var/run/dpdk/spdk_pid3401103
00:35:50.073 Removing: /var/run/dpdk/spdk_pid3401662
00:35:50.073 Removing: /var/run/dpdk/spdk_pid3406196
00:35:50.073 Removing: /var/run/dpdk/spdk_pid3406471
00:35:50.073 Removing: /var/run/dpdk/spdk_pid3413135
00:35:50.073 Removing: /var/run/dpdk/spdk_pid3413441
00:35:50.073 Removing: /var/run/dpdk/spdk_pid3415729
00:35:50.073 Removing: /var/run/dpdk/spdk_pid3423945
00:35:50.073 Removing: /var/run/dpdk/spdk_pid3424037
00:35:50.073 Removing: /var/run/dpdk/spdk_pid3429420
00:35:50.073 Removing: /var/run/dpdk/spdk_pid3431531
00:35:50.073 Removing: /var/run/dpdk/spdk_pid3433550
00:35:50.073 Removing: /var/run/dpdk/spdk_pid3434636
00:35:50.073 Removing: /var/run/dpdk/spdk_pid3436731
00:35:50.073 Removing: /var/run/dpdk/spdk_pid3437923
00:35:50.073 Removing: /var/run/dpdk/spdk_pid3447374
00:35:50.073 Removing: /var/run/dpdk/spdk_pid3447871
00:35:50.073 Removing: /var/run/dpdk/spdk_pid3448429
00:35:50.073 Removing: /var/run/dpdk/spdk_pid3450917
00:35:50.073 Removing: /var/run/dpdk/spdk_pid3451381
00:35:50.073 Removing: /var/run/dpdk/spdk_pid3451848
00:35:50.073 Clean
00:35:50.330 killing process with pid 2969956
00:36:02.508 killing process with pid 2969953
00:36:02.508 killing process with pid 2969955
00:36:02.508 killing process with pid 2969954
00:36:02.508 23:21:33 -- common/autotest_common.sh@1436 -- # return 0
00:36:02.508 23:21:33 -- spdk/autotest.sh@387 -- # timing_exit post_cleanup
00:36:02.508 23:21:33 -- common/autotest_common.sh@718 -- # xtrace_disable
00:36:02.508 23:21:33 -- common/autotest_common.sh@10 -- # set +x
00:36:02.508 23:21:33 -- spdk/autotest.sh@389 -- # timing_exit autotest
00:36:02.508 23:21:33 -- common/autotest_common.sh@718 -- # xtrace_disable
00:36:02.508 23:21:33 -- common/autotest_common.sh@10 -- # set +x
00:36:02.508 23:21:33 -- spdk/autotest.sh@390 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:36:02.508 23:21:33 -- spdk/autotest.sh@392 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:36:02.508 23:21:33 -- spdk/autotest.sh@392 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:36:02.508 23:21:33 -- spdk/autotest.sh@394 -- # hash lcov
00:36:02.508 23:21:33 -- spdk/autotest.sh@394 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:36:02.508 23:21:33 -- spdk/autotest.sh@396 -- # hostname
00:36:02.508 23:21:33 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-22 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:36:02.508 geninfo: WARNING: invalid characters removed from testname!
00:36:20.569 23:21:52 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:23.198 23:21:55 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:24.572 23:21:56 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:25.947 23:21:58 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:27.848 23:21:59 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:29.221 23:22:01 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:31.121 23:22:03 -- spdk/autotest.sh@403 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:36:31.121 23:22:03 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:36:31.121 23:22:03 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]]
00:36:31.121 23:22:03 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:36:31.121 23:22:03 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:36:31.121 23:22:03 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:31.121 23:22:03 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:31.121 23:22:03 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:31.121 23:22:03 -- paths/export.sh@5 -- $ export PATH
00:36:31.121 23:22:03 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:31.121 23:22:03 -- common/autobuild_common.sh@437 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:36:31.121 23:22:03 -- common/autobuild_common.sh@438 -- $ date +%s
00:36:31.121 23:22:03 -- common/autobuild_common.sh@438 -- $ mktemp -dt spdk_1721856123.XXXXXX
00:36:31.121 23:22:03 -- common/autobuild_common.sh@438 -- $ SPDK_WORKSPACE=/tmp/spdk_1721856123.5oLwKB
00:36:31.121 23:22:03 -- common/autobuild_common.sh@440 -- $ [[ -n '' ]]
00:36:31.121 23:22:03 -- common/autobuild_common.sh@444 -- $ '[' -n v23.11 ']'
00:36:31.121 23:22:03 -- common/autobuild_common.sh@445 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:36:31.121 23:22:03 -- common/autobuild_common.sh@445 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk'
00:36:31.121 23:22:03 -- common/autobuild_common.sh@451 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:36:31.121 23:22:03 -- common/autobuild_common.sh@453 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:36:31.121 23:22:03 -- common/autobuild_common.sh@454 -- $ get_config_params
00:36:31.121 23:22:03 -- common/autotest_common.sh@387 -- $ xtrace_disable
00:36:31.121 23:22:03 -- common/autotest_common.sh@10 -- $ set +x
00:36:31.121 23:22:03 -- common/autobuild_common.sh@454 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build'
00:36:31.121 23:22:03 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j112
00:36:31.121 23:22:03 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:36:31.121 23:22:03 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:36:31.121 23:22:03 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
00:36:31.121 23:22:03 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:36:31.121 23:22:03 -- spdk/autopackage.sh@19 -- $ timing_finish
00:36:31.121 23:22:03 -- common/autotest_common.sh@724 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:36:31.121 23:22:03 -- common/autotest_common.sh@725 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:36:31.121 23:22:03 -- common/autotest_common.sh@727 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:36:31.121 23:22:03 -- spdk/autopackage.sh@20 -- $ exit 0
00:36:31.121 + [[ -n 2915539 ]]
00:36:31.121 + sudo kill 2915539
00:36:31.128 [Pipeline] }
00:36:31.140 [Pipeline] // stage
00:36:31.145 [Pipeline] }
00:36:31.158 [Pipeline] // timeout
00:36:31.162 [Pipeline] }
00:36:31.175 [Pipeline] // catchError
00:36:31.178 [Pipeline] }
00:36:31.188 [Pipeline] // wrap
00:36:31.192 [Pipeline] }
00:36:31.201 [Pipeline] // catchError
00:36:31.209 [Pipeline] stage
00:36:31.211 [Pipeline] { (Epilogue)
00:36:31.222 [Pipeline] catchError
00:36:31.224 [Pipeline] {
00:36:31.235 [Pipeline] echo
00:36:31.236 Cleanup processes
00:36:31.241 [Pipeline] sh
00:36:31.520 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:36:31.520 3467236 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:36:31.531 [Pipeline] sh
00:36:31.808 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:36:31.808 ++ grep -v 'sudo pgrep'
00:36:31.808 ++ awk '{print $1}'
00:36:31.808 + sudo kill -9
00:36:31.808 + true
00:36:31.820 [Pipeline] sh
00:36:32.098 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:36:32.098 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,721 MiB
00:36:40.199 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,721 MiB
00:36:42.748 [Pipeline] sh
00:36:43.029 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:36:43.029 Artifacts sizes are good
00:36:43.044 [Pipeline] archiveArtifacts
00:36:43.051 Archiving artifacts
00:36:43.217 [Pipeline] sh
00:36:43.500 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:36:43.515 [Pipeline] cleanWs
00:36:43.526 [WS-CLEANUP] Deleting project workspace...
00:36:43.526 [WS-CLEANUP] Deferred wipeout is used...
00:36:43.532 [WS-CLEANUP] done
00:36:43.534 [Pipeline] }
00:36:43.555 [Pipeline] // catchError
00:36:43.566 [Pipeline] sh
00:36:43.845 + logger -p user.info -t JENKINS-CI
00:36:43.855 [Pipeline] }
00:36:43.871 [Pipeline] // stage
00:36:43.877 [Pipeline] }
00:36:43.894 [Pipeline] // node
00:36:43.900 [Pipeline] End of Pipeline
00:36:43.932 Finished: SUCCESS